CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide (2nd Edition) (Certification Guide) [2 ed.] 9780136747161

CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide is a best-of-breed exam study guide from expert technology instructor Troy McMillan.


Language: English · Pages: 560 [1244] · Year: 2020


Table of Contents:
About This eBook
Title Page
Copyright Page
Contents at a Glance
Table of Contents
About the Author
Dedication
Acknowledgments
About the Technical Reviewer
We Want to Hear from You!
Reader Services
Introduction
Goals and Methods
Who Should Read This Book?
Strategies for Exam Preparation
How the Book Is Organized
Book Features
What’s New?
The Companion Website for Online Content Review
How to Access the Pearson Test Prep Practice Test Software
Customizing Your Exams
Credits
Chapter 1 The Importance of Threat Data and Intelligence
“Do I Know This Already?” Quiz
Foundation Topics
Intelligence Sources
Indicator Management
Threat Classification
Threat Actors
Intelligence Cycle
Commodity Malware
Information Sharing and Analysis Communities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 2 Utilizing Threat Intelligence to Support Organizational Security
“Do I Know This Already?” Quiz
Foundation Topics
Attack Frameworks
Threat Research
Threat Modeling Methodologies
Threat Intelligence Sharing with Supported Functions
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 3 Vulnerability Management Activities
“Do I Know This Already?” Quiz
Foundation Topics
Vulnerability Identification
Validation
Remediation/Mitigation
Scanning Parameters and Criteria
Inhibitors to Remediation
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 4 Analyzing Assessment Output
“Do I Know This Already?” Quiz
Foundation Topics
Web Application Scanner
Infrastructure Vulnerability Scanner
Software Assessment Tools and Techniques
Enumeration
Wireless Assessment Tools
Cloud Infrastructure Assessment Tools
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 5 Threats and Vulnerabilities Associated with Specialized Technology
“Do I Know This Already?” Quiz
Foundation Topics
Mobile
Internet of Things (IoT)
Embedded Systems
Real-Time Operating System (RTOS)
System-on-Chip (SoC)
Field Programmable Gate Array (FPGA)
Physical Access Control
Building Automation Systems
Vehicles and Drones
Workflow and Process Automation Systems
Incident Command System (ICS)
Supervisory Control and Data Acquisition (SCADA)
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 6 Threats and Vulnerabilities Associated with Operating in the Cloud
“Do I Know This Already?” Quiz
Foundation Topics
Cloud Deployment Models
Cloud Service Models
Function as a Service (FaaS)/Serverless Architecture
Infrastructure as Code (IaC)
Insecure Application Programming Interface (API)
Improper Key Management
Unprotected Storage
Logging and Monitoring
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 7 Implementing Controls to Mitigate Attacks and Software Vulnerabilities
“Do I Know This Already?” Quiz
Foundation Topics
Attack Types
Vulnerabilities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 8 Security Solutions for Infrastructure Management
“Do I Know This Already?” Quiz
Foundation Topics
Cloud vs. On-premises
Asset Management
Segmentation
Network Architecture
Change Management
Virtualization
Containerization
Identity and Access Management
Cloud Access Security Broker (CASB)
Honeypot
Monitoring and Logging
Encryption
Certificate Management
Active Defense
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 9 Software Assurance Best Practices
“Do I Know This Already?” Quiz
Foundation Topics
Platforms
Software Development Life Cycle (SDLC) Integration
DevSecOps
Software Assessment Methods
Secure Coding Best Practices
Static Analysis Tools
Dynamic Analysis Tools
Formal Methods for Verification of Critical Software
Service-Oriented Architecture
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 10 Hardware Assurance Best Practices
“Do I Know This Already?” Quiz
Foundation Topics
Hardware Root of Trust
eFuse
Unified Extensible Firmware Interface (UEFI)
Trusted Foundry
Secure Processing
Anti-Tamper
Self-Encrypting Drives
Trusted Firmware Updates
Measured Boot and Attestation
Bus Encryption
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 11 Analyzing Data as Part of Security Monitoring Activities
“Do I Know This Already?” Quiz
Foundation Topics
Heuristics
Trend Analysis
Endpoint
Network
Log Review
Impact Analysis
Security Information and Event Management (SIEM) Review
Query Writing
E-mail Analysis
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 12 Implementing Configuration Changes to Existing Controls to Improve Security
“Do I Know This Already?” Quiz
Foundation Topics
Permissions
Whitelisting and Blacklisting
Firewall
Intrusion Prevention System (IPS) Rules
Data Loss Prevention (DLP)
Endpoint Detection and Response (EDR)
Network Access Control (NAC)
Sinkholing
Malware Signatures
Sandboxing
Port Security
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 13 The Importance of Proactive Threat Hunting
“Do I Know This Already?” Quiz
Foundation Topics
Establishing a Hypothesis
Profiling Threat Actors and Activities
Threat Hunting Tactics
Reducing the Attack Surface Area
Bundling Critical Assets
Attack Vectors
Integrated Intelligence
Improving Detection Capabilities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 14 Automation Concepts and Technologies
“Do I Know This Already?” Quiz
Foundation Topics
Workflow Orchestration
Scripting
Application Programming Interface (API) Integration
Automated Malware Signature Creation
Data Enrichment
Threat Feed Combination
Machine Learning
Use of Automation Protocols and Standards
Continuous Integration
Continuous Deployment/Delivery
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 15 The Incident Response Process
“Do I Know This Already?” Quiz
Foundation Topics
Communication Plan
Response Coordination with Relevant Entities
Factors Contributing to Data Criticality
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 16 Applying the Appropriate Incident Response Procedure
“Do I Know This Already?” Quiz
Foundation Topics
Preparation
Detection and Analysis
Containment
Eradication and Recovery
Post-Incident Activities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 17 Analyzing Potential Indicators of Compromise
“Do I Know This Already?” Quiz
Foundation Topics
Network-Related Indicators of Compromise
Host-Related Indicators of Compromise
Application-Related Indicators of Compromise
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 18 Utilizing Basic Digital Forensics Techniques
“Do I Know This Already?” Quiz
Foundation Topics
Network
Endpoint
Mobile
Cloud
Virtualization
Legal Hold
Procedures
Hashing
Carving
Data Acquisition
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 19 The Importance of Data Privacy and Protection
“Do I Know This Already?” Quiz
Foundation Topics
Privacy vs. Security
Non-technical Controls
Technical Controls
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 20 Applying Security Concepts in Support of Organizational Risk Mitigation
“Do I Know This Already?” Quiz
Foundation Topics
Business Impact Analysis
Risk Identification Process
Risk Calculation
Communication of Risk Factors
Risk Prioritization
Systems Assessment
Documented Compensating Controls
Training and Exercises
Supply Chain Assessment
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 21 The Importance of Frameworks, Policies, Procedures, and Controls
“Do I Know This Already?” Quiz
Foundation Topics
Frameworks
Policies and Procedures
Category
Control Type
Audits and Assessments
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 22 Final Preparation
Exam Information
Getting Ready
Tools for Final Preparation
Suggested Plan for Final Review/Study
Summary
Appendix A Answers to the “Do I Know This Already?” Quizzes and Review Questions
Appendix B CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide Exam Updates
Always Get the Latest at the Book’s Product Page
Technical Content
Glossary of Key Terms
Index
Appendix C Memory Tables
Appendix D Memory Tables Answer Key
Appendix E Study Planner
Where are the companion content files? - Register
Inside Front Cover
Inside Back Cover
Code Snippets


About This eBook

ePUB is an open, industry-standard format for eBooks. However, support of ePUB and its many features varies across reading devices and applications. Use your device or app settings to customize the presentation to your liking. Settings that you can customize often include font, font size, single or double column, landscape or portrait mode, and figures that you can click or tap to enlarge. For additional information about the settings and features on your reading device or app, visit the device manufacturer’s Web site.

Many titles include programming code or configuration examples. To optimize the presentation of these elements, view the eBook in single-column, landscape mode and adjust the font size to the smallest setting. In addition to presenting code and configurations in the reflowable text format, we have included images of the code that mimic the presentation found in the print book; therefore, where the reflowable format may compromise the presentation of the code listing, you will see a “Click here to view code image” link. Click the link to view the print-fidelity code image. To return to the previous page viewed, click the Back button on your device or app.

CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide

Troy McMillan

CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide

Copyright © 2021 by Pearson Education, Inc.
Hoboken, New Jersey

All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained herein. Although every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use of the information contained herein.

ISBN-13: 978-0-13-674716-1
ISBN-10: 0-13-674716-7

Library of Congress Control Number: 2020941742

ScoutAutomatedPrintCode

Trademarks

All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Pearson IT Certification cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

Warning and Disclaimer

Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied. The information provided is on an “as is” basis. The author and the publisher shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book.

Special Sales

For information about buying this title in bulk quantities, or for special sales opportunities (which may include electronic versions; custom cover designs; and content particular to your business, training goals, marketing focus, or branding interests), please contact our corporate sales department at [email protected] or (800) 382-3419.

For government sales inquiries, please contact [email protected].

For questions about sales outside the U.S., please contact [email protected].

Editor-in-Chief: Mark Taub
Product Line Manager: Brett Bartow
Executive Editor: Nancy Davis
Development Editor: Christopher Cleveland
Managing Editor: Sandra Schroeder
Senior Project Editor: Tonya Simpson
Copy Editor: Bill McManus
Indexer: Erika Millen
Proofreader: Abigail Manheim
Technical Editor: Chris Crayton
Editorial Assistant: Cindy Teeters
Cover Designer: Chuti Prasertsith
Compositor: codeMantra

Contents at a Glance

Introduction
CHAPTER 1 The Importance of Threat Data and Intelligence
CHAPTER 2 Utilizing Threat Intelligence to Support Organizational Security
CHAPTER 3 Vulnerability Management Activities
CHAPTER 4 Analyzing Assessment Output
CHAPTER 5 Threats and Vulnerabilities Associated with Specialized Technology
CHAPTER 6 Threats and Vulnerabilities Associated with Operating in the Cloud
CHAPTER 7 Implementing Controls to Mitigate Attacks and Software Vulnerabilities
CHAPTER 8 Security Solutions for Infrastructure Management
CHAPTER 9 Software Assurance Best Practices
CHAPTER 10 Hardware Assurance Best Practices
CHAPTER 11 Analyzing Data as Part of Security Monitoring Activities
CHAPTER 12 Implementing Configuration Changes to Existing Controls to Improve Security
CHAPTER 13 The Importance of Proactive Threat Hunting
CHAPTER 14 Automation Concepts and Technologies
CHAPTER 15 The Incident Response Process
CHAPTER 16 Applying the Appropriate Incident Response Procedure
CHAPTER 17 Analyzing Potential Indicators of Compromise
CHAPTER 18 Utilizing Basic Digital Forensics Techniques
CHAPTER 19 The Importance of Data Privacy and Protection
CHAPTER 20 Applying Security Concepts in Support of Organizational Risk Mitigation
CHAPTER 21 The Importance of Frameworks, Policies, Procedures, and Controls
CHAPTER 22 Final Preparation
APPENDIX A Answers to the “Do I Know This Already?” Quizzes and Review Questions
APPENDIX B CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide Exam Updates
Glossary of Key Terms
Index

Online Elements:
APPENDIX C Memory Tables
APPENDIX D Memory Tables Answer Key
APPENDIX E Study Planner
Glossary of Key Terms

Table of Contents
Introduction
Chapter 1 The Importance of Threat Data and Intelligence
“Do I Know This Already?” Quiz
Foundation Topics
Intelligence Sources
Open-Source Intelligence
Proprietary/Closed-Source Intelligence
Timeliness
Relevancy
Confidence Levels
Accuracy
Indicator Management
Structured Threat Information eXpression (STIX)
Trusted Automated eXchange of Indicator Information (TAXII)
OpenIOC
Threat Classification
Known Threat vs. Unknown Threat
Zero-day
Advanced Persistent Threat
Threat Actors
Nation-state
Organized Crime
Terrorist Groups
Hacktivist
Insider Threat
Intentional
Unintentional
Intelligence Cycle
Commodity Malware
Information Sharing and Analysis Communities
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 2 Utilizing Threat Intelligence to Support Organizational Security
“Do I Know This Already?” Quiz
Foundation Topics
Attack Frameworks
MITRE ATT&CK
The Diamond Model of Intrusion Analysis
Kill Chain
Threat Research
Reputational
Behavioral
Indicator of Compromise (IoC)
Common Vulnerability Scoring System (CVSS)
Threat Modeling Methodologies
Adversary Capability
Total Attack Surface
Attack Vector
Impact
Probability
Threat Intelligence Sharing with Supported Functions
Incident Response
Vulnerability Management
Risk Management
Security Engineering
Detection and Monitoring
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 3 Vulnerability Management Activities
“Do I Know This Already?” Quiz
Foundation Topics
Vulnerability Identification
Asset Criticality
Active vs. Passive Scanning
Mapping/Enumeration
Validation
Remediation/Mitigation
Configuration Baseline
Patching
Hardening
Compensating Controls
Risk Acceptance
Verification of Mitigation
Scanning Parameters and Criteria
Risks Associated with Scanning Activities
Vulnerability Feed
Scope
Credentialed vs. Non-credentialed
Server-based vs. Agent-based
Internal vs. External
Special Considerations
Types of Data
Technical Constraints
Workflow
Sensitivity Levels
Regulatory Requirements
Segmentation
Intrusion Prevention System (IPS), Intrusion Detection System (IDS), and Firewall Settings
Firewall
Inhibitors to Remediation
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 4 Analyzing Assessment Output
“Do I Know This Already?” Quiz
Foundation Topics
Web Application Scanner
Burp Suite
OWASP Zed Attack Proxy (ZAP)
Nikto
Arachni
Infrastructure Vulnerability Scanner
Nessus
OpenVAS

Software Assessment Tools and Techniques
Static Analysis
Dynamic Analysis
Reverse Engineering
Fuzzing
Enumeration
Nmap
Host Scanning
hping
Active vs. Passive
Responder
Wireless Assessment Tools
Aircrack-ng
Reaver
oclHashcat
Cloud Infrastructure Assessment Tools
ScoutSuite
Prowler
Pacu
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 5 Threats and Vulnerabilities Associated with Specialized Technology
“Do I Know This Already?” Quiz
Foundation Topics
Mobile
Unsigned Apps/System Apps
Security Implications/Privacy Concerns
Data Storage
Nonremovable Storage
Removable Storage
Transfer/Back Up Data to Uncontrolled Storage
USB OTG
Device Loss/Theft
Rooting/Jailbreaking
Push Notification Services
Geotagging
OEM/Carrier Android Fragmentation
Mobile Payment
NFC Enabled
Inductance Enabled
Mobile Wallet
Peripheral-Enabled Payments (Credit Card Reader)
USB
Malware
Unauthorized Domain Bridging
SMS/MMS/Messaging
Internet of Things (IoT)
IoT Examples
Methods of Securing IoT Devices
Embedded Systems
Real-Time Operating System (RTOS)
System-on-Chip (SoC)
Field Programmable Gate Array (FPGA)
Physical Access Control Systems
Devices
Facilities
Building Automation Systems
IP Video
HVAC Controllers
Sensors
Vehicles and Drones
CAN Bus
Drones
Workflow and Process Automation Systems
Industrial Control System (ICS)
Supervisory Control and Data Acquisition (SCADA)
Modbus
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 6 Threats and Vulnerabilities Associated with Operating in the Cloud
“Do I Know This Already?” Quiz
Foundation Topics
Cloud Deployment Models
Cloud Service Models
Function as a Service (FaaS)/Serverless Architecture
Infrastructure as Code (IaC)
Insecure Application Programming Interface (API)
Improper Key Management
Key Escrow
Key Stretching
Unprotected Storage
Transfer/Back Up Data to Uncontrolled Storage
Big Data
Logging and Monitoring
Insufficient Logging and Monitoring
Inability to Access
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 7 Implementing Controls to Mitigate Attacks and Software Vulnerabilities
“Do I Know This Already?” Quiz
Foundation Topics
Attack Types
Extensible Markup Language (XML) Attack
Structured Query Language (SQL) Injection
Overflow Attacks
Buffer
Integer Overflow
Heap
Remote Code Execution
Directory Traversal
Privilege Escalation
Password Spraying
Credential Stuffing
Impersonation
Man-in-the-Middle Attack
VLAN-based Attacks
Session Hijacking

Rootkit
Cross-Site Scripting
Reflected
Persistent
Document Object Model (DOM)
Vulnerabilities
Improper Error Handling
Dereferencing
Insecure Object Reference
Race Condition
Broken Authentication
Sensitive Data Exposure
Insecure Components
Code Reuse
Insufficient Logging and Monitoring
Weak or Default Configurations
Use of Insecure Functions
strcpy
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 8 Security Solutions for Infrastructure Management
“Do I Know This Already?” Quiz
Foundation Topics
Cloud vs. On-premises
Cloud Mitigations
Asset Management
Asset Tagging
Device-Tracking Technologies
Geolocation/GPS Location
Object-Tracking and Object-Containment Technologies
Geotagging/Geofencing
RFID
Segmentation
Physical
LAN
Intranet
Extranet
DMZ
Virtual
Jumpbox
System Isolation
Air Gap
Network Architecture
Physical
Firewall Architecture
Software-Defined Networking
Virtual SAN
Virtual Private Cloud (VPC)
Virtual Private Network (VPN)
IPsec
SSL/TLS
Serverless
Change Management
Virtualization
Security Advantages and Disadvantages of Virtualization
Type 1 vs. Type 2 Hypervisors
Virtualization Attacks and Vulnerabilities
Virtual Networks
Management Interface
Vulnerabilities Associated with a Single Physical Server Hosting Multiple Companies’ Virtual Machines
Vulnerabilities Associated with a Single Platform Hosting Multiple Companies’ Virtual Machines
Virtual Desktop Infrastructure (VDI)
Terminal Services/Application Delivery Services
Containerization
Identity and Access Management
Identify Resources
Identify Users
Identify Relationships Between Resources and Users
Privilege Management
Multifactor Authentication (MFA)
Authentication
Authentication Factors
Knowledge Factors
Ownership Factors
Characteristic Factors
Single Sign-On (SSO)
Kerberos
Active Directory
SESAME
Federation
XACML
SPML
SAML
OpenID
Shibboleth
Role-Based Access Control
Attribute-Based Access Control
Mandatory Access Control
Manual Review
Cloud Access Security Broker (CASB)
Honeypot
Monitoring and Logging
Log Management
Audit Reduction Tools
NIST SP 800-137
Encryption
Cryptographic Types
Symmetric Algorithms
Asymmetric Algorithms
Hybrid Encryption
Hashing Functions
One-way Hash
Message Digest Algorithm
Secure Hash Algorithm
Transport Encryption
SSL/TLS
HTTP/HTTPS/SHTTP
SSH

IPsec
Certificate Management
Certificate Authority and Registration Authority
Certificates
Certificate Revocation List
OCSP
PKI Steps
Cross-Certification
Digital Signatures
Active Defense
Hunt Teaming
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 9 Software Assurance Best Practices
“Do I Know This Already?” Quiz
Foundation Topics
Platforms
Mobile
Containerization
Configuration Profiles and Payloads
Personally Owned, Corporate Enabled
Corporate-Owned, Personally Enabled
Application Wrapping
Application, Content, and Data Management
Remote Wiping
SCEP
NIST SP 800-163 Rev 1
Web Application
Maintenance Hooks
Time-of-Check/Time-of-Use Attacks
Cross-Site Request Forgery (CSRF)
Click-Jacking
Client/Server
Embedded
Hardware/Embedded Device Analysis
System-on-Chip (SoC)
Secure Booting
Central Security Breach Response
Firmware
Software Development Life Cycle (SDLC) Integration
Step 1: Plan/Initiate Project
Step 2: Gather Requirements
Step 3: Design
Step 4: Develop
Step 5: Test/Validate
Step 6: Release/Maintain
Step 7: Certify/Accredit
Step 8: Change Management and Configuration Management/Replacement
DevSecOps
DevOps
Software Assessment Methods
User Acceptance Testing
Stress Test Application
Security Regression Testing
Code Review
Security Testing
Code Review Process
Secure Coding Best Practices
Input Validation
Output Encoding
Session Management
Authentication
Context-based Authentication
Network Authentication Methods
IEEE 802.1X
Biometric Considerations
Certificate-Based Authentication
Data Protection
Parameterized Queries
Static Analysis Tools
Dynamic Analysis Tools
Formal Methods for Verification of Critical Software
Service-Oriented Architecture
Security Assertions Markup Language (SAML)
Simple Object Access Protocol (SOAP)
Representational State Transfer (REST)
Microservices
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 10 Hardware Assurance Best Practices
“Do I Know This Already?” Quiz
Foundation Topics
Hardware Root of Trust
Trusted Platform Module (TPM)
Virtual TPM
Hardware Security Module (HSM)
MicroSD HSM
eFuse
Unified Extensible Firmware Interface (UEFI)
Trusted Foundry
Secure Processing
Trusted Execution
Secure Enclave
Processor Security Extensions
Atomic Execution
Anti-Tamper
Self-Encrypting Drives
Trusted Firmware Updates
Measured Boot and Attestation
Measured Launch
Integrity Measurement Architecture
Bus Encryption
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 11 Analyzing Data as Part of Security Monitoring Activities
“Do I Know This Already?” Quiz
Foundation Topics
Heuristics
Trend Analysis

Endpoint
Malware
Virus
Worm
Trojan Horse
Logic Bomb
Spyware/Adware
Botnet
Rootkit
Ransomware
Reverse Engineering
Memory
Memory Protection
Secured Memory
Runtime Data Integrity Check
Memory Dumping, Runtime Debugging
System and Application Behavior
Known-good Behavior
Anomalous Behavior
Exploit Techniques
File System
File Integrity Monitoring
User and Entity Behavior Analytics (UEBA)
Network
Uniform Resource Locator (URL) and Domain Name System (DNS) Analysis
DNS Analysis
Domain Generation Algorithm
Flow Analysis
NetFlow Analysis
Packet and Protocol Analysis
Packet Analysis
Protocol Analysis
Malware
Log Review
Event Logs
Syslog
Kiwi Syslog Server
Firewall Logs
Windows Defender
Cisco
Check Point
Web Application Firewall (WAF)
Proxy
Intrusion Detection System (IDS)/Intrusion Prevention System (IPS)
Sourcefire
Snort
Zeek
HIPS
Impact Analysis
Organization Impact vs. Localized Impact
Immediate Impact vs. Total Impact
Security Information and Event Management (SIEM) Review
Rule Writing
Known-Bad Internet Protocol (IP)
Dashboard
Query Writing
String Search
Script
Piping
E-mail Analysis
E-mail Spoofing
Malicious Payload
DomainKeys Identified Mail (DKIM)
Sender Policy Framework (SPF)
Domain-based Message Authentication, Reporting, and Conformance (DMARC)
Phishing
Spear Phishing
Whaling
Forwarding
Digital Signature
E-mail Signature Block
Embedded Links
Impersonation
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 12 Implementing Configuration Changes to Existing Controls to Improve Security
“Do I Know This Already?” Quiz
Foundation Topics
Permissions
Whitelisting and Blacklisting
Application Whitelisting and Blacklisting
Input Validation
Firewall
NextGen Firewalls
Host-Based Firewalls
Intrusion Prevention System (IPS) Rules
Data Loss Prevention (DLP)
Endpoint Detection and Response (EDR)
Network Access Control (NAC)
Quarantine/Remediation
Agent-Based vs. Agentless NAC
802.1X
Sinkholing
Malware Signatures
Development/Rule Writing
Sandboxing
Port Security
Limiting MAC Addresses
Implementing Sticky MAC
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 13 The Importance of Proactive Threat Hunting
“Do I Know This Already?” Quiz
Foundation Topics
Establishing a Hypothesis
Profiling Threat Actors and Activities
Threat Hunting Tactics
Hunt Teaming

Threat Model
Executable Process Analysis
Memory Consumption
Reducing the Attack Surface Area
System Hardening
Configuration Lockdown
Bundling Critical Assets
Commercial Business Classifications
Military and Government Classifications
Distribution of Critical Assets
Attack Vectors
Integrated Intelligence
Improving Detection Capabilities
Continuous Improvement
Continuous Monitoring
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 14 Automation Concepts and Technologies
“Do I Know This Already?” Quiz
Foundation Topics
Workflow Orchestration
Scripting
Application Programming Interface (API) Integration
Automated Malware Signature Creation
Data Enrichment
Threat Feed Combination
Machine Learning
Use of Automation Protocols and Standards
Security Content Automation Protocol (SCAP)
Continuous Integration
Continuous Deployment/Delivery
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 15 The Incident Response Process
“Do I Know This Already?” Quiz
Foundation Topics
Communication Plan
Limiting Communication to Trusted Parties
Disclosing Based on Regulatory/Legislative Requirements
Preventing Inadvertent Release of Information
Using a Secure Method of Communication
Reporting Requirements
Response Coordination with Relevant Entities
Legal
Human Resources
Public Relations
Internal and External
Law Enforcement
Senior Leadership
Regulatory Bodies
Factors Contributing to Data Criticality
Personally Identifiable Information (PII)
Personal Health Information (PHI)
Sensitive Personal Information (SPI)
High Value Assets
Financial Information
Intellectual Property
Patent
Trade Secret
Trademark
Copyright
Securing Intellectual Property
Corporate Information
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 16 Applying the Appropriate Incident Response Procedure
“Do I Know This Already?” Quiz
Foundation Topics
Preparation
Training
Testing
Documentation of Procedures
Detection and Analysis
Characteristics Contributing to Severity Level Classification
Downtime and Recovery Time
Data Integrity
Economic
System Process Criticality
Reverse Engineering
Data Correlation
Containment
Segmentation
Isolation
Eradication and Recovery
Vulnerability Mitigation
Sanitization
Reconstruction/Reimaging
Secure Disposal
Patching
Restoration of Permissions
Reconstitution of Resources
Restoration of Capabilities and Services
Verification of Logging/Communication to Security Monitoring
Post-Incident Activities
Evidence Retention
Lessons Learned Report
Change Control Process
Incident Response Plan Update
Incident Summary Report
Indicator of Compromise (IoC) Generation
Monitoring
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 17 Analyzing Potential Indicators of Compromise
“Do I Know This Already?” Quiz
Foundation Topics

Network-Related Indicators of Compromise
Bandwidth Consumption
Beaconing
Irregular Peer-to-Peer Communication
Rogue Device on the Network
Scan/Sweep
Unusual Traffic Spike
Common Protocol over Non-standard Port
Host-Related Indicators of Compromise
Processor Consumption
Memory Consumption
Drive Capacity Consumption
Unauthorized Software
Malicious Process
Unauthorized Change
Unauthorized Privilege
Data Exfiltration
Abnormal OS Process Behavior
File System Change or Anomaly
Registry Change or Anomaly
Unauthorized Scheduled Task
Application-Related Indicators of Compromise
Anomalous Activity
Introduction of New Accounts
Unexpected Output
Unexpected Outbound Communication
Service Interruption
Application Log
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 18 Utilizing Basic Digital Forensics Techniques
“Do I Know This Already?” Quiz
Foundation Topics
Network
Wireshark
tcpdump
Endpoint
Disk
FTK
Helix3
Password Cracking
Imaging
Memory
Mobile
Cloud
Virtualization
Legal Hold
Procedures
EnCase Forensic
Sysinternals
Forensic Investigation Suite
Hashing
Hashing Utilities
Changes to Binaries
Carving
Data Acquisition
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 19 The Importance of Data Privacy and Protection
“Do I Know This Already?” Quiz
Foundation Topics
Privacy vs. Security
Non-technical Controls
Classification
Ownership
Retention
Data Types
Personally Identifiable Information (PII)
Personal Health Information (PHI)
Payment Card Information
Retention Standards
Confidentiality
Legal Requirements
Data Sovereignty
Data Minimization
Purpose Limitation
Non-disclosure agreement (NDA)
Technical Controls
Encryption
Data Loss Prevention (DLP)
Data Masking
Deidentification
Tokenization
Digital Rights Management (DRM)
Document DRM
Music DRM
Movie DRM
Video Game DRM
E-Book DRM
Watermarking
Geographic Access Requirements
Access Controls
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 20 Applying Security Concepts in Support of Organizational Risk Mitigation
“Do I Know This Already?” Quiz
Foundation Topics
Business Impact Analysis
Identify Critical Processes and Resources
Identify Outage Impacts and Estimate Downtime
Identify Resource Requirements
Identify Recovery Priorities
Recoverability
Fault Tolerance
Risk Identification Process
Make Risk Determination Based upon Known Metrics
Qualitative Risk Analysis
Quantitative Risk Analysis

Risk Calculation
Probability
Magnitude
Communication of Risk Factors
Risk Prioritization
Security Controls
Engineering Tradeoffs
MOUs
SLAs
Organizational Governance
Business Process Interruption
Degrading Functionality
Systems Assessment
ISO/IEC 27001
ISO/IEC 27002
Documented Compensating Controls
Training and Exercises
Red Team
Blue Team
White Team
Tabletop Exercise
Supply Chain Assessment
Vendor Due Diligence
OEM Documentation
Hardware Source Authenticity
Trusted Foundry
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 21 The Importance of Frameworks, Policies, Procedures, and Controls
“Do I Know This Already?” Quiz
Foundation Topics
Frameworks
Risk-Based Frameworks
National Institute of Standards and Technology (NIST)
COBIT
The Open Group Architecture Framework (TOGAF)
Prescriptive Frameworks
NIST Cybersecurity Framework Version 1.1
ISO 27000 Series
SABSA
ITIL
Maturity Models
ISO/IEC 27001
Policies and Procedures
Code of Conduct/Ethics
Acceptable Use Policy (AUP)
Password Policy
Data Ownership
Data Retention
Account Management
Continuous Monitoring
Work Product Retention
Category
Managerial
Operational
Technical
Control Type
Preventative
Detective
Corrective
Deterrent
Directive
Physical
Audits and Assessments
Regulatory Compliance
Exam Preparation Tasks
Review All Key Topics
Define Key Terms
Review Questions
Chapter 22 Final Preparation
Exam Information
Getting Ready
Tools for Final Preparation
Pearson Test Prep Practice Test Software and Questions on the Website
Memory Tables
Chapter-Ending Review Tools
Suggested Plan for Final Review/Study
Summary
Appendix A Answers to the “Do I Know This Already?” Quizzes and Review Questions
Appendix B CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide Exam Updates
Glossary of Key Terms
Index
Online Elements:
Appendix C Memory Tables
Appendix D Memory Tables Answer Key
Appendix E Study Planner
Glossary of Key Terms

About the Author
Troy McMillan is a product developer and technical editor for Kaplan IT as well as a full-time trainer. He became a professional trainer 20 years ago, teaching Cisco, Microsoft, CompTIA, and wireless classes. He has written or contributed to more than a dozen projects, including the following recent ones:
Contributing subject matter expert for CCNA Cisco Certified Network Associate Certification Exam Preparation Guide (Kaplan)
Author of CISSP Cert Guide (Pearson)
Prep test question writer for CCNA Wireless 640-722 Official Cert Guide (Cisco Press)
Author of CompTIA Advanced Security Practitioner (CASP) Cert Guide (Pearson)

Troy has also appeared in the following training videos for OnCourse Learning: Security+; Network+; Microsoft 70-410, 411, and 412 exam prep; ICND1; and ICND2. He delivers CISSP training classes for CyberVista, and is an authorized online training provider for (ISC)2. Troy also creates certification practice tests and study guides for CyberVista. He lives in Asheville, North Carolina, with his wife, Heike.

Dedication
I dedicate this book to my wife, Heike, who has supported me when I needed it most.

Acknowledgments
I must thank everyone on the Pearson team for all of their help in making this book better than it would otherwise have been. That includes Chris Cleveland, Nancy Davis, Chris Crayton, Tonya Simpson, and Mudita Sonar.

About the Technical Reviewer
Chris Crayton (MCSE) is an author, technical consultant, and trainer. He has worked as a computer technology and networking instructor, information security director, network administrator, network engineer, and PC specialist. Chris has authored several print and online books on PC repair, CompTIA A+, CompTIA Security+, and Microsoft Windows. He has also served as technical editor and content contributor on numerous technical titles for several of the leading publishing companies. He holds numerous industry certifications, has been recognized with many professional teaching awards, and has served as a state-level SkillsUSA competition judge.

We Want to Hear from You!
As the reader of this book, you are our most important critic and commentator. We value your opinion and want to know what we’re doing right, what we could do better, what areas you’d like to see us publish in, and any other words of wisdom you’re willing to pass our way.
We welcome your comments. You can email us to let us know what you did or didn’t like about this book—as well as what we can do to make our books better. Please note that we cannot help you with technical problems related to the topic of this book.
When you write, please be sure to include this book’s title and author as well as your name and email address. We will carefully review your comments and share them with the author and editors who worked on the book.
Email:

[email protected]

Reader Services
Register your copy of CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide at www.pearsonitcertification.com for convenient access to downloads, updates, and corrections as they become available. To start the registration process, go to www.pearsonitcertification.com/register and log in or create an account*. Enter the product ISBN 9780136747161 and click Submit. When the process is complete, you will find any available bonus content under Registered Products.

*Be sure to check the box that you would like to hear from us to receive exclusive discounts on future editions of this product.

Introduction
CompTIA CySA+ bridges the skills gap between CompTIA Security+ and CompTIA Advanced Security Practitioner (CASP+). Building on CySA+, IT professionals can pursue CASP+ to prove their mastery of the hands-on cybersecurity skills required at the 5- to 10-year experience level. Earn the CySA+ certification to grow your career within the CompTIA recommended cybersecurity career pathway.
CompTIA CySA+ certification is designed to be a “vendor-neutral” exam that measures your knowledge of industry-standard technology.
GOALS AND METHODS
The number-one goal of this book is a simple one: to help you pass the 2020 version of the CompTIA CySA+ certification exam, CS0-002.
Because the CompTIA CySA+ certification exam stresses problem-solving abilities and reasoning more than memorization of terms and facts, this book is designed to help you master and understand the required objectives for each exam. To aid you in mastering and understanding the CySA+ certification objectives, this book uses the following methods:
The beginning of each chapter identifies the CompTIA CySA+ objective addressed in the chapter and defines the related topics covered in the chapter.
The body of the chapter explains the topics from a hands-on and theory-based standpoint. This includes in-depth descriptions, tables, and figures that are geared toward building your knowledge so that you can pass the exam. The structure of each chapter

generally follows the outline of the corresponding exam objective, which not only enables you to study the exam objectives methodically but also enables you to easily locate coverage of specific exam objectives that you think you need to review further.
Key Topic icons identify important figures, tables, and lists of information that you should know for the exam. Key topics are interspersed throughout the chapter and are listed in a table at the end of the chapter.
Key terms in each chapter are emphasized in bold italic and are listed without definitions at the end of each chapter. Write down the definition of each term and check your work against the complete key terms in the glossary.

WHO SHOULD READ THIS BOOK?
The CompTIA CySA+ exam is designed for IT security analysts, vulnerability analysts, and threat intelligence analysts. The exam certifies that a successful candidate has the knowledge and skills required to leverage intelligence and threat detection techniques, analyze and interpret data, identify and address vulnerabilities, suggest preventative measures, and effectively respond to and recover from incidents. The recommended experience for taking the CompTIA CySA+ exam includes Network+, Security+, or equivalent knowledge as well as a minimum of four years of hands-on information security or related experience.
This book is for you if you are attempting to attain a position in the cybersecurity field. It is also for you if you want to keep your skills sharp or perhaps retain your job due to a company policy that mandates that you update security skills. This book is also for you if you want to acquire additional certifications beyond Network+ or Security+. The book is designed to offer an easy transition to future certification studies.
STRATEGIES FOR EXAM PREPARATION

Strategies for exam preparation vary depending on your existing skills, knowledge, and equipment available. Of course, the ideal exam preparation would consist of three or four years of hands-on security or related experience followed by rigorous study of the exam objectives.
Before and after you have read through the book, have a look at the current exam objectives for the CompTIA CySA+ Certification Exam, listed at https://www.comptia.org/certifications/cybersecurityanalyst#examdetails. If there are any areas shown in the certification exam outline that you would still like to study, find those sections in the book and review them.
When you feel confident in your skills, attempt the practice exams found on the website that accompanies this book. As you work through the practice exams, note the areas where you lack confidence and review those concepts or configurations in the book. After you have reviewed those areas, work through the practice exams a second time and rate your skills. Keep in mind that the more you work through the practice exams, the more familiar the questions will become.
After you have worked through the practice exams a second time and feel confident in your skills, schedule the CompTIA CySA+ CS0-002 exam through Pearson VUE (https://home.pearsonvue.com). To prevent the information from evaporating out of your mind, you should typically take the exam within a week of when you consider yourself ready to take it.
The CompTIA CySA+ certification credential for those passing the certification exams is now valid for three years. To renew your certification without retaking the exam, you need to participate in continuing education (CE) activities and pay an annual maintenance fee of $50 (that is, $150 for three years).

See https://www.comptia.org/continuing-education/learn/ceprogram-fees for fee details. To learn more about the certification renewal policy, see https://certification.comptia.org/continuing-education.
HOW THE BOOK IS ORGANIZED
Table I-1 outlines where each of the CySA+ exam objectives is covered in the book. For a full dissection of what is covered in each objective, you should download the most recent set of objectives from https://www.comptia.org/certifications/cybersecurityanalyst#examdetails.
Table I-1 CySA+ CS0-002 Exam Objectives: Coverage by Chapter

Exam Objective: Chapter Where This Objective Is Covered

Domain 1.0 Threat and Vulnerability Management (accounts for 22% of the exam)
1.1 Explain the importance of threat data and intelligence: Chapter 1
1.2 Given a scenario, utilize threat intelligence to support organizational security: Chapter 2
1.3 Given a scenario, perform vulnerability management activities: Chapter 3
1.4 Given a scenario, analyze the output from common vulnerability assessment tools: Chapter 4
1.5 Explain the threats and vulnerabilities associated with specialized technology: Chapter 5
1.6 Explain the threats and vulnerabilities associated with operating in the cloud: Chapter 6
1.7 Given a scenario, implement controls to mitigate attacks and software vulnerabilities: Chapter 7

Domain 2.0 Software and Systems Security (accounts for 18% of the exam)
2.1 Given a scenario, apply security solutions for infrastructure management: Chapter 8
2.2 Explain software assurance best practices: Chapter 9
2.3 Explain hardware assurance best practices: Chapter 10

Domain 3.0 Security Operations and Monitoring (accounts for 25% of the exam)
3.1 Given a scenario, analyze data as part of security monitoring activities: Chapter 11
3.2 Given a scenario, implement configuration changes to existing controls to improve security: Chapter 12
3.3 Explain the importance of proactive threat hunting: Chapter 13
3.4 Compare and contrast automation concepts and technologies: Chapter 14

Domain 4.0 Incident Response (accounts for 22% of the exam)
4.1 Explain the importance of the incident response process: Chapter 15
4.2 Given a scenario, apply the appropriate incident response procedure: Chapter 16
4.3 Given an incident, analyze potential indicators of compromise: Chapter 17
4.4 Given a scenario, utilize basic digital forensics techniques: Chapter 18

Domain 5.0 Compliance and Assessment (accounts for 13% of the exam)
5.1 Understand the importance of data privacy and protection: Chapter 19
5.2 Given a scenario, apply security concepts in support of organizational risk mitigation: Chapter 20
5.3 Explain the importance of frameworks, policies, procedures, and controls: Chapter 21

BOOK FEATURES
To help you customize your study time using this book, the core chapters have several features that help you make the best use of your time:
Foundation Topics: These are the core sections of each chapter. They explain the concepts for the topics in that chapter.
Exam Preparation Tasks: After the “Foundation Topics” section of each chapter, the “Exam Preparation Tasks” section provides the following study activities that you should do to prepare for the exam:
Review All Key Topics: As previously mentioned, the Key Topic icon appears next to the most important items in the “Foundation Topics” section of the chapter. The Review All Key Topics activity lists the key topics from the chapter, along with their page numbers. Although the contents of the entire chapter could be on the exam, you should definitely know the information listed in each key topic, so you should review these.
Define Key Terms: Although the CySA+ exam might be unlikely to ask a question such as “Define this term,” the exam does require that you learn and know a lot of cybersecurity-related terminology. This section lists the most important terms from the chapter, asking you to write a short definition of each and compare your answer to the glossary entry at the end of the book.
Review Questions: Confirm that you understand the content that you just covered by answering these questions and reading the answer explanations.
Web-based practice exam: The companion website includes the Pearson Test Prep practice test software that enables you to take

practice exam questions. Use it to prepare with a sample exam and to pinpoint topics where you need more study.

WHAT’S NEW?
With every exam update, the relative emphasis on certain topics can change. Here is an overview of some of the most important changes:
Increased content on the importance of threat data and intelligence
Increased emphasis on regulatory compliance
Increased emphasis on the options and combinations available for any given command
Increased emphasis on identifying attacks through log analysis
Increased coverage of cloud security
Increased coverage of forming and using queries

THE COMPANION WEBSITE FOR ONLINE CONTENT REVIEW
All the electronic review elements, as well as other electronic components of the book, exist on this book’s companion website. To access the companion website, which gives you access to the electronic content that accompanies this book, start by establishing a login at www.pearsonITcertification.com and register your book. To do so, simply go to www.pearsonitcertification.com/register and enter the ISBN of the print book: 9780136747161. After you have registered your book, go to your account page and click the Registered Products tab. From there, click the Access Bonus Content link to get access to the book’s companion website.
Note that if you buy the Premium Edition eBook and Practice Test version of this book from Pearson, your book will

automatically be registered on your account page. Simply go to your account page, click the Registered Products tab, and select Access Bonus Content to access the book’s companion website. Please note that many of our companion content files can be very large, especially image and video files. If you are unable to locate the files for this title by following the steps at left, please visit www.pearsonITcertification.com/contact and select the Site Problems/Comments option. Our customer service representatives will assist you. HOW TO ACCESS THE PEARSON TEST PREP PRACTICE TEST SOFTWARE You have two options for installing and using the Pearson Test Prep practice test software: a web app and a desktop app. To use the Pearson Test Prep application, start by finding the registration code that comes with the book. You can find the code in these ways: Print book: Look in the cardboard sleeve in the back of the book for a piece of paper with your book’s unique PTP code. Premium Edition: If you purchase the Premium Edition eBook and Practice Test directly from the www.pearsonITcertification.com website, the code will be populated on your account page after purchase. Just log in to www.pearsonITcertification.com, click Account to see details of your account, and click the Digital Purchases tab. Amazon Kindle: For those who purchase a Kindle edition from Amazon, the access code will be supplied directly from Amazon. Other bookseller e-books: Note that if you purchase an e-book version from any other source, the practice test is not included because other vendors to date have not chosen to vend the required unique access code.

Note
Do not lose the activation code, because it is the only means by which you can access the QA content that comes with the book.

Once you have the access code, to find instructions about both the PTP web app and the desktop app, follow these steps:

Step 1. Open this book’s companion website.

Step 2. Click the Practice Exams button.

Step 3. Follow the instructions listed there both for installing the desktop app and for using the web app.

Note that if you want to use the web app only at this point, just navigate to www.pearsontestprep.com, establish a free login if you do not already have one, and register this book’s practice tests using the registration code you just found. The process should take only a couple of minutes.

Note
Amazon eBook (Kindle) customers: It is easy to miss Amazon’s e-mail that lists your PTP access code. Soon after you purchase the Kindle eBook, Amazon should send an e-mail. However, the e-mail uses very generic text and makes no specific mention of PTP or practice exams. To find your code, read every e-mail from Amazon after you purchase the book. Also do the usual checks for ensuring your e-mail arrives, such as checking your spam folder.

Note Other eBook customers: As of the time of publication, only the publisher and Amazon supply PTP access codes when you purchase their eBook editions of this book.

CUSTOMIZING YOUR EXAMS

Once you are in the exam settings screen, you can choose to take exams in one of three modes:

Study mode: Enables you to fully customize your exams and review answers as you are taking the exam. This is typically the mode you would use first to assess your knowledge and identify information gaps.

Practice Exam mode: Locks certain customization options because it presents a realistic exam experience. Use this mode when you are preparing to test your exam readiness.

Flash Card mode: Strips out the answers and presents you with only the question stem. This mode is great for late-stage preparation, when you really want to challenge yourself to provide answers without the benefit of seeing multiple-choice options. This mode does not provide the detailed score reports that the other two modes do, so you should not use it if you are trying to identify knowledge gaps.

In addition to these three modes, you will be able to select the source of your questions. You can choose to take exams that cover all of the chapters or you can narrow your selection to just a single chapter or the chapters that make up specific parts in the book. All chapters are selected by default. If you want to narrow your focus to individual chapters, simply deselect all the chapters and then select only those on which you wish to focus in the Objectives area. You can also select the exam banks on which to focus. Each exam bank comes complete with a full exam of questions that cover topics in every chapter. You can have the test engine serve up exams from all test banks or just from one individual bank by selecting the desired banks in the exam bank area. There are several other customizations you can make to your exam from the exam settings screen, such as the time of the exam, the number of questions served up, whether to randomize questions and answers, whether to show the number of correct answers for multiple-answer questions, and whether to serve up only specific types of questions. You can also create custom test banks by selecting only questions that you have marked or questions on which you have added notes.

Updating Your Exams If you are using the online version of the Pearson Test Prep software, you should always have access to the latest version of the software as well as the exam data. If you are using the Windows desktop version, every time you launch the software while connected to the Internet, it checks if there are any updates to your exam data and automatically downloads any changes that were made since the last time you used the software. Sometimes, due to many factors, the exam data might not fully download when you activate your exam. If you find that figures or exhibits are missing, you might need to manually update your exams. To update a particular exam you have already activated and downloaded, simply click the Tools tab and click the Update Products button. Again, this is only an issue with the desktop Windows application. If you wish to check for updates to the Pearson Test Prep exam engine software, Windows desktop version, simply click the Tools tab and click the Update Application button. This ensures that you are running the latest version of the software engine.

Credits

Cover image: New Africa/Shutterstock
Chapter opener image: Charlie Edwards/Photodisc/Getty Images
Figure 3-1 © Greenbone Networks GmbH
Figure 3-2 © Greenbone Networks GmbH
Figure 3-3 © 2020 Tenable, Inc
Figure 3-4 © 2020 Tenable, Inc
Figure 3-5 © 2020 Tenable, Inc
Figure 4-1 © Sarosys LLC 2010-2017
Figure 4-4 © Greenbone Networks GmbH
Figure 4-5 © Greenbone Networks GmbH
Quote, “the process of analyzing a subject system to identify the system’s components and their interrelationships, and to create representations of the system in another form or at a higher level of abstraction” © Institute of Electrical and Electronics Engineers (IEEE)
Figure 4-7 © Insecure.Com LLC
Figure 4-8 © Insecure.Com LLC
Figure 4-9 © Insecure.Com LLC
Figure 4-10 © Insecure.Com LLC
Figure 4-12 © 2020 KSEC
Figure 4-13 © 2009-2020 Aircrack-ng
Figure 4-14 © hashcat
Figure 4-15 © 2020 HACKING LAND
Figure 5-5 © U.S. Department of Health and Human Services
Figure 11-1 © 2020 Zoho Corp
Figure 11-5 © Microsoft 2020
Figure 11-8 © 2020 SolarWinds Worldwide, LLC
Figure 11-9 © Microsoft 2020
Figure 11-10 © 2020 SolarWinds Worldwide, LLC
Figure 11-11 © Microsoft 2020
Figure 11-13 © 2020 Cloudflare, Inc
Figure 11-14 © Microsoft 2020
Figure 11-15 © 2004-2018 Zentyal S.L.
Figure 11-17 © 1992-2020 Cisco
Figure 11-18 © 1992-2020 Cisco
Figure 11-19 © 2020 Apple Inc
Figure 11-20 © 2020 AT&T CYBERSECURITY
Figure 11-21 © 2005-2020 Splunk Inc.
Figure 13-3 © Microsoft 2020
Figure 13-4 © Microsoft 2020
Figure 17-1 © 2004-2020 Rob Dawson
Figure 17-4 © Microsoft 2020
Figure 17-5 © Microsoft 2020
Figure 18-1 © wireshark
Figure 18-2 © wireshark
Figure 18-3 © wireshark
Figure 18-4 © 2001-2014 Massimiliano Montoro
Figure 18-7 © Microsoft 2020
Figure 19-1 courtesy of Wikipedia

Chapter 1

The Importance of Threat Data and Intelligence

This chapter covers the following topics related to Objective 1.1 (Explain the importance of threat data and intelligence) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Intelligence sources: Examines open-source intelligence, proprietary/closed-source intelligence, timeliness, relevancy, and accuracy.

Confidence levels: Covers the importance of identifying levels of confidence in data.

Indicator management: Introduces Structured Threat Information eXpression (STIX), Trusted Automated eXchange of Indicator Information (TAXII), and OpenIOC.

Threat classification: Investigates known threats vs. unknown threats, zero-day threats, and advanced persistent threats.

Threat actors: Identifies actors such as nation-state, hacktivist, organized crime, and intentional and unintentional insider threats.

Intelligence cycle: Explains the requirements, collection, analysis, dissemination, and feedback stages.

Commodity malware: Describes the types of malware that commonly infect networks.

Information sharing and analysis communities: Discusses data sharing among members of healthcare, financial, aviation, government, and critical infrastructure communities.

When a war is fought, the gathering and processing of intelligence information is critical to the success of a campaign.

Likewise, when conducting the daily war that comprises the defense of an enterprise’s security, threat intelligence can be the difference between success and failure. This opening chapter discusses the types of threat intelligence, the sources and characteristics of such data, and common threat classification systems. This chapter also discusses the threat cycle, common malware, and systems of information sharing among enterprises.

“DO I KNOW THIS ALREADY?” QUIZ The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these seven self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 1-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A. Table 1-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section                        Question

Intelligence Sources                             1
Indicator Management                             2
Threat Classification                            3
Threat Actors                                    4
Intelligence Cycle                               5
Commodity Malware                                6
Information Sharing and Analysis Communities     7

Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following is an example of closed-source intelligence?
a. Internet blogs and discussion groups
b. Print and online media
c. Unclassified government data
d. Platforms maintained by private organizations

2. Which of the following is an application protocol for exchanging cyber threat information over HTTPS?
a. TAXII
b. STIX
c. OpenIOC
d. OSINT

3. Which of the following are threats discovered in live environments that have no current fix or patch?
a. Known threats
b. Zero-day threats
c. Unknown threats
d. Advanced persistent threats

4. Which of the following threat actors uses attacks as a means to get their message out and affect the businesses that they feel are detrimental to their cause?
a. Organized crime
b. Terrorist group
c. Hacktivist
d. Insider threat

5. In which stage of the intelligence cycle does most of the hard work occur?
a. Requirements
b. Collection
c. Dissemination
d. Analysis

6. Malware that is widely available either for purchase or by free download is called what?
a. Advanced
b. Commodity
c. Bulk
d. Proprietary

7. Which of the following information sharing and analysis communities is driven by the requirements of HIPAA?
a. H-ISAC
b. Financial Services Information Sharing and Analysis Center
c. Aviation Government Coordinating Council
d. ENISA

FOUNDATION TOPICS

INTELLIGENCE SOURCES Threat intelligence comes in many forms and can be obtained from a number of different sources. When gathering this critical data, the security professional should always classify the information with respect to its timeliness and relevancy. Let’s look at some types of threat intelligence and the process of attaching a confidence level to the data.

Open-Source Intelligence

Open-source intelligence (OSINT) consists of information that is publicly available to everyone, though not everyone knows that it is available. OSINT comes from public search engines, social media sites, newspapers, magazine articles, or any source that does not limit access to that information. Examples of these sources include the following:

Print and online media

Internet blogs and discussion groups

Unclassified government data

Academic and professional publications

Industry group data

Papers and reports that are unpublished (gray data)

Proprietary/Closed-Source Intelligence Proprietary/closed-source intelligence sources are those that are not publicly available and usually require a fee to access. Examples of these sources are platforms maintained by private organizations that supply constantly updating intelligence information. In many cases this data is developed from all of the provider’s customers and other sources.

An example of such a platform is offered by CYFIRMA, a market leader in predictive cyber threat visibility and intelligence. In 2019, CYFIRMA announced the launch of its cloud-based Cyber Intelligence Analytics Platform (CAP) v2.0, which uses proprietary artificial intelligence and machine learning algorithms to help organizations unravel cyber risks and threats and enable proactive cyber posture management.

Timeliness

One of the considerations when analyzing intelligence data (of any kind, not just cyber data) is the timeliness of such data. Obviously, if an organization receives threat data that is two weeks old, it is quite likely too late to avoid that threat. One of the attractions of closed-source intelligence is that these platforms typically provide near real-time alerts concerning such threats.

Relevancy

Intelligence data can be quite voluminous, and the vast majority of it is irrelevant to any specific organization. One of the jobs of the security professional is to ascertain which data is relevant and which is not. Again, many proprietary platforms allow for searching and organizing the data to enhance its relevancy.

Confidence Levels

While timeliness and relevancy are key characteristics to evaluate with respect to intelligence, the security professional must also assess the confidence level attached to the data. That is, can it be relied on to predict the future or to shed light on the past? On a more basic level, is it true? Or was the data developed to deceive or mislead? Many cyber activities aim to confuse, deceive, and hide activities.

Accuracy

Finally, the security professional must determine whether the intelligence is correct (accuracy). Newspapers are full these days of cases of false intelligence. The most basic example is the hoax email containing a false warning of a malware infection on the local device. Although the warning is false, in many cases it motivates the user to follow a link to free software that actually installs malware. Again, many cyber attacks use false information to misdirect network defenses.

INDICATOR MANAGEMENT

Cybersecurity professionals use indicators of compromise (IOCs) to identify potential threats. IOCs are artifacts, such as network events or file characteristics, that are known to either precede or accompany an attack of some sort. Managing the collection and analysis of these indicators can be a major headache. Indicator management systems have been developed to make this process somewhat easier. These systems also provide insight into indicators present in other networks that may not yet be present in your enterprise, providing somewhat of an early-warning system. Let’s look at some examples of these platforms.

Structured Threat Information eXpression (STIX)

Structured Threat Information eXpression (STIX) is a structured language for communicating cybersecurity data among those using it; STIX 1.x was XML-based, while the current STIX 2.x versions are expressed in JSON. It provides a common language for this communication. STIX was created with several core purposes in mind:

To identify patterns that could indicate cyber threats

To help facilitate cyber threat response activities, including prevention, detection, and response

To share cyber threat information within an organization and with outside partners or communities that benefit from the information
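Current STIX 2.x content is expressed as JSON. The following is a minimal hand-rolled sketch of a STIX 2.1 Indicator object built with only the standard library; real tooling (for example, the OASIS-maintained python-stix2 package) adds validation and many more object types, and the IP address used in the pattern is a documentation-range placeholder.

```python
import json
import uuid
from datetime import datetime, timezone

def make_stix_indicator(pattern, name):
    """Build a minimal STIX 2.1 Indicator as a plain dict (illustrative only)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": "indicator--" + str(uuid.uuid4()),  # STIX ids are type--UUID
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,            # STIX patterning language expression
        "pattern_type": "stix",
        "valid_from": now,
    }

indicator = make_stix_indicator(
    "[ipv4-addr:value = '198.51.100.7']",   # placeholder C2 address
    "Beaconing to suspected C2 address",
)
print(json.dumps(indicator, indent=2))
```

Because the object is plain JSON, it can be shared with any partner or tool that understands STIX 2.1, which is precisely the interoperability goal described above.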

While STIX was originally sponsored by the Office of Cybersecurity and Communications (CS&C) within the U.S. Department of Homeland Security (DHS), it is now under the management of the Organization for the Advancement of Structured Information Standards (OASIS), a nonprofit consortium that seeks to advance the development, convergence, and adoption of open standards for the Internet.

Trusted Automated eXchange of Indicator Information (TAXII)

Trusted Automated eXchange of Indicator Information (TAXII) is an application protocol for exchanging cyber threat information (CTI) over HTTPS. It defines two primary services, Collections and Channels. Figure 1-1 shows the Collection service. A Collection is an interface to a logical repository of CTI objects provided by a TAXII Server that allows a producer to host a set of CTI data that can be requested by consumers: TAXII Clients and Servers exchange information in a request-response model.

Figure 1-1 Collection Service

Figure 1-2 shows a Channel service. Maintained by a TAXII Server, a Channel allows producers to push data to many consumers and allows consumers to receive data from many producers: TAXII Clients exchange information with other TAXII Clients in a publish-subscribe model.

Figure 1-2 Channel Service

These TAXII services can support a variety of common sharing models:

Hub and spoke: One central clearinghouse

Source/subscriber: One organization is the single source of information

Peer-to-peer: Multiple organizations share their information
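In the request-response model, a client polls a Collection endpoint over HTTPS. The sketch below shows how such a request might be formed; the server URL and collection ID are hypothetical, while the Accept media type is the one defined by the TAXII 2.1 specification.

```python
# Sketch: build the pieces of a TAXII 2.1 "get objects" request for a Collection.
# No network call is made here; a real client would pass these to an HTTP library.

def taxii_objects_request(api_root, collection_id, added_after=None):
    """Return (url, headers, params) for polling a Collection's objects."""
    url = f"{api_root.rstrip('/')}/collections/{collection_id}/objects/"
    headers = {"Accept": "application/taxii+json;version=2.1"}
    params = {}
    if added_after:
        # Ask only for objects added to the Collection since this timestamp
        params["added_after"] = added_after
    return url, headers, params

url, headers, params = taxii_objects_request(
    "https://taxii.example.com/api1",          # hypothetical API root
    "91a7b528-80eb-42ed-a74d-c6fbd5a26116",    # hypothetical collection ID
    added_after="2020-01-01T00:00:00Z",
)
print(url)
```

The `added_after` filter is what makes periodic polling practical: each poll retrieves only the indicators published since the last one, which supports the source/subscriber sharing model described above.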

OpenIOC

OpenIOC (Open Indicators of Compromise) is an open framework designed for sharing threat intelligence information in a machine-readable format. It is a simple framework, written in XML, that can be used to document and classify forensic artifacts. It comes with a base set of 500 predefined indicators, as provided by Mandiant (a U.S. cybersecurity firm later acquired by FireEye).
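Because OpenIOC is XML, a document can be assembled with any XML tooling. The snippet below mirrors the general OpenIOC 1.0 shape (ioc, definition, Indicator, IndicatorItem) but is a simplified sketch: it omits required schema attributes such as namespaces, ids, and authored dates. The MD5 value is that of the well-known EICAR test file.

```python
# Simplified OpenIOC-style document built with the Python standard library.
import xml.etree.ElementTree as ET

ioc = ET.Element("ioc")
definition = ET.SubElement(ioc, "definition")
# Indicators combine IndicatorItems with boolean logic (OR/AND)
indicator = ET.SubElement(definition, "Indicator", operator="OR")
item = ET.SubElement(indicator, "IndicatorItem", condition="is")
# Context says what to inspect; Content gives the value to match
ET.SubElement(item, "Context", document="FileItem", search="FileItem/Md5sum")
content = ET.SubElement(item, "Content", type="md5")
content.text = "44d88612fea8a8f36de82e1278abb02f"  # EICAR test file MD5

xml_text = ET.tostring(ioc, encoding="unicode")
print(xml_text)
```

An endpoint agent evaluating this IOC would hash each file it inspects and flag any file whose MD5 matches the Content value.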

THREAT CLASSIFICATION

After threat data has been collected through a vulnerability scan or through an alert, it must be correlated to an attack type and classified as to its severity and scope, based on how widespread the incident appears to be and the types of data that have been put at risk. This helps in the prioritization process. Much as in the triage process in a hospital, incidents are not handled in the order in which they are received or detected; rather, the most dangerous issues are addressed first, and prioritization occurs constantly. When determining vulnerabilities and threats to an asset, considering the threat actors first is often easiest. Threat actors can be grouped into the following six categories:

Human: Includes both malicious and nonmalicious insiders and outsiders, terrorists, spies, and terminated personnel

Natural: Includes floods, fires, tornadoes, hurricanes, earthquakes, and other natural disasters or weather events

Technical: Includes hardware and software failure, malicious code, and new technologies

Physical: Includes CCTV issues, perimeter measures failure, and biometric failure

Environmental: Includes power and other utility failure, traffic issues, biological warfare, and hazardous material issues (such as spillage)

Operational: Includes any process or procedure that can affect confidentiality, integrity, and availability (CIA)
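The triage-style prioritization described above is often implemented by scoring each incident on likelihood and impact and ranking the results. The sketch below uses hypothetical 1-5 scales and sample incidents chosen only to show why a high-likelihood, high-impact event is handled first.

```python
# Toy prioritization sketch: rank incidents by likelihood x impact.

def risk_score(likelihood, impact):
    """Both inputs on a 1 (low) to 5 (high) scale."""
    return likelihood * impact

incidents = [
    {"name": "Phishing campaign", "likelihood": 5, "impact": 3},
    {"name": "Datacenter flood",  "likelihood": 1, "impact": 5},
    {"name": "Malware beaconing", "likelihood": 4, "impact": 5},
]

# Most dangerous issues first, mirroring hospital-style triage.
triage_order = sorted(
    incidents,
    key=lambda i: risk_score(i["likelihood"], i["impact"]),
    reverse=True,
)
print([i["name"] for i in triage_order])
```

Note that the low-likelihood flood sorts last even though its impact is maximal; combining both factors is what prevents either dimension from dominating the queue.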

When the vulnerabilities and threats have been identified, the loss potential for each must be determined. This loss potential is determined by combining the likelihood of the event with the impact that such an event would cause. An event with a high likelihood and a high impact would be given more importance than an event with a low likelihood and a low impact. Different types of risk analysis should be used to ensure that the data obtained is maximized. Once an incident has been placed into one of these classifications, the options available for that classification are considered. The following sections look at three common classifications.

Known Threat vs. Unknown Threat

In the cybersecurity field, known threats are threats that are common knowledge and easily identified through signatures by antivirus and intrusion detection system (IDS) engines or through domain reputation blacklists. Unknown threats, on the other hand, are lurking threats that may have been identified but for which no signatures are available. We are not completely powerless against these threats. Many security products attempt to locate them through static and dynamic file analysis. This may occur in a sandboxed environment, which protects the system performing the analysis. In some cases, unknown threats are really old threats that have been recycled. Because security products have limited memory with regard to threat signatures, vendors must choose the most current attack signatures to include. Therefore, old attack signatures may be missing in newer products, which effectively allows old known threats to reenter the unknown category.

Zero-day

In many cases, vulnerabilities discovered in live environments have no current fix or patch. Such a vulnerability is referred to as a zero-day vulnerability. The best way to prevent zero-day attacks is to write bug-free applications by implementing efficient design, coding, and testing practices. Having staff discover zero-day vulnerabilities is much better than having those looking to exploit the vulnerabilities find them. Monitoring known hacking community websites can often help you detect attacks early because hackers often share zero-day exploit information.

Honeypots or honeynets can also provide forensic information about hacker methods and tools for zero-day attacks. New zero-day attacks against a broad range of technology systems are announced on a regular basis. A security manager should create an inventory of applications and maintain a list of critical systems to manage the risks of these attack vectors. Because zero-day attacks occur before a fix or patch has been released, preventing them is difficult. As with many other attacks, keeping all software and firmware up to date with the latest updates and patches is important. Enabling audit logging of network traffic can help reconstruct the path of a zero-day attack. Security professionals can inspect logs to determine the presence of an attack in the network, estimate the damage, and identify corrective actions. Zero-day attacks usually involve activity that is outside “normal” activity, so documenting normal activity baselines is important. Also, routing traffic through a central internal security service can ensure that any fixes affect all the traffic in the most effective manner. Whitelisting can also aid in mitigating attacks by ensuring that only approved entities are able to use certain applications or complete certain tasks. Finally, security professionals should ensure that the organization implements the appropriate backup schemes to ensure that recovery can be achieved, thereby providing remediation from the attack.

Advanced Persistent Threat

An advanced persistent threat (APT) is a hacking process that targets a specific entity and is carried out over a long period of time. In most cases, the victim of an APT is a large corporation or government entity. The attacker is usually an organized, well-funded group of highly skilled individuals, sometimes sponsored by a nation-state. The attackers have a

predefined objective. Once the objective is met, the attack is halted. APTs can often be detected by monitoring logs and performance metrics. While no defensive actions are 100% effective, the following actions may help mitigate many APTs:

Use application whitelisting to help prevent malicious software and unapproved programs from running.

Patch applications such as Java, PDF viewers, Flash, web browsers, and Microsoft Office products.

Patch operating system vulnerabilities.

Restrict administrative privileges to operating systems and applications, based on user duties.
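Application whitelisting, the first mitigation above, boils down to permitting execution only when a binary appears on an approved list. The sketch below allowlists by SHA-256 digest; the file contents are hypothetical placeholders, and real products (such as Windows AppLocker) also match on publisher and path rules.

```python
# Sketch of application allowlisting by file hash: only binaries whose
# SHA-256 digest appears on an approved list may run.
import hashlib

APPROVED_SHA256 = {
    # In practice this set is populated from a vetted software inventory.
    hashlib.sha256(b"trusted-binary-contents").hexdigest(),
}

def is_execution_allowed(file_bytes):
    """Return True only if the binary's hash is on the approved list."""
    return hashlib.sha256(file_bytes).hexdigest() in APPROVED_SHA256

print(is_execution_allowed(b"trusted-binary-contents"))     # True
print(is_execution_allowed(b"unknown-or-modified-binary"))  # False
```

Hash-based matching is why allowlisting also catches tampering: a single changed byte in an approved binary produces a different digest, so the modified file is denied.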

THREAT ACTORS

A threat is carried out by a threat actor. For example, an attacker who takes advantage of an inappropriate or absent access control list (ACL) is a threat actor. Keep in mind, though, that threat actors can discover and/or exploit vulnerabilities. Not all threat actors will actually exploit an identified vulnerability. The Federal Bureau of Investigation (FBI) has identified three categories of threat actors: nation-states or state sponsors, organized crime, and terrorist groups.

Nation-state

Nation-state or state sponsors are usually foreign governments. They are interested in pilfering data, including intellectual property and research and development data, from major manufacturers, tech companies, government agencies, and defense contractors. They have the most resources and are the best organized of any of the threat actor groups.

Organized Crime

Organized crime groups primarily threaten the financial services sector and are expanding the scope of their attacks. They are well financed and organized.

Terrorist Groups

Terrorist groups want to impact countries by using the Internet and other networks to disrupt or harm the viability of a society by damaging its critical infrastructure.

Hacktivist

While not mentioned by the FBI, hacktivists are activists for a cause, such as animal rights, who use hacking as a means to get their message out and affect the businesses that they feel are detrimental to their cause.

Insider Threat

Insider threats should be one of the biggest concerns for security personnel. Insiders have knowledge of and access to systems that outsiders do not have, giving insiders a much easier avenue for carrying out or participating in an attack. An organization should implement the appropriate event collection and log review policies to provide the means to detect insider threats as they occur. These threats fall into two categories: intentional and unintentional.

Intentional

Intentional insider threats are insiders who have ill intent. These folks typically either are disgruntled over some perceived slight or are working for another organization to perform corporate espionage. They may share sensitive documents with others, or they may impart knowledge used to breach a network. This is one of the reasons that users’ permissions and rights must not exceed those necessary to perform their jobs; limiting permissions helps to limit the damage an insider might inflict.

Unintentional

Sometimes internal users unknowingly increase the likelihood that security breaches will occur. Such unintentional insider threats do not have malicious intent; they simply do not understand how system changes can affect security. Security awareness and training should include coverage of examples of misconfigurations that can result in security breaches occurring and/or not being detected. For example, a user may temporarily disable antivirus software to perform an administrative task. If the user fails to reenable the antivirus software, he unknowingly leaves the system open to viruses. In such a case, an organization should consider implementing group policies or some other mechanism to periodically ensure that antivirus software is enabled and running. Another solution could be to configure antivirus software to automatically restart after a certain amount of time. Recording and reviewing user actions via system, audit, and security logs can help security professionals identify misconfigurations so that the appropriate policies and controls can be implemented.

INTELLIGENCE CYCLE

Intelligence activities of any sort, including cyber intelligence functions, should follow a logical process developed over years by those in the business. The intelligence cycle model specified in exam objective 1.1 contains five stages:

1. Requirements: Before beginning intelligence activities, security professionals must identify what the immediate issue is and define as closely as possible the requirements of the information that needs to be collected and analyzed. This means the types of data to be sought are driven by the types of issues with which we are concerned. The amount of potential information may be so vast that unless we filter it to what is relevant, we may be unable to fully understand what is occurring in the environment.

2. Collection: This is the stage in which most of the hard work occurs. It is also the stage at which recent advances in artificial intelligence (AI) and automation have changed the game. Collection is time-consuming work that involves web searches, interviews, identifying sources, and monitoring, to name a few activities. New tools automate data searching, organizing, and presenting information in easy-to-view dashboards.

3. Analysis: In this stage, data is combed and analyzed to identify pieces of information that have the following characteristics:

Timely: Can be tied to the issue from a time standpoint

Actionable: Suggests or leads to a proper mitigation

Consistent: Reduces uncertainty surrounding an issue

This is the stage in which the skills of the security professional have the most impact, because the ability to correlate data with issues requires keen understanding of vulnerabilities, their symptoms, and solutions.

4. Dissemination: Hopefully, analysis leads to a solution or set of solutions designed to prevent issues. These solutions, be they policies, scripts, or configuration changes, must be communicated to the proper personnel for deployment. The security professional acts as the designer and the network team acts as the builder of the solution. In the case of policy changes, the human resources (HR) team acts as the builder.

5. Feedback: Gathering feedback on the intelligence cycle before the next cycle begins is important so that improvements can be defined. What went right? What worked? What didn’t? Was the analysis stage performed correctly? Was the dissemination process clear and timely? Improvements can almost always be identified.

COMMODITY MALWARE

Commodity malware is malware that is widely available either for purchase or by free download. It is not customized or tailored to a specific attack. It does not require complete understanding of its processes and is used by a wide range of threat actors with a range of skill levels. Although no clear dividing line exists between commodity malware and what is called advanced malware (and in fact the lines are blurring more all the time), generally we can make a distinction based on the skill level and motives of the threat actors who use the malware. Less-skilled threat actors (script kiddies, etc.) utilize these prepackaged commodity tools, whereas more-skilled threat actors (APTs, etc.) typically customize their attack tools to make them more effective in a specific environment. The motives of those who employ commodity malware tend to be gaining experience in hacking and experimentation.

INFORMATION SHARING AND ANALYSIS COMMUNITIES Over time, security professionals have developed methods and platforms for sharing the cybersecurity information they have developed. Some information sharing and analysis communities focus on specific industries while others simply focus on critical issues common to all: Healthcare: In the healthcare community, where protection of patient data is legally required by the Health Insurance Portability and Accountability Act (HIPAA), an example of a sharing platform is the Health Information Sharing and Analysis Center (H-ISAC). It is a global operation focused on sharing timely, actionable, and relevant information among its members, including intelligence on threats, incidents, and vulnerabilities. This sharing of information can be done on a human-to-human or machine-to-machine basis. Financial: The financial services sector is under pressure to protect financial records with laws such as the Financial Services Modernization Act of 1999, commonly known as the Gramm-LeachBliley Act (GLBA). The Financial Services Information Sharing and Analysis Center (FS-ISAC) is an industry consortium dedicated to reducing cyber risk in the global financial system. It shares among its members and trusted sources critical cyber intelligence, and builds awareness through summits, meetings, webinars, and communities of interest.

Aviation: In the area of aviation, the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) maintains a number of chartered organizations, among them the Aviation Government Coordinating Council (AGCC). Its charter document reads, "The AGCC coordinates strategies, activities, policy and communications across government entities within the Aviation Sub-Sector. The AGCC acts as the government counterpart to the private industry-led 'Aviation Sector Coordinating Council' (ASCC)." The Aviation Sector Coordinating Council is an example of a private-sector counterpart.

Government: For government agencies, the aforementioned CISA also shares information with state, local, tribal, and territorial governments and with international partners, as cybersecurity threat actors are not constrained by geographic boundaries. As CISA describes itself on the Department of Homeland Security website, "CISA is the Nation's risk advisor, working with partners to defend against today's threats and collaborating to build more secure and resilient infrastructure for the future."

Critical infrastructure: All of the previously mentioned platforms and organizations are dedicated to helping organizations protect their critical infrastructure. As an example of international cooperation, the European Union Agency for Network and Information Security (ENISA) is a center of network and information security expertise for the European Union (EU). ENISA describes itself as follows: "ENISA works with these groups to develop advice and recommendations on good practice in information security. It assists member states in implementing relevant EU legislation and works to improve the resilience of Europe's critical information infrastructure and networks. ENISA seeks to enhance existing expertise in member states by supporting the development of cross-border communities committed to improving network and information security throughout the EU." More information about ENISA and its work can be found at https://www.enisa.europa.eu.

EXAM PREPARATION TASKS

As mentioned in the section "How to Use This Book" in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, "Final Preparation," and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 1-2 lists these key topics and the page number on which each is found.

Table 1-2 Key Topics in Chapter 1

Key Topic Element | Description | Page Number
Section | Open-source intelligence | 6
Section | Closed-source intelligence | 6
Section | Indicator management platforms | 7
Section | Threat actors | 12

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

open-source intelligence
proprietary/closed-source intelligence
timeliness
relevancy
confidence levels
accuracy
indicator management
Structured Threat Information eXpression (STIX)
Trusted Automated eXchange of Indicator Information (TAXII)
OpenIOC
known threats
unknown threats
zero-day threats
advanced persistent threat
collection
analysis
dissemination
commodity malware

REVIEW QUESTIONS

1. Give at least two examples of open-source intelligence data.
2. ________________ is an open framework that is designed for sharing threat intelligence information in a machine-readable format.
3. Match the following items with the correct definition.

Items | Definitions
OpenIOC | An XML-based programming language that can be used to communicate cybersecurity data among those using the language.
STIX | Uses its proprietary artificial intelligence and machine learning algorithms to help organizations unravel cyber risks and threats and enables proactive cyber posture management.
Cyber Intelligence Analytics Platform (CAP) v2.0 | An open framework that is designed for sharing threat intelligence information in a machine-readable format.

4. Which threat actor has already performed network penetration?
5. List the common sharing models used in TAXII.
6. ________________ hack for a cause, such as animal rights, and use hacking as a means to get their message out and to affect the businesses that they feel are detrimental to their cause.
7. Match the following items with their definition.

Items | Definitions
Zero-day | Threat carried out over a long period of time
APT | Threat with no known solution
Terrorist | Hacks not for monetary gain but simply to destroy or deface

8. APT attacks are typically sourced from which group of threat actors?
9. What intelligence gathering step is necessary because the amount of potential information may be so vast?
10. The Aviation Government Coordinating Council is chartered by which organization?

Chapter 2

Utilizing Threat Intelligence to Support Organizational Security

This chapter covers the following topics related to Objective 1.2 (Given a scenario, utilize threat intelligence to support organizational security) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Attack frameworks: Introduces the MITRE ATT&CK framework, the Diamond Model of Intrusion Analysis, and the kill chain.

Threat research: Covers reputational and behavioral research, indicators of compromise (IoC), and the Common Vulnerability Scoring System (CVSS).

Threat modeling methodologies: Discusses the concepts of adversary capability, total attack surface, attack vector, impact, and likelihood.

Threat intelligence sharing with supported functions: Describes intelligence sharing with the functions incident response, vulnerability management, risk management, security engineering, and detection and monitoring.

Threat intelligence comprises information gathered that does one of the following things:

Educates and warns you about potential dangers not yet seen in the environment
Identifies behavior that accompanies malicious activity
Alerts you to ongoing malicious activity

However, possessing threat intelligence is of no use if it is not converted into concrete activity that responds to and mitigates issues. This chapter discusses how to utilize threat intelligence to support organizational security.

"DO I KNOW THIS ALREADY?" QUIZ

The "Do I Know This Already?" quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these four self-assessment questions, you might want to skip ahead to the "Exam Preparation Tasks" section. Table 2-1 lists the major headings in this chapter and the "Do I Know This Already?" quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the "Do I Know This Already?" quiz appear in Appendix A.

Table 2-1 "Do I Know This Already?" Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question
Attack Frameworks | 1
Threat Research | 2
Threat Modeling Methodologies | 3
Threat Intelligence Sharing with Supported Functions | 4

1. Which of the following is a knowledge base of adversary tactics and techniques based on real-world observations?
a. Diamond Model
b. OWASP
c. MITRE ATT&CK
d. STIX

2. Which of the following threat intelligence data types is generated from past activities?
a. Reputational
b. Behavioral
c. Heuristics
d. Anticipatory

3. Your team has identified that a recent breach was sourced by a disgruntled employee. What part of threat modeling is being performed by such identification?
a. Total attack surface
b. Impact
c. Adversary capability
d. Attack vector

4. Which of the following functions uses shared threat intelligence data to build in security for new products and solutions?
a. Incident response
b. Security engineering
c. Vulnerability management
d. Risk management

FOUNDATION TOPICS

ATTACK FRAMEWORKS

Many organizations have developed security management frameworks and methodologies to help guide security professionals. These frameworks and methodologies include security program development standards, enterprise and security architecture development frameworks, security control development methods, corporate governance methods, and process management methods. The following sections discuss major attack frameworks and methodologies and explain where they are used.

MITRE ATT&CK

MITRE ATT&CK is a knowledge base of adversary tactics and techniques based on real-world observations. It is an open system, and attack matrices based on it have been created for various industries. It is designed as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community. An example of such a matrix is the SaaS matrix created for organizations utilizing Software as a Service (SaaS), shown in Table 2-2. The corresponding matrix on the MITRE ATT&CK website is interactive (https://attack.mitre.org/matrices/enterprise/cloud/saas/); when you click the name of an attack technique in a cell, a new page opens with a detailed explanation of that technique. For more information about the MITRE ATT&CK Matrix for Enterprise and to view the matrices it provides for other platforms (Windows, macOS, etc.), see https://attack.mitre.org/matrices/enterprise/.

Table 2-2 ATT&CK Matrix for SaaS

Initi al

Per sist

Pri vile

Defe nse

Crede ntial

Disc over

Later al

Collecti on

acce ss

enc e

ge Esc alat ion

Evasi on

Access

D ri ve b y C o m pr o m is e

R e d u n d a n t A c c e s s

V a li d A c c o u n t s

Ap pli cat ion Ac ces s To ke n

Brut e Forc e

S p ea r P hi sh in g Li n k

V a l i d

Re du nd ant Ac ces s

Stea l App licat ion Acce ss Tok en

Int er nal Sp ear Ph ish ing

Va lid Ac co un ts

Stea l Web Sess ion

W eb Se ssi on Co

T ru st e d R

A c c o u n t s

y

Mov eme nt

Cl ou d Se rvi ce Di sc ov er y

Ap pli cat ion Ac ces s To ke n

Data from Infor matio n Repos itories

el at io n sh ip V al id A cc o u nt s

Coo kie

oki e

W eb Se ssi on Co oki e
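A matrix like Table 2-2 can also be represented programmatically. The following is a minimal sketch (not the live MITRE dataset, which is published in STIX format) that mirrors the tactic-to-technique placements shown in the table and lets you ask which tactics a given technique serves:

```python
# Sketch: the ATT&CK for SaaS matrix (Table 2-2) as a tactic -> techniques
# mapping. Technique names mirror the table above; this is an illustration
# only, not the authoritative MITRE ATT&CK dataset.

SAAS_MATRIX = {
    "Initial Access": ["Drive-by Compromise", "Spearphishing Link",
                       "Trusted Relationship", "Valid Accounts"],
    "Persistence": ["Redundant Access", "Valid Accounts"],
    "Privilege Escalation": ["Valid Accounts"],
    "Defense Evasion": ["Application Access Token", "Redundant Access",
                        "Valid Accounts", "Web Session Cookie"],
    "Credential Access": ["Brute Force", "Steal Application Access Token",
                          "Steal Web Session Cookie"],
    "Discovery": ["Cloud Service Discovery"],
    "Lateral Movement": ["Application Access Token", "Internal Spearphishing",
                         "Web Session Cookie"],
    "Collection": ["Data from Information Repositories"],
}

def tactics_using(technique: str) -> list[str]:
    """Return every tactic column in which a technique appears."""
    return [t for t, techs in SAAS_MATRIX.items() if technique in techs]

print(tactics_using("Valid Accounts"))
# ['Initial Access', 'Persistence', 'Privilege Escalation', 'Defense Evasion']
```

Note that a single technique (such as Valid Accounts) can appear under several tactics, which is why the matrix is organized by adversary goal rather than by tool.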

The Diamond Model of Intrusion Analysis

The Diamond Model of Intrusion Analysis emphasizes the relationships and characteristics of four basic components: the adversary, capabilities, infrastructure, and victims. The main axiom of this model states, "For every intrusion event there exists an adversary taking a step towards an intended goal by using a capability over infrastructure against a victim to produce a result." Figure 2-1 shows a depiction of the Diamond Model.

Figure 2-1 Diamond Model

The corners of the Diamond Model are defined as follows:

Adversary: The intent of the attack
Capability: Attacker intrusion tools and techniques
Infrastructure: The set of systems an attacker uses to launch attacks
Victim: A single victim or multiple victims

To access the Diamond Model document, see https://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf.

Kill Chain

The cyber kill chain is a cyber intrusion identification and prevention model developed by Lockheed Martin that describes the stages of an intrusion. It includes seven steps, as described in Figure 2-2. For more information, see https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html.

Figure 2-2 Kill Chain
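The seven stages shown in Figure 2-2 form an ordered sequence, which can be modeled directly. A minimal sketch (the `furthest_stage` helper is an illustration, not part of the Lockheed Martin model itself) that maps observed intrusion evidence to how far an attack progressed:

```python
# Sketch: the seven stages of the Lockheed Martin Cyber Kill Chain as an
# ordered sequence. Mapping observed activity to its furthest stage tells
# defenders how far an attack progressed before it was detected.

KILL_CHAIN = (
    "Reconnaissance",         # harvesting targets, email addresses, etc.
    "Weaponization",          # coupling an exploit with a deliverable payload
    "Delivery",               # transmitting the weapon (email, web, USB)
    "Exploitation",           # triggering the vulnerability on the target
    "Installation",           # installing malware for persistence
    "Command and Control",    # establishing a channel back to the attacker
    "Actions on Objectives",  # achieving the goal of the intrusion
)

def furthest_stage(observed: set[str]) -> str:
    """Return the latest kill chain stage evidenced by the observations."""
    reached = [stage for stage in KILL_CHAIN if stage in observed]
    return reached[-1] if reached else "No stage observed"

print(furthest_stage({"Delivery", "Exploitation"}))  # Exploitation
```

The defensive value of the model is that disrupting any single stage breaks the chain; the earlier the disruption, the lower the cost of the incident.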

THREAT RESEARCH

As a security professional, sometimes just keeping up with your day-to-day workload can be exhausting. But performing ongoing research as part of your regular duties is more important in today's world than ever before. You should work with your organization and direct supervisor to ensure that you either obtain formal security training on a regular basis or are given adequate time to maintain and increase your security knowledge. You should research the current best security practices, any new security technologies coming to market, any new security systems and services that have launched, and how security technology has evolved recently.

Threat intelligence is a process used to inform decisions regarding responses to any menace or hazard presented by the latest attack vectors and actors emerging on the security horizon. Threat intelligence analyzes evidence-based knowledge, including context, mechanisms, indicators, implications, and actionable advice, about an existing or emerging menace or hazard to assets. Performing threat intelligence requires generating a certain amount of raw material for the process. This information includes data on the latest attacks, knowledge of current vulnerabilities and threats, specifications on the latest zero-day mitigation controls and remediation techniques, and descriptions of the latest threat models. Let's look at some issues important to threat research.

Reputational

Some threat intelligence data is generated from past activities. Reputational scores may be generated for traffic sourced from certain IP ranges, domain names, and URLs. An example of a system that uses such reputational scores is the Cisco Talos IP and Domain Reputation Center. Participating customers gain access to data contributed by all other participants. As malicious traffic is received by customers, reputational scores are developed for the IP ranges, domain names, and URLs that serve as its sources. Based on these scores, traffic from those sources may be blocked on customer networks.

Behavioral

Some threat intelligence data is based not on reputation but on the behavior of the traffic in question. For example, a source that repeatedly sends large amounts of traffic to a single IP address indicates a potential DoS attack. Behavioral analysis is also known as anomaly analysis because it observes network behaviors for anomalies. It can be implemented using combinations of scanning types, including NetFlow, protocol, and packet analyses, to create a baseline and subsequently report departures from the traffic metrics found in the baseline.

One of the newer advances in this field is user and entity behavior analytics (UEBA). This type of analysis focuses on user activities. Combining behavior analysis with machine learning, UEBA enhances the ability to determine which particular users are behaving oddly. An example would be a hacker who has stolen a user's credentials and is identified by the system because he is not performing the activities that user normally performs.

Heuristics is a method used in malware detection, behavioral analysis, incident detection, and other scenarios in which patterns must be detected in the midst of what might appear to be chaos. It is a process that ranks alternatives using search algorithms, and although it is not an exact science and is somewhat a form of "guessing," it has been shown in many cases to approximate an exact solution. Heuristics also includes a process of self-learning through trial and error as it arrives at the final approximated solution. Many IPS, IDS, and antimalware systems that include heuristic capabilities can detect so-called zero-day issues using this technique.

Indicator of Compromise (IoC)

An indicator of compromise (IoC) is any activity, artifact, or log entry that is typically associated with an attack of some sort. Typical examples include the following:

Virus signatures
Known malicious file types
Domain names of known botnet servers

Known IoCs are exchanged within the security industry using the Traffic Light Protocol (TLP) to classify them. TLP is a set of designations used to ensure that sensitive information is shared with the appropriate audience. Somewhat analogous to a traffic light, it employs four colors to indicate the sharing boundaries to be applied by the recipient.

Common Vulnerability Scoring System (CVSS)

The Common Vulnerability Scoring System (CVSS) version 3.1 is a system for ranking discovered vulnerabilities based on predefined metrics. This system ensures that the most critical vulnerabilities can be easily identified and addressed after a vulnerability assessment is completed. Most commercial vulnerability management tools use CVSS scores as a baseline. Scores are awarded on a scale of 0 to 10, with the values having the following ranks:

0: None
0.1 to 3.9: Low
4.0 to 6.9: Medium
7.0 to 8.9: High
9.0 to 10.0: Critical

Note: The Forum of Incident Response and Security Teams (FIRST) is the custodian of CVSS 3.1.

CVSS is composed of three metric groups:

Base: Characteristics of a vulnerability that are constant over time and user environments Temporal: Characteristics of a vulnerability that change over time but not among user environments

Environmental: Characteristics of a vulnerability that are relevant and unique to a particular user’s environment

The Base metric group includes the following metrics:

Attack Vector (AV): Describes how the attacker would exploit the vulnerability and has four possible values:
L: Stands for Local and means that the attacker must have physical or logical access to the affected system
A: Stands for Adjacent network and means that the attacker must be on the local network
N: Stands for Network and means that the attacker can cause the vulnerability from any network
P: Stands for Physical and means that the attacker must physically touch or manipulate the vulnerable component

Attack Complexity (AC): Describes the difficulty of exploiting the vulnerability and has two possible values:
H: Stands for High and means that the vulnerability requires special conditions that are hard to find
L: Stands for Low and means that the vulnerability does not require special conditions

Privileges Required (Pr): Describes the level of authorization an attacker would need to exploit the vulnerability and has three possible values:
H: Stands for High and means the attacker requires privileges that provide significant (e.g., administrative) control over the vulnerable component, allowing access to component-wide settings and files
L: Stands for Low and means the attacker requires privileges that provide basic user capabilities that could normally affect only settings and files owned by a user
N: Stands for None and means that no authentication mechanisms are in place to stop the exploit of the vulnerability

User Interaction (UI): Captures the requirement for a human user, other than the attacker, to participate in the successful compromise of the vulnerable component. It has two possible values:
N: Stands for None and means the vulnerable system can be exploited without interaction from any user
R: Stands for Required and means successful exploitation requires a user to take some action before the vulnerability can be exploited

Scope (S): Captures whether a vulnerability in one vulnerable component impacts resources in components beyond its security scope. It has two possible values:
U: Stands for Unchanged and means the exploited vulnerability can only affect resources managed by the same security authority
C: Stands for Changed and means that the exploited vulnerability can affect resources beyond the security scope managed by the security authority of the vulnerable component

The impact metrics, also part of the Base metric group, include the following:

Availability (A): Describes the disruption that might occur if the vulnerability is exploited and has three possible values:
N: Stands for None and means that there is no availability impact
L: Stands for Low and means that system performance is degraded
H: Stands for High and means that the system is completely shut down

Confidentiality (C): Describes the information disclosure that may occur if the vulnerability is exploited and has three possible values:
N: Stands for None and means that there is no confidentiality impact
L: Stands for Low and means some access to information would occur
H: Stands for High and means all information on the system could be compromised

Integrity (I): Describes the type of data alteration that might occur and has three possible values:
N: Stands for None and means that there is no integrity impact
L: Stands for Low and means some information modification would occur
H: Stands for High and means all information on the system could be modified

A CVSS vector looks something like this:

CVSS:3.1/AV:L/AC:H/Pr:L/UI:R/S:U/C:L/I:N/A:N

This vector is read as follows:

AV:L: Attack Vector, where L stands for Local and means that the attacker must have physical or logical access to the affected system
AC:H: Attack Complexity, where H stands for High and means that the vulnerability requires special conditions that are hard to find
Pr:L: Privileges Required, where L stands for Low and means the attacker requires privileges that provide basic user capabilities that could normally affect only settings and files owned by a user
UI:R: User Interaction, where R stands for Required and means successful exploitation requires a user to take some action before the vulnerability can be exploited
S:U: Scope, where U stands for Unchanged and means the exploited vulnerability can only affect resources managed by the same security authority
C:L: Confidentiality, where L stands for Low and means that some access to information would occur
I:N: Integrity, where N stands for None and means that there is no integrity impact
A:N: Availability, where N stands for None and means that there is no availability impact

For more information, see https://www.first.org/cvss/v31/cvss-v31-specification_r1.pdf.

Note: For access to CVSS calculators, see the following resources:
CVSS Scoring System Calculator: https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator?calculator&adv&version=2
CVSS Version 3.1 Calculator: https://www.first.org/cvss/calculator/3.1
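Readings like the ones above can be produced programmatically by splitting the vector string into its metric/value pairs. A minimal sketch, using the Base metric value names defined earlier in this section (it tolerates both uppercase and mixed-case metric abbreviations):

```python
# Sketch: parsing a CVSS v3.1-style vector string into metric/value-name
# pairs. Value names follow the Base metric definitions in this section.

VALUE_NAMES = {
    "AV": {"L": "Local", "A": "Adjacent network", "N": "Network", "P": "Physical"},
    "AC": {"L": "Low", "H": "High"},
    "PR": {"N": "None", "L": "Low", "H": "High"},
    "UI": {"N": "None", "R": "Required"},
    "S":  {"U": "Unchanged", "C": "Changed"},
    "C":  {"N": "None", "L": "Low", "H": "High"},
    "I":  {"N": "None", "L": "Low", "H": "High"},
    "A":  {"N": "None", "L": "Low", "H": "High"},
}

def parse_cvss_vector(vector: str) -> dict[str, str]:
    """Return {metric: value name} for each metric in the vector."""
    readings = {}
    for part in vector.split("/"):
        if part.upper().startswith("CVSS"):
            continue  # skip the "CVSS:3.1" version prefix
        metric, _, value = part.partition(":")
        metric = metric.upper()
        readings[metric] = VALUE_NAMES[metric][value.upper()]
    return readings

parsed = parse_cvss_vector("CVSS:3.1/AV:L/AC:H/Pr:L/UI:R/S:U/C:L/I:N/A:N")
print(parsed["AV"], parsed["AC"], parsed["A"])  # Local High None
```

Official calculators go further and compute the numeric base score from these values; this sketch only decodes the vector into readable form.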

THREAT MODELING METHODOLOGIES

An organization should have a well-defined risk management process in place that includes the evaluation of the risk that is present. When this process is carried out properly, a threat modeling methodology allows organizations to identify threats and potential attacks and implement the appropriate mitigations against them. This ensures that the security controls implemented are in balance with the operations of the organization. A number of factors to consider in a threat modeling methodology are covered in the following sections.

Adversary Capability

First, you must have a grasp of the capabilities of the attacker. Threat actors have widely varying capabilities. When carrying out threat modeling, you may decide to develop a more comprehensive list of threat actors to help in scenario development. Security professionals should analyze all the threats to identify all the actors who pose significant threats to the organization. Threat actors include both internal and external actors, including the following:

Internal actors: Reckless employee Untrained employee Partner Disgruntled employee Internal spy Government spy Vendor Thief External actors: Anarchist Competitor Corrupt government official Data miner Government cyber warrior Irrational individual Legal adversary Mobster Activist Terrorist

Vandal

These actors can be subdivided into two categories: non-hostile and hostile. In the preceding lists, three actors are usually considered non-hostile: reckless employee, untrained employee, and partner. All the other actors should be considered hostile. The organization would then need to analyze each of these threat actors according to set criteria, giving each a ranking that helps determine which threat actors need to be analyzed. Examples of some of the most commonly used criteria include the following:

Skill level: None, minimal, operational, adept
Resources: Individual, team, organization, government
Limits: Code of conduct, legal, extra-legal (minor), extra-legal (major)
Visibility: Overt, covert, clandestine, don't care
Objective: Copy, destroy, injure, take, don't care
Outcome: Acquisition/theft, business advantage, damage, embarrassment, technical advantage
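Criteria like these can be encoded so that a long actor list is consolidated automatically. A minimal sketch, using hypothetical actor records, that keeps only hostile actors who are adept, well resourced, and willing to act outside the law:

```python
# Sketch: consolidating a threat actor list against ranking criteria.
# The actor records below are hypothetical illustrations; real analyses
# would score every actor the organization has identified.

actors = [
    {"name": "Government cyber warrior", "hostile": True,
     "skill": "adept", "resources": "government", "limits": "extra-legal (major)"},
    {"name": "Mobster", "hostile": True,
     "skill": "adept", "resources": "organization", "limits": "extra-legal (major)"},
    {"name": "Untrained employee", "hostile": False,
     "skill": "none", "resources": "individual", "limits": "legal"},
    {"name": "Vandal", "hostile": True,
     "skill": "minimal", "resources": "individual", "limits": "extra-legal (minor)"},
]

# Keep only hostile actors with adept skill, organization- or
# government-level resources, and extra-legal limits.
selected = [
    a["name"] for a in actors
    if a["hostile"]
    and a["skill"] == "adept"
    and a["resources"] in ("organization", "government")
    and a["limits"] in ("extra-legal (minor)", "extra-legal (major)")
]

print(selected)  # ['Government cyber warrior', 'Mobster']
```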

With these criteria, the organization must then determine which of the actors it wants to analyze. For example, the organization may choose to analyze all hostile actors that have a skill level of adept, resources of an organization or government, and limits of extra-legal (minor) or extra-legal (major). The list is then consolidated to include only the threat actors that fit all of these criteria.

Total Attack Surface

The total attack surface comprises all the points at which vulnerabilities exist. It is critical that the organization have a clear understanding of the total attack surface; otherwise, it is somewhat like locking all the doors you know about while doors you are unaware of remain unlocked. Identifying the attack surface should be a formalized process that arrives at a complete list of vulnerabilities. Only then can each vulnerability be addressed properly with security controls, processes, and procedures.

To identify the potential attacks that could occur, an organization must create scenarios so that each potential attack can be fully analyzed. For example, an organization may decide to analyze a situation in which a hacktivist group performs prolonged denial-of-service attacks, causing sustained outages intended to damage the organization's reputation. The organization then must make a risk determination for each scenario.

Once all the scenarios are determined, the organization develops an attack tree for each potential attack. Such an attack tree includes all the steps and/or conditions that must occur for the attack to be successful. The organization then maps security controls to the attack trees. To determine the security controls that can be used, the organization would need to look at industry standards, including NIST SP 800-53 (revision 4 at the time of writing). Finally, the organization maps controls back into the attack tree to ensure that controls are implemented at as many levels of the attack surface as possible.

Note: For more information on NIST SP 800-53, see https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf.

Attack Vector

An attack vector is the path or means by which an attack is carried out. Some examples of attack vectors include the following:

Phishing Malware Exploit unpatched vulnerabilities Code injection Social engineering Advanced persistent threats (APTs)

Once attack vectors and attack agents have been identified, the organization must assess the relative impact and likelihood of such attacks. This allows the organization to prioritize its limited resources to address the most serious vulnerabilities.

Impact

Once all assets have been identified and their value to the organization has been established, the organization must identify the impact of a compromise of each asset. An attempt must be made to establish the impact to the organization should such a compromise occur. While both quantitative and qualitative risk assessments may be performed, when a qualitative assessment is conducted, the risks are placed into the following categories:

High
Medium
Low

Typically a risk assessment matrix is created, such as the one shown in Figure 2-3. Subject matter experts grade all risks on their likelihood and their impact. This helps to prioritize the application of resources to the most critical vulnerabilities.

Figure 2-3 Risk Assessment Matrix

Once the organization determines what it most needs to protect, it should select the scenarios that could have a catastrophic impact on the organization by using the objective and outcome values from the adversary capability analysis and the asset value and business impact information from the impact analysis.

Probability

When performing the assessment mentioned in the previous section, the organization must also consider the probability that each security event will occur; note in Figure 2-3 that one axis of the risk matrix is impact and the other is probability.

THREAT INTELLIGENCE SHARING WITH SUPPORTED FUNCTIONS

Earlier we looked at the importance of sharing intelligence information with other organizations. It is also critical that such information be shared with all departments that perform various security functions. Although an organization might not have a separate group for each of the areas covered in the sections that follow, security professionals should ensure that the latest threat data is made available to all functional units that participate in these activities.

Incident Response

Incident response will be covered more completely in Chapter 15, "The Incident Response Process," but here it is important to point out that properly responding to security incidents requires knowledge of what may be occurring, and that requires knowledge of the very latest threats and how those threats are realized. Therefore, members who are trained in the incident response process should be kept up to date on the latest threat vectors by giving them access to all threat intelligence collected through any sharing arrangements.

Vulnerability Management

Vulnerability management will be covered in Chapter 5, "Vulnerabilities Associated with Specialized Technology," and Chapter 6, "Threats and Vulnerabilities Associated with Operating in the Cloud," but here it is important to point out that no function depends as heavily on shared intelligence information as vulnerability management. When sharing platforms and protocols are used to identify new threats, this data must be shared in a timely manner with those managing vulnerabilities.

Risk Management

Risk management will be addressed in Chapter 20, "Applying Security Concepts in Support of Organizational Risk Mitigation." It is a formal process that rates identified vulnerabilities by the likelihood of their compromise and the impact of said compromise. Because this process depends on complete and thorough vulnerability identification, speedy sharing of any new threat intelligence is critical to the vulnerability management process on which risk management depends.

Security Engineering

Security engineering is the process of architecting security features into the design of a system or set of systems. Its goal is an emphasis on security from the ground up, sometimes stated as "building in security." Unless the very latest threats are shared with this function, engineers cannot be expected to build in features that prevent those threats from being realized.

Detection and Monitoring

Finally, those who are responsible for monitoring and detecting attacks also benefit greatly from timely sharing of threat intelligence data. Without it, indicators of compromise cannot be developed and utilized to identify new threats in time to stop them from causing breaches.

EXAM PREPARATION TASKS

As mentioned in the section "How to Use This Book" in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, "Final Preparation," and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 2-3 lists these key topics and the page number on which each is found.

Table 2-3 Key Topics in Chapter 2

Key Topic Element | Description | Page Number
Section | Diamond Model | 22
Figure 2-2 | Kill chain | 23
Bulleted list | Example indicators of compromise | 25
Bulleted list | CVSS metric groups | 26
Bulleted list | Base metric group metrics | 26
Bulleted list | CVSS vector readings | 28
Bulleted list | Threat actors | 29
Bulleted list | Example attack vectors | 31
Figure 2-3 | Risk assessment matrix | 32

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

attack frameworks
MITRE ATT&CK
Diamond Model of Intrusion Analysis
adversary capability
infrastructure
victim
kill chain
heuristics
indicator of compromise (IoC)
Common Vulnerability Scoring System (CVSS)
Attack Vector (AV)
Attack Complexity (AC)
Privileges Required (Pr)
Availability (A)
Confidentiality (C)
Integrity (I)
risk management
threat modeling methodology
total attack surface
incident response
threat intelligence
vulnerability management
security engineering

REVIEW QUESTIONS

1. Match each corner of the Diamond Model with its description.

Corner           Descriptions
Adversary        Describes attacker intrusion tools and techniques
Victim           Describes the target or targets
Capability       Describes the set of systems an attacker uses to launch attacks
Infrastructure   Describes the intent of the attack

2. The _______________ corner of the Diamond Model focuses on the intent of the attack.

3. What type of threat data describes a source that repeatedly sends large amounts of traffic to a single IP address?

4. _________________ is any activity, artifact, or log entry that is typically associated with an attack of some sort.

5. Give at least two examples of an IoC.

6. Match each acronym with its description.

Acronym        Description
TLP            System of ranking vulnerabilities that are discovered based on predefined metrics
MITRE ATT&CK   Any activity, artifact, or log entry that is typically associated with an attack of some sort
CVSS           Knowledge base of adversary tactics and techniques based on real-world observations
IoC            Set of designations used to ensure that sensitive information is shared with the appropriate audience

7. In the following CVSS vector, what does the Pr:L designate? CVSS2#AV:L/AC:H/Pr:L/UI:R/S:U/C:L/I:N/A:N

8. The _________________ CVSS metric group describes characteristics of a vulnerability that are constant over time and user environments.

9. The ____________ CVSS base metric describes how the attacker would exploit the vulnerability.

10. Match each CVSS attack vector value with its description.

Value   Description
P       Means the attack requires the attacker to physically touch or manipulate the vulnerable component
L       Means that the attacker can cause the vulnerability from any network
N       Means that the attacker must be on the local network
A       Means that the attacker must have physical or logical access to the affected system

Chapter 3

Vulnerability Management Activities

This chapter covers the following topics related to Objective 1.3 (Given a scenario, perform vulnerability management activities) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Vulnerability identification: Explores asset criticality, active vs. passive scanning, and mapping/enumeration.

Validation: Covers true positive, false positive, true negative, and false negative alerts.

Remediation/mitigation: Describes configuration baseline, patching, hardening, compensating controls, risk acceptance, and verification of mitigation.

Scanning parameters and criteria: Explains risks associated with scanning activities, vulnerability feed, scope, credentialed vs. non-credentialed scans, server-based vs. agent-based scans, internal vs. external scans, and special considerations including types of data, technical constraints, workflow, sensitivity levels, regulatory requirements, segmentation, intrusion prevention system (IPS), intrusion detection system (IDS), and firewall settings.

Inhibitors to remediation: Covers memorandum of understanding (MOU), service-level agreement (SLA), organizational governance, business process interruption, degrading functionality, legacy systems, and proprietary systems.

Managing vulnerabilities requires more than a casual approach. Certain processes and activities should occur to ensure that your management of vulnerabilities is as robust as it can be. This chapter describes the activities that should be performed to manage vulnerabilities.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these five self-assessment questions, you might want to move ahead to the “Exam Preparation Tasks” section. Table 3-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 3-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section           Question
Vulnerability Identification        1
Validation                          2
Remediation/Mitigation              3
Scanning Parameters and Criteria    4
Inhibitors to Remediation           5

1. Which of the following helps to identify the number and type of resources that should be devoted to a security issue?
   1. Specific threats that are applicable to the component
   2. Mitigation strategies that could be used
   3. The relative value of the information that could be discovered
   4. The organizational culture

2. Which of the following occurs when the scanner correctly identifies a vulnerability?
   1. True positive
   2. False positive
   3. False negative
   4. True negative

3. Which of the following is the first step of the patch management process?
   1. Determine the priority of the patches
   2. Install the patches
   3. Test the patches
   4. Ensure that the patches work properly

4. Which of the following is not a risk associated with scanning activities?
   1. False sense of security can be introduced
   2. Does not itself reduce your risk
   3. Only as valid as the latest scanner update
   4. Distracts from day-to-day operations

5. Which of the following is a document that, while not legally binding, indicates a general agreement between the principals to do something together?
   1. SLA
   2. MOU
   3. ICA
   4. SCA

FOUNDATION TOPICS

VULNERABILITY IDENTIFICATION

Vulnerabilities must be identified before they can be mitigated by applying security controls or countermeasures. Vulnerability identification is typically done through a formal process called a vulnerability assessment, which works hand in hand with another process called risk management. The vulnerability assessment identifies and assesses the vulnerabilities, and the risk management process goes a step further and identifies the assets at risk and assigns a risk value (derived from both the impact and likelihood) to each asset. Regardless of the components under study (network, application, database, etc.), any vulnerability assessment’s goal is to highlight issues before someone either purposefully or inadvertently leverages the issue to compromise the component.

The design of the assessment process has a great impact on its success. Before an assessment process is developed, the following goals of the assessment need to be identified:

The relative value of the information that could be discovered through the compromise of the components under assessment: This helps to identify the number and type of resources that should be devoted to the issue.

The specific threats that are applicable to the component: For example, a web application would not be exposed to the same issues as a firewall because their operation and positions in the network differ.

The mitigation strategies that could be deployed to address issues that might be found: Identifying common strategies can suggest issues that weren’t anticipated initially. For example, if you were doing a vulnerability test of your standard network operating system image, you should anticipate issues you might find and identify what technique you will use to address each.

A security analyst who will be performing a vulnerability assessment needs to understand the systems and devices that are on the network and the jobs they perform. Having this knowledge ensures that the analyst can assess the vulnerabilities of the systems and devices based on the known and potential threats to them. After gaining knowledge regarding the systems and devices, a security analyst should examine existing controls in place and identify any threats against those controls. The security analyst then uses all the information gathered to determine which automated tools to use to analyze for vulnerabilities. After the vulnerability analysis is complete, the security analyst should verify the results to ensure that they are accurate and then report the findings to management, with suggestions for remedial action. With this information in hand, the analyst should carry out threat modeling to identify the threats that could negatively affect systems and devices and the attack methods that could be used.

In some situations, a vulnerability management system may be indicated. A vulnerability management system is software that centralizes and, to a certain extent, automates the process of continually monitoring and testing the network for vulnerabilities. Such a system can scan the network for vulnerabilities, report them, and, in many cases, remediate the problem without human intervention. While a vulnerability management system is a valuable tool to have, these systems, regardless of how sophisticated they may be, cannot take the place of vulnerability and penetration testing performed by trained professionals.

Keep in mind that after a vulnerability assessment is complete, its findings are only a snapshot in time. Even if no vulnerabilities are found, the best statement to describe the situation is “there are no known vulnerabilities at this time.” It is impossible to say with certainty that a vulnerability will not be discovered in the near future.

Asset Criticality

Assets should be classified based on their value to the organization and their sensitivity to disclosure. Assigning a value to data and assets enables an organization to determine the resources that should be used to protect them. Resources used to protect data include personnel resources, monetary resources, access control resources, and so on. Classifying assets enables you to apply different protective measures. Asset classification is critical to all systems to protect the confidentiality, integrity, and availability (CIA) of the asset. After assets are classified, they can be segmented based on the level of protection needed. The classification levels ensure that assets are protected in the most cost-effective manner possible. The assets can then be configured to ensure they are isolated or protected based on these classification levels. An organization should determine the classification levels it uses based on its needs. A number of private-sector classifications and military and government information classifications are commonly used. The information life cycle should also be based on the classification of the assets. In the case of data assets, organizations are required to retain certain information, particularly financial data, based on local, state, or federal laws and regulations.

Sensitivity is a measure of how freely data can be handled. Data sensitivity is one factor in determining asset criticality. For example, a particular server stores highly sensitive data and

therefore needs to be identified as a high criticality asset. Some data requires special care and handling, especially when inappropriate handling could result in penalties, identity theft, financial loss, invasion of privacy, or unauthorized access by an individual or many individuals. Some data is also subject to regulation by state or federal laws that require notification in the event of a disclosure. Data is assigned a level of sensitivity based on who should have access to it and how much harm would be done if it were disclosed. This assignment of sensitivity is called data classification. Criticality is a measure of the importance of the data. Data that is considered sensitive might not necessarily be considered critical. Assigning a level of criticality to a particular data set requires considering the answers to a few questions:

Will you be able to recover the data in case of disaster?
How long will it take to recover the data?
What is the effect of this downtime, including loss of public standing?
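The answers to questions like these can be turned into a rough criticality rating. The following is an illustrative sketch only (the function name, rating labels, and hour thresholds are my assumptions, not values from the text), keyed solely to how long the organization can tolerate the data being unavailable:

```python
def rate_criticality(max_tolerable_outage_hours: float) -> str:
    """Map the tolerable recovery window for a data set to a rough
    criticality rating. Thresholds are illustrative assumptions."""
    if max_tolerable_outage_hours < 1:
        return "high"    # business halts almost immediately without the data
    if max_tolerable_outage_hours <= 72:
        return "medium"  # operations continue for a predetermined period
    return "low"         # organization can operate without it for extended periods

# A server whose data must be recoverable within minutes rates as high criticality:
print(rate_criticality(0.25))  # high
```

In practice, downtime tolerance would be only one input; recovery cost and reputational impact would also factor in.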

Data is considered essential when it is critical to the organization’s business. When essential data is not available, even for a brief period of time, or when its integrity is questionable, the organization is unable to function. Data is considered required when it is important to the organization but organizational operations would continue for a predetermined period of time even if the data were not available. Data is nonessential if the organization can operate without it during extended periods of time.

Active vs. Passive Scanning

Network vulnerability scans probe a targeted system or network to identify vulnerabilities. The tools used in this type of scan

contain a database of known vulnerabilities and identify whether a specific vulnerability exists on each device. There are two types of vulnerability scanning:

Passive vulnerability scanning: Passive vulnerability scanning collects information but doesn’t take any action to block an attack. A passive vulnerability scanner (PVS) monitors network traffic at the packet layer to determine topology, services, and vulnerabilities. It avoids the instability that can be introduced to a system by actively scanning for vulnerabilities. PVS tools analyze the packet stream and look for vulnerabilities through direct analysis. They are deployed in much the same way as intrusion detection systems (IDSs) or packet analyzers. A PVS can pick a network session that targets a protected server and monitor it as much as needed. The biggest benefit of a PVS is its capability to do its work without impacting the monitored network. Some examples of PVSs are the Nessus Network Monitor (formerly Tenable PVS) and NetScanTools Pro.

Active vulnerability scanning: Active vulnerability scanning collects information and attempts to block the attack. Whereas passive scanners can only gather information, active vulnerability scanners (AVSs) can take action to block an attack, such as blocking a dangerous IP address. AVSs can also be used to simulate an attack to assess readiness. They operate by sending transmissions to nodes and examining the responses. Because of this, these scanners may disrupt network traffic. Examples include Nessus Professional and OpenVAS.

Regardless of whether it’s active or passive, a vulnerability scanner cannot replace the expertise of trained security personnel. Moreover, these scanners are only as effective as the signature databases on which they depend, so the databases must be updated regularly. Finally, because scanners require bandwidth, they potentially slow the network. For best performance, you can place a vulnerability scanner in a subnet that needs to be protected. You can also connect a scanner through a firewall to multiple subnets; this complicates the configuration and requires opening ports on the firewall, which could be problematic and could impact the performance of the firewall.

Mapping/Enumeration

Vulnerability mapping and enumeration is the process of identifying and listing vulnerabilities. In Chapter 2, “Utilizing Threat Intelligence to Support Organizational Security,” you were introduced to the Common Vulnerability Scoring System (CVSS). A closely related concept is the Common Weakness Enumeration (CWE), a category system for software weaknesses and vulnerabilities. CWE organizes vulnerabilities into over 600 categories, including classes for buffer overflows, path/directory tree traversal errors, race conditions, cross-site scripting, hardcoded passwords, and insecure random numbers. CWE is only one of a number of enumerations used by the Security Content Automation Protocol (SCAP), a standard that the security community uses to enumerate software flaws and configuration issues. SCAP will be covered more fully in Chapter 14, “Automation Concepts and Technologies.”

VALIDATION

Scanning results are not always correct. Scanning tools can make mistakes identifying vulnerabilities. There are four types of results a scanner can deliver:

True positive: Occurs when the scanner correctly identifies a vulnerability. True means the scanner is correct, and positive means it identified a vulnerability.

False positive: Occurs when the scanner identifies a vulnerability that does not exist. False means the scanner is incorrect, and positive means it identified a vulnerability. A large number of false positives reduces confidence in scanning results.

True negative: Occurs when the scanner correctly determines that a vulnerability does not exist. True means the scanner is correct, and negative means it did not identify a vulnerability.

False negative: Occurs when the scanner does not identify a vulnerability that actually exists. False means the scanner is wrong, and negative means it did not find a vulnerability. This is worse than a false positive because it means that a vulnerability exists that you are unaware of.
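The four outcomes form a simple two-by-two classification of what the scanner reported against the ground truth (for example, from manual verification). A minimal sketch (the function name is mine, not from the text):

```python
def classify_finding(scanner_reported: bool, actually_vulnerable: bool) -> str:
    """Classify one scan result against ground truth."""
    if scanner_reported and actually_vulnerable:
        return "true positive"
    if scanner_reported and not actually_vulnerable:
        return "false positive"
    if not scanner_reported and actually_vulnerable:
        return "false negative"  # the dangerous case: a missed vulnerability
    return "true negative"

# A vulnerability exists but the scanner stayed silent:
print(classify_finding(scanner_reported=False, actually_vulnerable=True))  # false negative
```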

REMEDIATION/MITIGATION

When vulnerabilities are identified, security professionals must take steps to address them. One of the outputs of a good risk management process is the prioritization of the vulnerabilities and an assessment of the impact and likelihood of each. Driven by those results, security measures (also called controls or countermeasures) can be put in place to reduce risk. Let’s look at some issues relevant to vulnerability mitigation.

Configuration Baseline

A baseline is a floor, or minimum standard, that is required. Configuration baselines are security settings that are required on devices of various types. These settings should be driven by the results of vulnerability and risk management processes. One practice that can make maintaining security simpler is to create and deploy standard images that have been secured with security baselines. A security baseline is a set of configuration settings that provide a floor of minimum security in the image being deployed. Security baselines can be controlled through the use of Group Policy in Windows. These policy settings can be made in the image and applied to both users and computers. The settings are refreshed periodically through a connection to a domain controller and cannot be altered by the user. It is also quite common for the deployment image to include all of the most current operating system updates and patches as well. This creates consistency across devices and helps prevent security issues caused by human error in configuration. When a network makes use of these types of technologies, the administrators have created a standard operating environment. The advantages of such an environment are more consistent behavior of the network and simpler support issues. Scans of the systems should be performed weekly to detect changes to the baseline.

Security professionals should help guide their organization through the process of establishing security baselines. If an organization implements very strict baselines, it will provide a higher level of security but can actually be too restrictive. If an organization implements a very lax baseline, it will provide a lower level of security and will likely result in security breaches. Security professionals should understand the balance between protecting organizational assets and allowing users access, and should work to ensure that both ends of this spectrum are understood.

Patching

Patch management, or patching, is often seen as a subset of configuration management. Software patches are updates released by vendors that either fix functional issues with or close security loopholes in operating systems, applications, and versions of firmware that run on network devices. To ensure that all devices have the latest patches installed, you should deploy a formal system to ensure that all systems receive the latest updates after thorough testing in a non-production environment. It is impossible for a vendor to anticipate every possible impact a change might have on business-critical

systems in a network. The enterprise is responsible for ensuring that patches do not adversely impact operations. The patch management life cycle includes the following steps:

Step 1. Determine the priority of the patches and schedule them for deployment.

Step 2. Test the patches prior to deployment to ensure that they work properly and do not cause system or security issues.

Step 3. Install the patches in the live environment.

Step 4. After the patches are deployed, ensure that they work properly.

Many organizations deploy a centralized patch management system to ensure that patching is deployed in a timely manner. With such a system, administrators can test and review all patches before deploying them to the systems they affect. Administrators can schedule the updates to occur during non-peak hours.

Hardening

Another ongoing goal of operations security is to ensure that all systems have been hardened to the extent possible while still providing functionality. Hardening can be accomplished on both a physical and a logical basis. From a logical perspective:

Remove unnecessary applications.
Disable unnecessary services.
Block unrequired ports.
Tightly control the connecting of external storage devices and media (if it’s allowed at all).
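Part of a hardening review can be automated by diffing what is actually running against what is approved for the host's role. The following is a hypothetical sketch; the allowlist contents and function name are assumptions for illustration only:

```python
# Services approved for this server role (illustrative allowlist, not a standard).
APPROVED_SERVICES = {"sshd", "nginx", "auditd"}

def unnecessary_services(running_services):
    """Return running services that are not on the approved list
    and are therefore candidates for disabling."""
    return sorted(set(running_services) - APPROVED_SERVICES)

# telnetd and ftpd are flagged for removal; sshd and nginx are approved:
print(unnecessary_services(["sshd", "nginx", "telnetd", "ftpd"]))  # ['ftpd', 'telnetd']
```

The same diff approach applies to open ports and installed applications.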

Compensating Controls

Not all vulnerabilities can be eliminated. In some cases, they can only be mitigated. This can be done by implementing compensating controls (also known as countermeasures or safeguards), which compensate for a vulnerability that cannot be completely eliminated by reducing the potential risk of its being exploited. Three things must be considered when implementing a compensating control: vulnerability, threat, and risk. For example, a good compensating control might be to implement the appropriate access control list (ACL) and encrypt the data. The ACL protects the integrity of the data, and the encryption protects the confidentiality of the data.

Note: For more information on compensating controls, see http://pcidsscompliance.net/overview/what-are-compensating-controls/.

Risk Acceptance

You learned about risk management in Chapter 2. Part of the risk management process is deciding how to address a vulnerability. There are several ways to react. Risk reduction is the process of altering elements of the organization in response to risk analysis. After an organization understands its risk, it must determine how to handle it. The following four basic methods are used to handle risk:

Risk avoidance: Terminating the activity that causes a risk or choosing an alternative that is not as risky

Risk transfer: Passing on the risk to a third party, such as an insurance company

Risk mitigation: Defining the acceptable risk level the organization can tolerate and reducing the risk to that level

Risk acceptance: Understanding and accepting the level of risk as well as the cost of damages that can occur
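The choice among the four methods is often driven by comparing expected loss to the cost of each response. The following is a hedged sketch of one possible decision heuristic; the parameter names, ordering, and cost comparisons are my assumptions, not a prescribed process from the text:

```python
def choose_risk_response(annual_loss_expectancy: float,
                         mitigation_cost: float,
                         transfer_premium: float,
                         risk_tolerance: float) -> str:
    """Pick a risk-handling method by simple cost comparison (illustrative only)."""
    if annual_loss_expectancy <= risk_tolerance:
        return "accept"    # risk is within what the organization tolerates
    if mitigation_cost < annual_loss_expectancy:
        return "mitigate"  # cheaper to reduce the risk than to absorb it
    if transfer_premium < annual_loss_expectancy:
        return "transfer"  # e.g., buy insurance
    return "avoid"         # terminate the risky activity

# A $50k expected loss with a $10k fix available points toward mitigation:
print(choose_risk_response(50_000, 10_000, 20_000, 5_000))  # mitigate
```

Real decisions also weigh qualitative factors (regulation, reputation) that a pure cost comparison ignores.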

Verification of Mitigation

Once a threat has been remediated, you should verify that the mitigation has solved the issue. You should also take steps to ensure that all is back to its normal secure state. These steps validate that you are finished and can move on to taking corrective actions with respect to the lessons learned.

Patching: In many cases, a threat or an attack is made possible by missing security patches. You should update, or at least check for updates for, a variety of components. This includes all patches for the operating system, updates for any applications that are running, and updates to all anti-malware software that is installed. While you are at it, check for any firmware update the device may require. This is especially true of hardware security devices such as firewalls, IDSs, and IPSs. If any routers or switches are compromised, check for software and firmware updates.

Permissions: Many times an attacker compromises a device by altering the permissions, either in the local database or in entries related to the device in the directory service server. All permissions should undergo a review to ensure that all are in the appropriate state. The appropriate state might not be the state they were in before the event. Sometimes you may discover that although permissions were not set in a dangerous way prior to an event, they are not correct. Make sure to check the configuration database to ensure that settings match prescribed settings. You should also make changes to the permissions based on lessons learned during an event. In that case, ensure that the new settings undergo a change control review and that any approved changes are reflected in the configuration database.

Scanning: Even after you have taken all the steps described thus far, consider using a vulnerability scanner to scan the devices or the network of devices that were affected. Make sure before you do so that you have updated the scanner so it can recognize the latest vulnerabilities and threats. This will help catch any lingering vulnerabilities that might still be present.

Verify logging/communication to security monitoring: To ensure that you will have good security data going forward, you need to ensure that all logs related to security are collecting data. Pay special attention to the manner in which the logs react when full. With some settings, the log begins to overwrite older entries with new entries. With other settings, the service stops collecting events when the log is full. Security log entries need to be preserved. This may require manual archiving of the logs and subsequent clearing of the logs. Some logs make this possible automatically, whereas others require a script. If all else fails, check the log often to assess its state. Many organizations send all security logs to a central location. This could be a Syslog server, or it could be a security information and event management (SIEM) system. These systems not only collect all the logs but also use the information to make inferences about possible attacks. Having access to all logs enables the system to correlate all the data from all responding devices. Regardless of whether you are logging to a Syslog server or a SIEM system, you should verify that all communications between the devices and the central server are occurring without a hitch. This is especially true if you had to rebuild the system manually rather than restore from an image, as there would be more opportunity for human error in the rebuilding of the device.

SCANNING PARAMETERS AND CRITERIA

Scanning is the process of using scanning tools to identify security issues. Typical issues discovered include missing patches, weak passwords, and insecure configurations. While types of scanning are covered in Chapter 4, “Analyzing Assessment Output,” let’s look at some issues and considerations supporting the process.

Risks Associated with Scanning Activities

While vulnerability scanning is an advisable and valid process, there are some risks to note:

A false sense of security can be introduced because scans are not error free.
Many tools rely on a database of known vulnerabilities and are only as valid as the latest update.
Identifying vulnerabilities does not in and of itself reduce your risk or improve your security.

Vulnerability Feed

Vulnerability feeds are RSS feeds dedicated to sharing information about the latest vulnerabilities. Subscribing to these feeds can enhance the knowledge of the scanning team and keep the team abreast of the latest issues. For example, the National Vulnerability Database is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP) (covered in Chapter 14).

Scope

The scope of a scan defines what will be scanned and what type of scan will be performed. It defines what areas of the infrastructure will be scanned, and this part of the scope should therefore be driven by where the assets of concern are located. Limiting the scan areas helps ensure that accidental scanning of assets and devices not under the direct control of the organization does not occur (because it could cause legal issues). Scope might also include times of day when scanning should not occur. In the OpenVAS vulnerability scanner, you can set the scope by setting the plug-ins and the targets. Plug-ins define the scans to be performed, and targets specify the machines. Figure 3-1 shows where plug-ins are chosen, and Figure 3-2 shows where the targets are set.

Figure 3-1 Selecting Plug-ins in OpenVAS

Figure 3-2 Selecting Targets in OpenVAS
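Target scoping of the kind configured in a scanner's UI can also be expressed programmatically, which is useful for pre-flight checks before a scan runs. A minimal sketch using Python's standard ipaddress module; the networks and excluded host are made-up example values:

```python
import ipaddress

# In-scope networks and explicitly excluded hosts (illustrative values only).
IN_SCOPE_NETWORKS = [ipaddress.ip_network("10.0.1.0/24"),
                     ipaddress.ip_network("10.0.2.0/24")]
EXCLUDED_HOSTS = {ipaddress.ip_address("10.0.1.5")}  # e.g., a fragile legacy system

def in_scope(host: str) -> bool:
    """Return True if the host may be scanned under the defined scope."""
    addr = ipaddress.ip_address(host)
    if addr in EXCLUDED_HOSTS:
        return False
    return any(addr in net for net in IN_SCOPE_NETWORKS)

print(in_scope("10.0.1.10"), in_scope("10.0.1.5"), in_scope("192.168.0.1"))  # True False False
```

A check like this helps enforce the legal boundary noted above: hosts outside the organization's control simply never make it into the target list.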

Credentialed vs. Non-credentialed

Another decision that needs to be made before performing a vulnerability scan is whether to perform a credentialed or a non-credentialed scan. A credentialed scan is performed by someone with administrative rights to the host being scanned, while a non-credentialed scan is performed by someone lacking these rights. Non-credentialed scans generally run faster and require less setup but do not generate the same quality of information as credentialed scans. This is because credentialed scans can enumerate information from the host itself, whereas non-credentialed scans can only look at ports and can only enumerate software that will respond on a specific port. Credentialed scanning also has the following benefits:

Operations are executed on the host itself rather than across the network.
A more definitive list of missing patches is provided.
Client-side software vulnerabilities are uncovered.
A credentialed scan can read password policies, obtain a list of USB devices, check antivirus software configurations, and even enumerate Bluetooth devices attached to scanned hosts.

Figure 3-3 shows that when you create a new scan policy in Nessus, one of the available steps is to set credentials. Here you can see that Windows credentials are chosen as the type, and the SMB account and password are set.

Figure 3-3 Setting Credentials for a Scan in Nessus

Server-based vs. Agent-based

Vulnerability scanners can use agents that are installed on the devices, or they can be agentless. While many vendors argue that using agents is always best, there are advantages and disadvantages to both, as presented in Table 3-2.

Table 3-2 Server-Based vs. Agent-Based Scanning

Type           Technology        Characteristics
Agent-based    Pull technology   Can get information from disconnected machines or machines in the DMZ; ideal for remote locations that have limited bandwidth; less dependent on network connectivity; based on policies defined on the central console
Server-based   Push technology   Good for networks with plentiful bandwidth; dependent on network connectivity; central authority does all the scanning and deployment

Some scanners can do both agent-based and server-based scanning (also called agentless or sensor-based scanning). For example, Figure 3-4 shows the Nessus templates library with both categories of templates available.

Figure 3-4 Nessus Template Library

Internal vs. External

Scans can be performed from within the network perimeter or from outside the perimeter. This choice has a big effect on the results and their interpretation. Typically the type of scan is driven by what the tester is looking for. If the tester’s area of interest is vulnerabilities that can be leveraged from outside the perimeter to penetrate the perimeter, then an external scan is in order. In this type of scan, either the sensors of the appliance are placed outside the perimeter or, in the case of software running on a device, the device itself is placed outside the perimeter. On the other hand, if the tester’s area of interest is vulnerabilities that exist within the perimeter—that is, vulnerabilities that could be leveraged by outsiders who have penetrated the perimeter or by malicious insiders (your own people)—then an internal scan is indicated. In this case, either the sensors of the appliance are placed inside the perimeter or, in the case of software running on a device, the device itself is placed inside the perimeter.

Special Considerations

Just as the requirements of the vulnerability management program were defined at the beginning of the process, scanning criteria must be settled upon before scanning begins. This ensures that the proper data is generated and that the conditions under which the data will be collected are well understood, resulting in a better understanding of the context in which the data was obtained and better analysis. Some of the criteria that might be considered are described in the following sections.

Types of Data

The types of data with which you are concerned should have an effect on how you run the scan. Many tools offer the capability to focus on certain types of vulnerabilities that relate specifically to certain data types.

Technical Constraints

In some cases the scan will be affected by technical constraints. Perhaps the way in which you have segmented the network caused you to have to run the scan multiple times from various

locations in the network. You will also be limited by the technical capabilities of the scan tool you use.

Workflow

Workflow can also influence the scan. You might be limited to running scans at certain times because scanning negatively affects workflow. While security is important, it isn't helpful if it detracts from the business processes that keep the organization in business.

Sensitivity Levels

Scanning tools have sensitivity level settings that affect both the number of results and the tool's judgment of those results. Most systems assign a default severity level to each vulnerability. In some cases, security analysts may find that certain events the system is tagging as vulnerabilities are not actually vulnerabilities; the system has mischaracterized them. In other cases, an event might be a vulnerability, but the severity level assigned is too extreme or not extreme enough. In either situation, the analyst can dismiss the vulnerability, which means the system stops reporting it, or manually define a more appropriate severity level for the event. Keep in mind that these systems are not perfect.

Sensitivity also refers to how deeply a scan probes each host. Scanning tools have templates that can be used to perform certain types of scans. These are two of the most common templates in use:

Discovery scans: These scans are typically used to create an asset inventory of all hosts and all available services.

Assessment scans: These scans are more comprehensive than discovery scans and can identify misconfigurations, malware, application settings that are against policy, and weak passwords. These scans have a significant impact on the scanned device.
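The dismiss-or-override workflow just described can be sketched in a few lines of code. The finding data and severity values below are hypothetical and do not reflect the schema of any particular scanner:

```python
# Toy triage model: a scanner assigns default severities; the analyst
# can dismiss a mischaracterized finding or override its severity.
# All names and values are illustrative.

findings = [
    {"id": "VULN-1", "title": "Default DB password", "severity": "high"},
    {"id": "VULN-2", "title": "Untrusted SSL certificate", "severity": "medium"},
    {"id": "VULN-3", "title": "Banner version mismatch", "severity": "medium"},
]

dismissed = {"VULN-3"}            # analyst judged this a false positive
overrides = {"VULN-2": "low"}     # analyst lowered the default severity

def triage(findings, dismissed, overrides):
    """Apply analyst decisions: drop dismissed findings, re-rate overridden ones."""
    result = []
    for f in findings:
        if f["id"] in dismissed:
            continue              # the scanner stops reporting dismissed items
        f = dict(f, severity=overrides.get(f["id"], f["severity"]))
        result.append(f)
    return result

for f in triage(findings, dismissed, overrides):
    print(f["id"], f["severity"])
```

The point of the sketch is simply that analyst decisions live alongside, not inside, the scanner's raw output, so the raw results can always be re-examined later.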

Figure 3-5 shows the All Templates page in Nessus, with scanning templates like the ones just discussed.

Figure 3-5 Scanning Templates in Nessus

Regulatory Requirements

Does the organization operate in an industry that is regulated? If so, all regulatory requirements must be recorded, and the vulnerability assessment must be designed to support them. The following are some examples of industries in which security requirements exist:

Finance (for example, banks and brokerages)

Medical (for example, hospitals, clinics, and insurance companies)

Retail (for example, credit card and customer information)

Legislation such as the following can affect organizations operating in these industries:

Sarbanes-Oxley Act (SOX): The Public Company Accounting Reform and Investor Protection Act of 2002, more commonly known as the Sarbanes-Oxley Act (SOX), affects any organization that is publicly traded in the United States. It controls the accounting methods and financial reporting for these organizations and stipulates penalties, and even jail time, for executive officers who fail to comply with its requirements.

Health Insurance Portability and Accountability Act (HIPAA): HIPAA, also known as the Kennedy-Kassebaum Act, affects all healthcare facilities, health insurance companies, and healthcare clearinghouses. It is enforced by the Office for Civil Rights (OCR) of the Department of Health and Human Services (HHS). It provides standards and procedures for storing, using, and transmitting medical information and healthcare data. HIPAA overrides state laws unless the state laws are stricter. This act directly affects the security of protected health information (PHI).

Gramm-Leach-Bliley Act (GLBA) of 1999: The Gramm-Leach-Bliley Act (GLBA) of 1999 affects all financial institutions, including banks, loan companies, insurance companies, investment companies, and credit card providers. It provides guidelines for securing all financial information and prohibits sharing financial information with third parties. This act directly affects the security of personally identifiable information (PII).

Payment Card Industry Data Security Standard (PCI DSS): PCI DSS v3.2.1, released in 2018, is the latest version of the PCI DSS standard as of this writing. It encourages and enhances cardholder data security and facilitates the broad adoption of consistent data security measures globally. Table 3-3 shows a high-level overview of the PCI DSS standard.

Table 3-3 High-Level Overview of PCI DSS

Build and Maintain a Secure Network and Systems
1. Install and maintain a firewall configuration to protect cardholder data
2. Do not use vendor-supplied defaults for system passwords and other security parameters

Protect Cardholder Data
3. Protect stored cardholder data
4. Encrypt transmission of cardholder data across open, public networks

Maintain a Vulnerability Management Program
5. Protect all systems against malware and regularly update antivirus software or programs
6. Develop and maintain secure systems and applications

Implement Strong Access Control Measures
7. Restrict access to cardholder data by business need to know
8. Identify and authenticate access to system components
9. Restrict physical access to cardholder data

Regularly Monitor and Test Networks
10. Track and monitor all access to network resources and cardholder data
11. Regularly test security systems and processes

Maintain an Information Security Policy
12. Maintain a policy that addresses information security for all personnel
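Requirement 2 (do not use vendor-supplied defaults) is the kind of check that lends itself to simple automation. The credential list and account data below are invented purely for illustration:

```python
# Toy check in the spirit of PCI DSS requirement 2: flag accounts that
# still use vendor-supplied default credentials. The "known defaults"
# set and the account records are made up for this example.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "toor"),
    ("sa", ""),
}

accounts = [
    {"host": "db01", "user": "sa", "password": ""},
    {"host": "web01", "user": "deploy", "password": "S3cure!pass"},
]

def find_default_credentials(accounts):
    """Return accounts whose (user, password) pair matches a known default."""
    return [a for a in accounts
            if (a["user"], a["password"]) in KNOWN_DEFAULTS]

for hit in find_default_credentials(accounts):
    print("default credential on", hit["host"], "user", hit["user"])
```

Real scanners ship much larger default-credential databases, but the underlying comparison is this simple.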

Segmentation

Segmentation is the process of dividing a network at either Layer 2 or Layer 3. When VLANs are used, there is segmentation at both Layer 2 and Layer 3; with IP subnets, there is segmentation at Layer 3. Segmentation is usually done for one or both of the following reasons:

To create smaller, less congested subnets

To create security borders

In either case, segmentation can affect how you conduct a vulnerability scan. By segmenting critical assets and resources from less critical systems, you can restrict the scan to the segments of interest, reducing the time required to conduct a scan while also reducing the amount of irrelevant data. This is not to suggest that you should not scan the less critical parts of the network; it's just that you can adopt a less aggressive schedule for those scans.

Intrusion Prevention System (IPS), Intrusion Detection System (IDS), and Firewall Settings

The settings that exist on security devices will affect the scan and in many cases are the source of a technical constraint, as mentioned earlier. Scans might be restricted by firewall settings, and the scan can cause alerts to be generated by your intrusion devices. Let's talk a bit more about these devices. Vulnerability scanners are not the only tools used to identify vulnerabilities. The following systems should also be implemented as part of a comprehensive solution.

IDS/IPS

While you can use packet analyzers to manually monitor the network for issues during environmental reconnaissance, a less labor-intensive and more efficient way to detect issues is through the use of intrusion detection systems (IDSs) and intrusion prevention systems (IPSs). An IDS is responsible for detecting unauthorized access or attacks against systems and networks. It can verify, itemize, and characterize threats from outside and inside the network. Most IDSs are programmed to react in certain ways in specific situations. Event notification and alerts are crucial to an IDS. They inform administrators and security professionals when and where attacks are detected. IDS implementations are further divided into the following categories:

Signature based: This type of IDS analyzes traffic and compares it to attack or state patterns, called signatures, that reside within the IDS database. A signature-based IDS is also referred to as a misuse-detection system. Although this type of IDS is very popular, it can only recognize attacks that appear in its database and is only as effective as the signatures provided. Frequent database updates are necessary. There are two main types of signature-based IDSs:

Pattern matching: The IDS compares traffic to a database of attack patterns. The IDS carries out specific steps when it detects traffic that matches an attack pattern.

Stateful matching: The IDS records the initial operating system state. Any changes to the system state that specifically violate the defined rules result in an alert or notification being sent.

Anomaly based: This type of IDS analyzes traffic and compares it to normal traffic to determine whether that traffic is a threat. It is also referred to as a behavior-based, or profile-based, system. The problem with this type of system is that any traffic outside expected norms is reported, resulting in more false positives than you see with signature-based systems. There are three main types of anomaly-based IDSs:

Statistical anomaly based: The IDS samples the live environment to record activities. The longer the IDS is in operation, the more accurate the profile that is built. However, developing a profile that does not produce a large number of false positives can be difficult and time-consuming. Thresholds for activity deviations are important in this IDS. Too low a threshold results in false positives, whereas too high a threshold results in false negatives.

Protocol anomaly based: The IDS has knowledge of the protocols it will monitor. A profile of normal usage is built and compared to the observed activity.

Traffic anomaly based: The IDS tracks traffic pattern changes. All future traffic patterns are compared to the sample. Changing the threshold reduces the number of false positives or negatives. This type of filter is excellent for detecting unknown attacks, but user activity might not be static enough to effectively implement this system.

Rule or heuristic based: This type of IDS is an expert system that uses a knowledge base, an inference engine, and rule-based programming. The knowledge is configured as rules. The data and traffic are analyzed, and the rules are applied to the analyzed traffic. The inference engine uses its intelligent software to "learn." When the characteristics of an attack are met, they trigger alerts or notifications. This is often referred to as an IF/THEN, or expert, system.
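The contrast between signature-based and statistical anomaly-based detection can be shown with a deliberately tiny sketch. The "signatures" and traffic numbers here are invented for illustration, not drawn from any real IDS:

```python
import statistics

# Toy signature-based check: flag any event containing a known pattern.
SIGNATURES = ["' OR 1=1", "../../etc/passwd"]   # made-up signature database

def signature_match(event: str) -> bool:
    return any(sig in event for sig in SIGNATURES)

# Toy statistical anomaly check: flag values far from a learned baseline.
# As the text notes, a low threshold raises false positives and a high
# threshold raises false negatives.
def anomaly(samples, value, threshold=3.0):
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return abs(value - mean) > threshold * stdev

baseline = [100, 104, 98, 101, 99, 102]   # e.g., requests per minute

print(signature_match("GET /login?user=' OR 1=1"))   # True: matches a signature
print(anomaly(baseline, 500))                        # True: far outside baseline
print(anomaly(baseline, 103))                        # False: within normal range
```

Note the asymmetry the chapter describes: the signature check can never flag an attack absent from its database, while the anomaly check can flag novel attacks but will also flag any unusual legitimate activity.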

An application-based IDS is a specialized IDS that analyzes transaction log files for a single application. This type of IDS is usually provided as part of an application or can be purchased as an add-on.

An IPS is a system responsible for preventing attacks. When an attack begins, an IPS takes actions to contain it. An IPS, like an IDS, can be network or host based. Although an IPS can be signature or anomaly based, it can also use a rate-based metric that analyzes both the volume and the type of traffic. In most cases, implementing an IPS is more costly than implementing an IDS because of the added capability needed to contain attacks compared to the capability needed to simply detect them. In addition, running an IPS imposes a greater overall performance load than running an IDS.

HIDS/NIDS

The most common way to classify an IDS is based on its information source: network based or host based. The most common IDS, the network-based IDS (NIDS), monitors network traffic on a local network segment. To monitor traffic on the network segment, the network interface card (NIC) must be operating in promiscuous mode, a mode in which the NIC processes all traffic, not just the traffic directed to the host. A NIDS can monitor only network traffic. It cannot monitor any internal activity that occurs within a system, such as an attack against a system that is carried out by logging on to the system's local terminal. A NIDS is affected by a switched network because a NIDS generally monitors only a single network segment.

A host-based IDS (HIDS) is an IDS that is installed on a single host and protects only that host.

Firewall

The network device perhaps most closely associated with the idea of security is the firewall. Firewalls can be software programs that are installed over server operating systems, or they can be appliances that have their own operating system. In either case, the job of a firewall is to inspect and control the type of traffic allowed. Firewalls can be discussed on the basis of their type and their architecture. They can also be physical devices or exist in a virtualized environment. The following sections look at them from all these angles.

Firewall Types

When we discuss types of firewalls, we are focusing on the differences in the way they operate. Some firewalls make a more thorough inspection of traffic than others. Usually there is a trade-off between the performance of the firewall and the depth of inspection it performs. A deep inspection of the contents of each packet results in the firewall having a detrimental effect on throughput, whereas a more cursory look at each packet has less of an impact on performance. It is therefore important to carefully select what traffic to inspect, keeping this trade-off in mind.

Packet-filtering firewalls are the least detrimental to throughput because they inspect only the header of a packet for allowed IP addresses or port numbers. Although even performing this function slows traffic, it involves only looking at the beginning of the packet and making a quick allow or disallow decision. Although packet-filtering firewalls serve an important function, they cannot prevent many attack types. They cannot prevent IP spoofing, attacks that are specific to an application, attacks that depend on packet fragmentation, or attacks that take advantage of the TCP handshake. More advanced inspection firewall types are required to stop these attacks.

Stateful firewalls are aware of the proper functioning of the TCP handshake, keep track of the state of all connections with respect to this process, and can recognize when packets trying to enter the network don't make sense in the context of the TCP handshake. For example, a packet should never arrive at a firewall for delivery with both the SYN flag and the ACK flag set unless it is part of an existing handshake process and is in response to a packet sent from inside the network with the SYN flag set. This is the type of packet that a stateful firewall would disallow. A stateful firewall can also recognize other attack types that attempt to misuse this process. It does this by maintaining a state table about all current connections and the status of each connection process, which allows it to recognize any traffic that doesn't make sense given the current state of the connection. Of course, maintaining and referencing this table cause this firewall type to have a greater effect on performance than a packet-filtering firewall.

Proxy firewalls actually stand between each connection from the outside to the inside and make the connection on behalf of the

endpoints. Therefore, there is no direct connection; the proxy firewall acts as a relay between the two endpoints. Proxy firewalls can operate at two different layers of the OSI model.

Circuit-level proxies operate at the session layer (Layer 5) of the OSI model. They make decisions based on the protocol header and session layer information. Because they do not perform deep packet inspection (at Layer 7, the application layer), they are considered application independent and can be used for a wide range of Layer 7 protocol types. A SOCKS firewall is an example of a circuit-level proxy firewall. It requires a SOCKS client on the computers, and many vendors have integrated their software with SOCKS to make using this type of firewall easier.

Application-level proxies perform deep packet inspection. This type of firewall understands the details of the communication process at Layer 7 for the application of interest. An application-level firewall maintains a different proxy function for each protocol. For example, for HTTP, the proxy can read and filter traffic based on specific HTTP commands. Operating at this layer requires each packet to be completely opened and closed, so this type of firewall has the greatest impact on performance.

Dynamic packet filtering does not describe a type of firewall; rather, it describes functionality that a firewall might or might not possess. When an internal computer attempts to establish a session with a remote computer, it places both a source and a destination port number in the packet. For example, if the computer is making a request of a web server, the destination is port 80 because HTTP uses port 80. The source computer selects the source port at random from the numbers available above the well-known port numbers (that is, above 1023). Because predicting what that random number will be is impossible, creating a static firewall rule that anticipates and allows return traffic through the firewall on that port is impossible.

A dynamic packet-filtering firewall solves this problem by keeping track of that source port and dynamically adding a rule to the list to allow return traffic to that port.

A kernel proxy firewall is an example of a fifth-generation firewall. It inspects a packet at every layer of the OSI model but does not introduce the same performance hit as an application-level firewall because it does this work at the kernel layer. It also follows the proxy model in that it stands between the two systems and creates connections on their behalf.

Firewall Architecture

Whereas the type of a firewall speaks to its internal operation, the architecture refers to the way in which the firewall or firewalls are deployed in the network to form a system of protection. This section looks at the various ways firewalls can be deployed and the names of these various configurations.

A bastion host might or might not be a firewall. The term actually refers to the position of a device: if it is exposed directly to the Internet or to any untrusted network, it is called a bastion host. Whether it is a firewall, a DNS server, or a web server, all standard hardening procedures become even more important for these exposed devices. Any unnecessary services should be stopped, all unneeded ports should be closed, and all security patches must be up to date. These procedures are referred to as "reducing the attack surface."

A dual-homed firewall is a firewall that has two network interfaces: one pointing to the internal network and another connected to the untrusted network. In many cases, routing between these interfaces is turned off. The firewall software allows or denies traffic between the two interfaces based on the firewall rules configured by the administrator. The danger of relying on a single dual-homed firewall is that it provides a single point of failure. If this device is compromised, the network is compromised as well. If it suffers a denial-of-service (DoS) attack, no traffic can pass. Neither of these is a good situation.

In some cases, a firewall may be multihomed. One popular type is the three-legged firewall. This configuration has three interfaces: one connected to the untrusted network, one to the internal network, and the last to a part of the network called a demilitarized zone (DMZ). A DMZ is a portion of the network hosting systems that will be accessed regularly from an untrusted network, such as web servers or an e-mail server. The firewall can be configured to control the traffic that flows among the three networks, but it is important to be somewhat careful with traffic destined for the DMZ and to treat traffic to the internal network with much more suspicion.

Although the firewalls discussed thus far typically connect at least one interface directly to an untrusted network, a screened host is a firewall that sits between the final router and the internal network. When traffic comes into the router and is forwarded to the firewall, it is inspected before going into the internal network. A screened subnet takes this concept a step further: two firewalls are used, and traffic must be inspected at both firewalls to enter the internal network. It is called a screened subnet because there is a subnet between the two firewalls that can act as a DMZ for resources from the outside world.

In the real world, these various firewall approaches are mixed and matched to meet requirements, so you might find elements of all these architectural concepts applied to a specific situation.
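The stateful inspection rule described earlier (drop a SYN+ACK that answers no connection the inside initiated) reduces to a small sketch. The state table and flag model are deliberately simplified and are not a real firewall implementation:

```python
# Simplified stateful firewall check. Real firewalls track far more
# (sequence numbers, timeouts, connection teardown); this models only
# the rule that an inbound SYN+ACK is valid solely as a reply to an
# outbound SYN recorded in the state table.

state_table = set()   # connections initiated from inside: (src, dst) pairs

def outbound_syn(src, dst):
    """Record that an inside host sent a SYN to dst."""
    state_table.add((src, dst))

def allow_inbound(packet):
    """Allow an inbound SYN+ACK only if it answers a tracked connection."""
    if packet["flags"] == {"SYN", "ACK"}:
        return (packet["dst"], packet["src"]) in state_table
    return False      # other flag combinations need their own rules (not modeled)

outbound_syn("10.0.0.5", "203.0.113.7")
print(allow_inbound({"src": "203.0.113.7", "dst": "10.0.0.5",
                     "flags": {"SYN", "ACK"}}))   # True: reply to our SYN
print(allow_inbound({"src": "198.51.100.9", "dst": "10.0.0.5",
                     "flags": {"SYN", "ACK"}}))   # False: unsolicited
```

This is also why a vulnerability scan run from outside a stateful firewall sees so little: unsolicited probes simply never match an entry in the state table.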

INHIBITORS TO REMEDIATION

In some cases, there may be issues that make implementing a particular solution inadvisable or impossible. Some of these inhibitors to remediation are as follows:

Memorandum of understanding (MOU): An MOU is a document that, while not legally binding, indicates a general agreement between the principals to do something together. An organization may have MOUs with multiple organizations, and MOUs may in some instances contain security requirements that inhibit or prevent the deployment of certain measures.

Service-level agreement (SLA): An SLA is a document that specifies a service to be provided by a party, the costs of the service, and the expectations of performance. These contracts may exist with third parties from outside the organization and between departments within an organization. Sometimes these SLAs include specifications that inhibit or prevent the deployment of certain measures.

Organizational governance: Organizational governance refers to the process of controlling an organization's activities, processes, and operations. When the process is unwieldy, as it is in some very large organizations, the application of countermeasures may be frustratingly slow. One of the reasons for including upper management in the entire process is to use the weight of authority to cut through the red tape.

Business process interruption: The deployment of mitigations cannot be done in such a way that business operations and processes are interrupted. Therefore, the need to conduct these activities during off-hours can also be a factor that impedes the remediation of vulnerabilities.

Degrading functionality: Some solutions create more issues than they resolve. In some cases, it may be impossible to implement a mitigation because it would break mission-critical applications or processes. The organization may need to research an alternative solution.

Legacy systems: Legacy systems are those that are older and may be less secure than newer systems. Some of these older systems are no longer supported and are not receiving updates. In many cases, organizations have legacy systems performing critical operations, and the enterprise cannot upgrade those systems for one reason or another. It could be that the current system cannot be upgraded because doing so would be disruptive to sales or marketing. Sometimes politics prevents these upgrades. In some cases, the money is just not there for the upgrade. Whatever the reason, the inability to upgrade is an inhibitor to remediation.

Proprietary systems: In some cases, solutions have been developed by the organization that do not follow standards and are proprietary in nature. In this case, the organization is responsible for updating the systems to address security issues, and many times this does not occur. For these types of systems, the upgrade path is even more difficult because performing the upgrade is not simply a matter of paying for and applying an upgrade. The work must be done by the programmers in the organization that developed the solution (if they are still around). Obviously, the inability to upgrade is an inhibitor to remediation.

EXAM PREPARATION TASKS

As mentioned in the section "How to Use This Book" in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, "Final Preparation," and the exam simulation questions in the Pearson Test Prep practice test software.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 3-4 provides a reference of these key topics and the page numbers on which each is found.

Table 3-4 Key Topics in Chapter 3

Key Topic Element | Description | Page Number

Bulleted list | Goals of the vulnerability assessment | 41
Bulleted list | Assigning a level of criticality to a particular data set | 43
Bulleted list | Active vs. passive scanning | 43
Bulleted list | Scanner result types | 44
Step list | Patch management life cycle | 46
Bulleted list | System hardening examples | 46
Bulleted list | Methods used to handle risk | 47
Bulleted list | Threat mitigation validation steps | 48
Bulleted list | Scanning risks | 49
Bulleted list | Benefits of credentialed scans | 51
Table 3-2 | Server-Based vs. Agent-Based Scanning | 52
Bulleted list | Security legislation | 55
Table 3-3 | High-Level Overview of PCI DSS | 56
Bulleted list | Categories of IDSs | 57

DEFINE KEY TERMS

asset criticality
passive vulnerability scanning
active vulnerability scanning
enumeration
true positive
false positive
true negative
false negative
configuration baseline
patching
hardening
compensating controls
risk acceptance
vulnerability feed
scope
credentialed scan
non-credentialed scan
external scan
internal scan
memorandum of understanding (MOU)
service-level agreement (SLA)
legacy systems
proprietary systems

REVIEW QUESTIONS

1. ____________________ describes the relative value of an asset to the organization.

2. List at least one question that should be raised to determine asset criticality.

3. Nessus Network Monitor is an example of a(n) _____________ scanner.

4. Match the following terms with their definition.

Terms | Definitions

False positive | Occurs when the scanner does not identify a vulnerability that exists.
True positive | Occurs when the scanner correctly determines that a vulnerability does not exist.
False negative | Occurs when the scanner correctly identifies a vulnerability.
True negative | Occurs when the scanner identifies a vulnerability that does not exist.

5. ____________________ are security settings that are required on devices of various types.

6. Place the following patch management life cycle steps in order.

Install the patches in the live environment.
Determine the priority of the patches and schedule the patches for deployment.
Ensure that the patches work properly.
Test the patches.

7. When you are encrypting sensitive data, you are implementing a(n) _________________.

8. List at least two logical hardening techniques.

9. Match the following risk-handling techniques with their definitions.

Method | Definition

Risk transfer | Understanding and accepting the level of risk as well as the cost of damages that can occur
Risk mitigation | Terminating the activity that causes a risk or choosing an alternative that is not as risky
Risk avoidance | Passing on the risk to a third party, such as an insurance company
Risk acceptance | Defining the acceptable risk level the organization can tolerate and reducing the risk to that level

10. List at least one risk to scanning.

Chapter 4

Analyzing Assessment Output

This chapter covers the following topics related to Objective 1.4 (Given a scenario, analyze the output from common vulnerability assessment tools) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Web application scanner: Covers the OWASP Zed Attack Proxy (ZAP), Burp Suite, Nikto, and Arachni scanners.

Infrastructure vulnerability scanner: Covers the Nessus, OpenVAS, and Qualys scanners.

Software assessment tools and techniques: Explains static analysis, dynamic analysis, reverse engineering, and fuzzing.

Enumeration: Describes Nmap, hping, active vs. passive enumeration, and Responder.

Wireless assessment tools: Covers Aircrack-ng, Reaver, and oclHashcat.

Cloud infrastructure assessment tools: Covers ScoutSuite, Prowler, and Pacu.

When assessments are performed, the data gathered must be analyzed. The format of the output generated by the various vulnerability assessment tools may be intuitive, but in many cases it is not. Analysts must be able to read and correctly interpret the output to identify issues that may exist. This chapter is dedicated to analyzing vulnerability assessment output.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these six self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 4-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 4-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question

Web Application Scanner | 1
Infrastructure Vulnerability Scanner | 2
Software Assessment Tools and Techniques | 3
Enumeration | 4
Wireless Assessment Tools | 5
Cloud Infrastructure Assessment Tools | 6

1. Which of the following is a type of proactive monitoring and uses external agents to run scripted transactions against an application?

1. RUM
2. Synthetic transaction monitoring
3. Reverse engineering
4. OWASP

2. Which of the following is an example of a cloud-based vulnerability scanner?

1. OpenVAS
2. Qualys
3. Nikto
4. Nessus

3. Which step in the software development life cycle (SDLC) follows the design step?

1. Gather requirements
2. Certify/accredit
3. Develop
4. Test/validate

4. Which of the following is the process of discovering and listing information?

1. Escalation
2. Discovery
3. Enumeration
4. Penetration

5. Which of the following is a set of command-line tools you can use to sniff WLAN traffic?

1. hping3
2. Aircrack-ng
3. Qualys
4. Reaver

6. Which of the following is a data collection tool that allows you to use longitudinal survey panels to track and monitor the cloud environment?

1. Prowler
2. ScoutSuite
3. Pacu
4. Mikto

FOUNDATION TOPICS

WEB APPLICATION SCANNER

Web vulnerability scanners focus on discovering vulnerabilities in web applications. These tools can operate in two ways: synthetic transaction monitoring or real user monitoring. In synthetic transaction monitoring, preformed (synthetic) transactions are executed against the web application in an automated fashion, and the behavior of the application is recorded. In real user monitoring, real user transactions are monitored while the web application is live.

Synthetic transaction monitoring, which is a type of proactive monitoring, uses external agents to run scripted transactions against a web application. This type of monitoring is often preferred for websites and applications. It provides insight into the application's availability and performance and warns of any potential issue before users experience any degradation in application behavior. For example, Microsoft's System Center Operations Manager (SCOM) uses synthetic transactions to monitor databases, websites, and TCP port usage.

In contrast, real user monitoring (RUM), which is a type of passive monitoring, captures and analyzes every transaction of

every web application or website user. Unlike synthetic transaction monitoring, which attempts to gain performance insights by regularly testing synthetic interactions, RUM cuts through the guesswork by seeing exactly how users are interacting with the application.

Many web application scanners are available. These tools scan an application for common security issues with cookie management, PHP scripts, SQL injection, and other problems. Some examples of these tools are covered in this section.

Burp Suite

Burp Suite is a suite of tools, one of which can be used for testing web applications. It can scan an application for vulnerabilities and can also be used to crawl an application (to discover content). This commercial software is available for Windows, Linux, and macOS. It can also be used for exploiting vulnerabilities. For more information, see https://portswigger.net/burp.

OWASP Zed Attack Proxy (ZAP)

The Open Web Application Security Project (OWASP) produces an interception proxy called OWASP Zed Attack Proxy (ZAP). It performs many of the same functions as Burp, so it also falls into the exploit category. It can monitor the traffic between a client and a server, crawl the application for content, and perform vulnerability scans. For more information, see https://owasp.org/www-project-zap/.

Nikto

Nikto is a vulnerability scanner that is dedicated to web servers. It is designed for Linux but can be run in Windows through a Perl interpreter. This tool is not stealthy, but it is a fast scanner; everything it does is recorded in your logs. It generates a lot of information, much of it normal or

informational. It is a command-line tool that is often run from within a Kali Linux server and preinstalled with more than 300 penetration-testing programs. For more information, see https://tools.kali.org/information-gathering/nikto. Arachni Arachni is a Ruby framework for assessing the security of a web application. It is often used by penetration testers. It is open source, works with all major operating systems (Windows, macOS, and Linux), and is distributed via portable packages that allow for instant deployment. Arachni can be used either at the command line or via the web interface, shown in Figure 4-1.

Figure 4-1 Arachni

INFRASTRUCTURE VULNERABILITY SCANNER

An infrastructure vulnerability scanner probes for a variety of security weaknesses, including misconfigurations, out-of-date software, missing patches, and open ports. These solutions can be on premises or cloud based. Infrastructure vulnerability scanners are covered in detail in Chapter 18.

Nessus One of the most widely used vulnerability scanners is Nessus Professional, a proprietary tool developed by Tenable Network Security. It is free of charge for personal use in a nonenterprise environment. By default, Nessus Professional starts by listing at the top of the output the issues found on a host that are rated with the highest severity, as shown in Figure 4-2.

Figure 4-2 Example Nessus Output

For the computer scanned in Figure 4-2, you can see that there is one high-severity issue (the default password for a Firebird database located on the host), and there are five medium-level issues, including two SSL certificates that cannot be trusted and a remote desktop man-in-the-middle attack vulnerability. For more information, see https://www.tenable.com/products/nessus.

OpenVAS

As you might suspect from the name, the OpenVAS tool is open source. It was developed from the Nessus code base and is available as a package for many Linux distributions. The scanner is accompanied by a regularly updated feed of network vulnerability tests (NVTs). It uses the Greenbone console, shown in Figure 4-3. For more information, see https://www.openvas.org/.

Figure 4-3 OpenVAS

SOFTWARE ASSESSMENT TOOLS AND TECHNIQUES

Many organizations create software either for customers or for their own internal use. When software is developed, the earlier in the process security is considered, the less it will cost to secure the software. It is best for software to be secure by design. Secure coding standards are practices that, if followed throughout the software development life cycle (SDLC), help to reduce the attack surface of an application. In Chapter 9, "Software Assurance Best Practices," you will learn about the SDLC, a set of ordered steps to help ensure that software is developed to enhance both security and functionality. As a quick preview, the SDLC steps are listed here:

Step 1. Plan/initiate project

Step 2. Gather requirements

Step 3. Design

Step 4. Develop

Step 5. Test/validate

Step 6. Release/maintain

Step 7. Certify/accredit

Step 8. Perform change management and configuration management/replacement

This section concentrates on Steps 5 and 7, where testing of the software occurs. This testing is covered in this chapter because it is a part of vulnerability management. This testing or validation can take many forms.

Static Analysis

Static code analysis is performed without the code executing. Code review and testing must occur throughout the entire SDLC and must identify bad programming patterns, security misconfigurations, functional bugs, and logic flaws. In the planning and design phases, code review and testing include architecture security reviews and threat modeling. In the development phase, they include static source code analysis with manual code review and static binary code analysis with manual binary review. Once an application is deployed, code review and testing involve penetration testing, vulnerability scanning, and fuzz testing. Static code review can be done with scanning tools that look for common issues. These tools can use a variety of approaches to find bugs, including the following:

Data flow analysis: This analysis looks at runtime information while the software is in a static state.

Control flow graph: A graph of the components and their relationships can be developed and used for testing by focusing on the entry and exit points of each component or module.

Taint analysis: This analysis attempts to identify variables that are tainted with user-controllable input.

Lexical analysis: This analysis converts source code into tokens of information to abstract the code and make it easier to manipulate for testing purposes.
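As an illustration of the lexical-analysis approach, Python's standard tokenize module converts source text into a token stream. This is a minimal sketch of tokenization only, not a full static analyzer, and the sample source fragment is invented for the example:

```python
import io
import tokenize

# A small code fragment to analyze; in practice this would be read from a file.
source = "def add(a, b):\n    return a + b\n"

# tokenize consumes a readline callable and yields (type, string, ...) tuples.
tokens = [
    (tokenize.tok_name[tok.type], tok.string)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
]

# The token stream abstracts away layout, making patterns easier to match.
for name, text in tokens[:6]:
    print(name, repr(text))
```

A real static analyzer builds data flow and control flow models on top of a token stream like this one.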

Code review is the systematic investigation of the code for security and functional problems. It can take many forms, from simple peer review to formal code review. There are two main types of reviews:

Formal review: This is an extremely thorough, line-by-line inspection, usually performed by multiple participants using multiple phases. This is the most time-consuming type of code review but the most effective at finding defects.

Lightweight review: This type of code review is much more cursory than a formal review. It is usually done as a normal part of the development process. It can happen in several forms:

Pair programming: Two coders work side by side, checking one another's work as they go.

Email: Code is emailed around to colleagues for them to review when time permits.

Over the shoulder: Coworkers review the code while the author explains his or her reasoning.

Tool-assisted: Perhaps the most efficient method, this method uses automated testing tools.

While code review is most typically performed on in-house applications, it may be warranted in other scenarios as well. For

example, say that you are contracting with a third party to develop a web application to process credit cards. Considering the sensitive nature of the application, it would not be unusual for you to request your own code review to assess the security of the product. In many cases, more than one tool should be used in testing an application. For example, an online banking application that has had its source code updated should undergo both penetration testing with accounts of varying privilege levels and a code review of the critical modules to ensure that no defects exist there.

Dynamic Analysis

Dynamic analysis is testing performed while the software is running. This testing can be performed manually or by using automated testing tools. There are two general approaches to dynamic testing:

Synthetic transaction monitoring: A type of proactive monitoring, often preferred for websites and applications. It provides insight into the application's availability and performance, warning of any potential issue before users experience any degradation in application behavior. It uses external agents to run scripted transactions against an application. For example, Microsoft's System Center Operations Manager (SCOM) uses synthetic transactions to monitor databases, websites, and TCP port usage.

Real user monitoring (RUM): A type of passive monitoring that captures and analyzes every transaction of every application or website user. Unlike synthetic monitoring, which attempts to gain performance insights by regularly testing synthetic interactions, RUM cuts through the guesswork by analyzing exactly how your users are interacting with the application.
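The synthetic-transaction idea can be sketched in a few lines of Python. This is a toy model, not part of SCOM or any product named above; the fetch callable, the latency threshold, and the result fields are all illustrative assumptions:

```python
import time

def run_synthetic_check(fetch, max_latency_s=2.0):
    """Run one scripted (synthetic) transaction and judge the result.

    fetch is any zero-argument callable that performs the scripted
    transaction (for example, an HTTP GET of a login page) and
    returns True on functional success.
    """
    start = time.monotonic()
    try:
        ok = fetch()
    except Exception:
        # The transaction failed outright: the application is unavailable.
        return {"available": False, "latency_s": None, "degraded": True}
    latency = time.monotonic() - start
    return {
        "available": bool(ok),
        "latency_s": latency,
        # Flag slow responses before real users notice degradation.
        "degraded": latency > max_latency_s,
    }

# Example with a stand-in transaction; a real agent would exercise the app.
result = run_synthetic_check(lambda: True)
print(result["available"], result["degraded"])
```

A monitoring agent would run a check like this on a schedule and alert when availability or latency crosses a threshold.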

Reverse Engineering

In 1990, the Institute of Electrical and Electronics Engineers (IEEE) defined reverse engineering as "the process of analyzing a subject system to identify the system's components and their interrelationships, and to create representations of the system in another form or at a higher level of abstraction," where the "subject system" is the end product of software development. Reverse engineering techniques can be applied in several areas, including the study of the security of in-house software. In Chapter 16, "Applying the Appropriate Incident Response Procedure," you'll learn how reverse engineering is applied to the incident response procedure. In Chapter 12, "Implementing Configuration Changes to Existing Controls to Improve Security," you'll learn how reverse engineering applies to the malware analysis process. The techniques you will learn about in those chapters can also be used to locate security issues with in-house software.

Fuzzing

Fuzz testing, or fuzzing, involves injecting invalid or unexpected input (sometimes called faults) into an application to test how the application reacts. It is usually done with a software tool that automates the process. Inputs can include environment variables, keyboard and mouse events, and sequences of API calls. Figure 4-4 shows the logic of the fuzzing process.

Figure 4-4 Fuzz Testing

Two types of fuzzing can be used to identify susceptibility to a fault injection attack:

Mutation fuzzing: Involves changing the existing input values (blindly)

Generation-based fuzzing: Involves generating the inputs from scratch, based on the specification/format

The following measures can help prevent fault injection attacks:

Implement fuzz testing to help identify problems.

Adhere to safe coding and project management practices.

Deploy application-level firewalls.
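A minimal mutation fuzzer can be sketched as follows. Both the mutate routine and the toy parse_record target are hypothetical examples, not any real fuzzing tool; a real fuzzer would also monitor the target for crashes and save the offending inputs for triage:

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Mutation fuzzing: blindly corrupt a few random bytes of a seed input."""
    buf = bytearray(data)
    for _ in range(min(n_flips, len(buf))):
        i = random.randrange(len(buf))
        buf[i] ^= random.randrange(1, 256)   # flip some bits in one byte
    return bytes(buf)

def parse_record(data: bytes) -> int:
    """Toy target: expects b'LEN:' followed by ASCII digits."""
    if not data.startswith(b"LEN:"):
        raise ValueError("bad header")
    return int(data[4:])

random.seed(1)                     # deterministic run for the example
seed_input = b"LEN:12345"          # a known-good input to mutate
faults = 0
for _ in range(200):
    try:
        parse_record(mutate(seed_input))
    except ValueError:
        faults += 1                # fault observed; a real fuzzer saves the input
print("faults observed:", faults)
```

Generation-based fuzzing would instead build inputs directly from the record format specification rather than mutating a seed.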

ENUMERATION

Enumeration is the process of discovering and listing information. Network enumeration is the process of discovering pieces of information that might be helpful in a network attack or compromise. There are several techniques used to perform enumeration and several tools that make the process easier for both testers and attackers. Let's take a look at these techniques and tools.

Nmap

While network scanning can be done with more blunt tools, like ping, Nmap is stealthier and may be able to perform its activities without setting off firewalls and IDSs. It is valuable to note that while we are discussing Nmap in the context of network scanning, this tool can be used for many other operations, including performing certain attacks. When used for

scanning, it typically locates the devices, locates the open ports on the devices, and determines the OS on each host. After performing Nmap scans with certain flags set in the scan packets, security analysts (and hackers) can make certain assumptions based on the responses received. These flags are used to control the TCP connection process and so are present only in TCP packets. Figure 4-5 shows a TCP header with the important flags circled. Normally, flags are "turned on" as a result of the normal TCP process, but a hacker can craft packets to set the flags he or she wants to test.

Figure 4-5 TCP Header

Figure 4-5 shows these flags, among others:

URG: Urgent pointer field significant

ACK: Acknowledgment field significant

PSH: Push function

RST: Reset the connection

SYN: Synchronize sequence numbers

FIN: No more data from sender
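These flags occupy individual bits of the flag byte in the TCP header (per RFC 793), which is why crafted-packet tools can simply set or clear them. A short sketch of decoding that byte; the sample byte values are illustrative:

```python
# Flag bit positions within the TCP header's flag byte (RFC 793).
FLAGS = {"URG": 0x20, "ACK": 0x10, "PSH": 0x08,
         "RST": 0x04, "SYN": 0x02, "FIN": 0x01}

def decode_flags(flag_byte: int) -> list:
    """Return the names of the flags set in a TCP header's flag byte."""
    return [name for name, bit in FLAGS.items() if flag_byte & bit]

# A normal handshake-opening packet carries only SYN (0x02)...
print(decode_flags(0x02))    # prints ['SYN']

# ...while an XMAS scan packet sets FIN, PSH, and URG together (0x29).
print(decode_flags(0x29))    # prints ['URG', 'PSH', 'FIN']
```

Scanners such as Nmap craft packets with exactly these bit patterns and classify ports based on the responses.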


Nmap exploits weaknesses with three scan types:

Null scan: A Null scan is a series of TCP packets that contain a sequence number of 0 and no set flags. Because the Null scan does not contain any set flags, it can sometimes penetrate firewalls and edge routers that filter incoming packets with particular flags. When such a packet is sent, two responses are possible:

No response: The port is open on the target.

RST: The port is closed on the target.

Figure 4-6 shows the result of a Null scan using the command nmap -sN. In this case, nmap received no response but was unable to determine whether that was because a firewall was blocking the port or the port was closed on the target. Therefore, it is listed as open|filtered.

Figure 4-6 Null Scan

FIN scan: This type of scan sets the FIN bit. When this packet is sent, two responses are possible:

No response: The port is open on the target.

RST/ACK: The port is closed on the target.

Example 4-1 shows sample output of a FIN scan using the command nmap -sF, with the -v included for verbose output. Again, nmap received no response but was unable to determine whether that was because a firewall was blocking the port or the port was closed on the target. Therefore, it is listed as open|filtered.

Example 4-1 FIN Scan Using nmap -sF

# nmap -sF -v 192.168.0.7
Starting nmap 3.81 at 2016-01-23 21:17 EDT
Initiating FIN Scan against 192.168.0.7 [1663 ports] at 21:17
The FIN Scan took 1.51s to scan 1663 total ports.
Host 192.168.0.7 appears to be up ... good.
Interesting ports on 192.168.0.7:
(The 1654 ports scanned but not shown below are in state: closed)
PORT     STATE         SERVICE
21/tcp   open|filtered ftp
22/tcp   open|filtered ssh
23/tcp   open|filtered telnet
79/tcp   open|filtered finger
110/tcp  open|filtered pop3
111/tcp  open|filtered rpcbind
514/tcp  open|filtered shell
886/tcp  open|filtered unknown
2049/tcp open|filtered nfs
MAC Address: 00:03:47:6D:28:D7 (Intel)
Nmap finished: 1 IP address (1 host up) scanned in 2.276 seconds
Raw packets sent: 1674 (66.9KB) | Rcvd: 1655 (76.1KB)

XMAS scan: This type of scan sets the FIN, PSH, and URG flags. When this packet is sent, two responses are possible:

No response: The port is open on the target.

RST: The port is closed on the target.

Figure 4-7 shows the result of this scan, using the command nmap -sX. In this case nmap received no response but was unable to determine whether that was because a firewall was blocking the port or the port was closed on the target. Therefore, it is listed as open|filtered.

Figure 4-7 XMAS Scan

Null, FIN, and XMAS scans all serve the same purpose: to discover open ports and ports blocked by a firewall. They differ only in the switch used. While there are many more scan types and attacks that can be launched with Nmap, these scan types are commonly used during environmental reconnaissance testing to learn what a hacker might discover so that any gaps in security can be closed before the hacker gets there. For more information on Nmap, see https://nmap.org/.

Host Scanning

Host scanning involves identifying the live hosts on a network or in a domain namespace. Nmap and other scanning tools (such as ScanLine and SuperScan) can be used for this. Sometimes called a ping scan, a host scan records responses to pings sent to every address in the network. You can also combine a host scan with a port scan by using the proper arguments to the command. During environmental reconnaissance testing, you can make use of these scanners to identify all live hosts. You may discover hosts that shouldn’t be there. To execute this scan from nmap, the command is nmap -sP 192.168.0.0-100, where 0-100 is the range of IP addresses to be scanned in the 192.168.0.0 network. Figure 4-8 shows an example of the output from this command. This command’s output lists all devices that are on. For each one, the MAC address is also listed.

Figure 4-8 Host Scan with Nmap

hping

hping (and the newer version, hping3) is a command-line-oriented TCP/IP packet assembler/analyzer that goes beyond simple ICMP echo requests. It supports TCP, UDP, ICMP, and RAW-IP protocols and also has a traceroute mode. The following is a subset of the operations possible with hping:

Firewall testing

Advanced port scanning

Network testing, using different protocols, TOS, and fragmentation

Manual path MTU discovery

Advanced traceroute, under all the supported protocols

Remote OS fingerprinting

Remote uptime guessing

TCP/IP stack auditing

What is significant about hping is that it can be used to create or assemble packets. Attackers use packet assembly tools to create packets that allow them to mount attacks. Testers can also use hping to create malicious packets to assess the response of the network defenses or to identify vulnerabilities that may exist. A common attack is a DoS attack using what is called a SYN flood. In this attack, the target is flooded with SYN packets whose handshakes are never completed. The target answers each SYN packet with a SYN-ACK and reserves memory while waiting for the expected ACK response. Because the attacker never answers, the target system eventually runs out of memory, making it essentially a dead device. This scenario is shown in Figure 4-9.

Figure 4-9 SYN Flood

Example 4-2 demonstrates how to deploy a SYN flood by executing the hping command at the terminal.

Example 4-2 Deploying a SYN Flood with hping

$ sudo hping3 -i u1 -S -p 80 -c 10 192.168.1.1
HPING 192.168.1.1 (eth0 192.168.1.1): S set, 40 headers + 0 data bytes
--- 192.168.1.1 hping statistic ---
10 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms

The command in Example 4-2 would send TCP SYN packets to 192.168.1.1. Including sudo is necessary because hping3 creates raw packets for the task. For raw sockets/packets, root privilege is necessary on Linux. The parts of the command and the meaning of each are described as follows:

-i u1 means wait for 1 microsecond between each packet

-S sets the SYN flag

-p 80 means target port 80

-c 10 means send 10 packets

Were this a true attack, you would expect to see many more packets sent; however, you can see how this tool can be used to assess the likelihood that such an attack would succeed. For more information, see https://tools.kali.org/information-gathering/hping3.

Active vs. Passive

Chapter 3, "Vulnerability Management Activities," covered active and passive scanning. The concept of active and passive enumeration is similar. Active enumeration is when you send packets of some sort to the network and then assess the responses. An example of this would be using nmap to send crafted packets that interrogate the accessibility of various ports (a port scan). Passive enumeration does not send packets of any type but captures traffic and makes educated assumptions from that traffic. An example is using a packet capture utility (sniffer) to look for malicious traffic on the network.

Responder

Link-Local Multicast Name Resolution (LLMNR) and NetBIOS Name Service (NBT-NS) are Microsoft Windows components that serve as alternate methods of host identification. Responder is a tool that can be used for a number of things, among them answering NBT-NS and LLMNR name requests. Doing this poisons the name service so that the victims communicate with the adversary-controlled system. Once the name system is compromised, Responder captures hashes and credentials that are sent to the system after the name services have been poisoned.

Figure 4-10 shows that after the target was convinced to talk to Responder, it was able to capture the hash sent for authentication, which could then be used to attempt to crack the password.

Figure 4-10 Capturing Authentication Hashes with Responder

WIRELESS ASSESSMENT TOOLS

To assess wireless networks for vulnerabilities, you need tools that can use wireless antennas and sensors to capture and examine the wireless traffic. As a security professional tasked with identifying wireless vulnerabilities, you must also be familiar with the tools used to compromise wireless networks. Let's discuss some of these tools.

Aircrack-ng

Aircrack-ng is a set of command-line tools you can use to sniff wireless networks, among other things. Installers for this tool are available for both Linux and Windows. It is important to ensure that your device's wireless chipset and driver support this tool. Aircrack-ng focuses on these areas of Wi-Fi security:

Monitoring: Packet capture and export of data to text files for further processing by third-party tools

Attacking: Replay attacks, deauthentication, fake access points, and others via packet injection

Testing: Checking Wi-Fi cards and driver capabilities (capture and injection)

Cracking: WEP and WPA PSK (WPA1 and WPA2)

As you can see, capturing wireless traffic is a small part of what this tool can do. The command for capturing is airodump-ng. Figure 4-11 shows Aircrack-ng being used to attempt to crack an encryption key. It attempted 1514 keys before locating the correct one. For more information on Aircrack-ng, see https://www.aircrack-ng.org/.

Figure 4-11 Aircrack-ng

Reaver

Reaver is both a package of tools and a command-line tool within the package, called reaver, that is used to attack Wi-Fi Protected Setup (WPS). Example 4-3 shows the reaver command and its arguments.

Example 4-3 Reaver: Wi-Fi Protected Setup Attack Tool

root@kali:~# reaver -h
Reaver v1.6.5 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner

One-way Hash

For a one-way hash to be effective, creating two different messages with the same hash value must be mathematically impossible. Given a hash value, discovering the original message from which the hash value was obtained must be mathematically impossible. A one-way hash algorithm is collision free if it provides protection against creating the same hash value from different messages. Unlike symmetric and asymmetric algorithms, hashing uses no key, and the hashing algorithm is publicly known. Hash functions are always performed in one direction; using them in reverse is not possible.

However, one-way hash functions do have limitations. If an attacker intercepts a message that contains a hash value, the attacker can alter the original message to create a second, invalid message with a new hash value. If the attacker then sends the invalid message to the intended recipient, the intended recipient has no way of knowing that he received an incorrect message. When the receiver performs a hash value calculation, the invalid message looks valid because the invalid message was appended with the attacker's new hash value, not the original message's hash value.

To prevent the preceding scenario from occurring, the sender should use a message authentication code (MAC). Encrypting the hash function with a symmetric key algorithm generates a keyed MAC. The symmetric key does not encrypt the original message. It is used only to protect the hash value. Figure 8-30 outlines the basic steps of a hash function.

Figure 8-30 Hash Function Process
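The keyed-MAC idea can be demonstrated with Python's standard hmac module, which implements HMAC, a standard keyed-hash MAC construction (a sketch of the concept rather than the literal encrypt-the-hash scheme described above; the key and messages are invented for the example):

```python
import hashlib
import hmac

key = b"shared-secret-key"           # known only to sender and receiver
message = b"Transfer $100 to account 42"

# Sender computes a keyed MAC over the message; the key protects the hash.
mac = hmac.new(key, message, hashlib.sha256).hexdigest()

# An attacker can alter the message and recompute a plain hash,
# but cannot produce a matching MAC without the key.
tampered = b"Transfer $9999 to account 666"
tampered_mac = hmac.new(key, tampered, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC and compares in constant time.
print(hmac.compare_digest(mac, hmac.new(key, message, hashlib.sha256).hexdigest()))
print(hmac.compare_digest(mac, tampered_mac))
```

The first comparison succeeds and the second fails, which is exactly the protection the plain hash in the interception scenario above lacks.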

Message Digest Algorithm

The MD2 message digest algorithm produces a 128-bit hash value. It performs 18 rounds of computations. Although MD2 is still in use today, it is much slower than MD4, MD5, and MD6. The MD4 algorithm also produces a 128-bit hash value. However, it performs only three rounds of computations. Although MD4 is faster than MD2, its use has significantly declined because attacks against it have been so successful. Like the other MD algorithms, the MD5 algorithm produces a 128-bit hash value. It performs four rounds of computations. It was originally created because of the issues with MD4, and it is

more complex than MD4. However, MD5 is not collision free. For this reason, it should not be used for SSL/TLS certificates or digital signatures. The U.S. government requires the use of SHA-2 instead of MD5. However, in commercial usage, many software vendors publish the MD5 hash value when they release software patches so customers can verify the software's integrity after download. The MD6 algorithm produces a variable hash value, performing a variable number of computations. Although it was originally introduced as a candidate for SHA-3, it was withdrawn because of early issues the algorithm had with differential attacks. MD6 has since been re-released with this issue fixed. However, that release was too late to be accepted as the NIST SHA-3 standard.

Secure Hash Algorithm

Secure Hash Algorithm (SHA) is a family of four algorithms published by NIST. SHA-0, originally referred to as simply SHA because there were no other "family members," produces a 160-bit hash value after performing 80 rounds of computations on 512-bit blocks. SHA-0 was never very popular because collisions were discovered. Like SHA-0, SHA-1 produces a 160-bit hash value after performing 80 rounds of computations on 512-bit blocks. SHA-1 corrected the flaw in SHA-0 that made it susceptible to attacks. SHA-2, the successor to SHA-1, includes SHA-224, SHA-256, SHA-384, and SHA-512, which produce hash values of the corresponding bit lengths. SHA-3, the latest version, is actually a family of hash functions, each of which provides different functional limits. The SHA-3 family is as follows:

SHA3-224: Produces a 224-bit hash value after performing 24 rounds of computations on 1152-bit blocks

SHA3-256: Produces a 256-bit hash value after performing 24 rounds of computations on 1088-bit blocks

SHA3-384: Produces a 384-bit hash value after performing 24 rounds of computations on 832-bit blocks

SHA3-512: Produces a 512-bit hash value after performing 24 rounds of computations on 576-bit blocks
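The digest lengths given above are easy to confirm with Python's standard hashlib module (assuming an interpreter build that still exposes MD5 and SHA-1 for non-security use):

```python
import hashlib

data = b"integrity check"

# Digest sizes (in bits) match the figures quoted for each algorithm.
digest_bits = {
    name: len(hashlib.new(name, data).digest()) * 8
    for name in ("md5", "sha1", "sha3_224", "sha3_256", "sha3_384", "sha3_512")
}

for name, bits in digest_bits.items():
    print(name, bits)
```

Running this prints 128 for md5, 160 for sha1, and 224 through 512 for the SHA-3 variants, matching the text.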

Keep in mind that SHA-1 and SHA-2 are still widely used today. SHA-3 was not developed because of some security flaw with the two previous standards but was instead proposed as an alternative hash function to the others.

Transport Encryption

Securing data at rest and data in transit leverages the respective strengths and weaknesses of symmetric and asymmetric algorithms. Applying the two types of algorithms is typically done as shown in Table 8-7.

Table 8-7 Applying Cryptography

Data Type         Crypto Type      Examples                       Application
Data at rest      Symmetric key    DES (retired), AES (revised),  Storing data on hard drives, thumb
                                   3DES, Blowfish                 drives, etc. (any application where
                                                                  the key can easily be shared)
Data in transit   Asymmetric key   RSA, Diffie-Hellman, ECC,      SSL/TLS key exchange, hash
                                   ElGamal, DSA

Transport encryption ensures that data is protected when it is transmitted over a network or the Internet. Transport encryption protects against network sniffing attacks. Security professionals should ensure that their enterprises are protected using transport encryption in addition to protecting data at rest. As an example, think of an enterprise that implements token and biometric authentication for all users, protected administrator accounts, transaction logging, full-disk encryption, server virtualization, port security, firewalls with ACLs, NIPS, and secured access points. None of these solutions provides any protection for data in transit. Transport encryption would be necessary in this environment to protect data. To provide this encryption, secure communication mechanisms should be used, including SSL/TLS, HTTP/HTTPS/S-HTTP, SET, SSH, and IPsec.

SSL/TLS

Secure Sockets Layer (SSL) is a transport-layer protocol that provides encryption, server and client authentication, and message integrity. SSL/TLS was discussed earlier in this chapter.
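As an illustration, Python's standard ssl module applies sensible transport-encryption defaults on the client side. This sketch only inspects the context; the commented-out connection code, with the placeholder host example.com, shows how it would typically be used:

```python
import ssl

# Build a client-side context with secure defaults: certificate
# validation and hostname checking are enabled out of the box.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # server cert must validate
print(context.check_hostname)                    # hostname must match the cert
print(context.minimum_version)                   # legacy SSL versions refused

# Wrapping a TCP socket would then look like this (not executed here):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```

Leaving these defaults in place is what protects the session against the network sniffing attacks described above.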

HTTP/HTTPS/S-HTTP

Hypertext Transfer Protocol (HTTP) is the protocol used on the Web to transmit website data between a web server and a web client. With each new address that is entered into the web browser, whether from initial user entry or by clicking a link on the page displayed, a new connection is established because HTTP is a stateless protocol.

HTTP Secure (HTTPS) is the implementation of HTTP running over the SSL/TLS protocol, which establishes a secure session using the server's digital certificate. SSL/TLS keeps the session open using a secure channel. HTTPS website addresses always begin with the https:// designation.

Although it sounds similar to HTTPS, Secure HTTP (S-HTTP) protects HTTP communication in a different manner. S-HTTP encrypts only a single communication message, not an entire session (or conversation). S-HTTP is not as commonly used as HTTPS.

SSH

Secure Shell (SSH) is an application and protocol that is used to remotely log in to another computer using a secure tunnel. After a session key is exchanged and the secure channel is established, all communication between the two computers is encrypted over the secure channel.

IPsec

Internet Protocol Security (IPsec) is a suite of protocols that establishes a secure channel between two devices. IPsec is commonly used to implement VPNs. IPsec was discussed earlier in this chapter.

CERTIFICATE MANAGEMENT

A public key infrastructure (PKI) includes systems, software, and communication protocols that distribute, manage, and control public key cryptography. A PKI publishes digital certificates. Because a PKI establishes trust within an environment, a PKI can certify that a public key is tied to an entity and verify that a public key is valid. Public keys are published through digital certificates. The X.509 standard is a framework that enables authentication between networks and over the Internet. A PKI includes timestamping and certificate revocation to ensure that certificates are managed properly. A PKI provides confidentiality, message integrity, authentication, and nonrepudiation.

The structure of a PKI includes certificate authorities, certificates, registration authorities, certificate revocation lists, cross-certification, and the Online Certificate Status Protocol (OCSP). This section discusses these PKI components as well as a few other PKI concepts.

Certificate Authority and Registration Authority

Any participant that requests a certificate must first go through the registration authority (RA), which verifies the requestor's identity and registers the requestor. After the identity is verified, the RA passes the request to the certificate authority (CA). A certificate authority (CA) is the entity that creates and signs digital certificates, maintains the certificates, and revokes them when necessary. Every entity that wants to participate in the PKI must contact the CA and request a digital certificate. The CA is the ultimate authority for the authenticity of every participant in the PKI because it signs each digital certificate. The certificate binds the identity of the participant to the public key.

There are different types of CAs. Some organizations provide a PKI as a paid service to companies that need one; Verisign is an example. Other organizations implement their own private CAs so that they can control all aspects of the PKI process. If an organization is large enough, it might need to provide a structure of CAs, with the root CA being the highest in the hierarchy. Because more than one entity is often involved in the PKI certification process, certification path validation allows the participants to check the legitimacy of the certificates in the certification path.

Certificates

A digital certificate provides an entity, usually a user, with the credentials to prove its identity and associates that identity with a public key. At minimum, a digital certificate must provide the serial number, the issuer, the subject (owner), and the public key. An X.509 certificate complies with the X.509 standard and contains the following fields:

Version
Serial Number
Algorithm ID
Issuer
Validity
Subject
Subject Public Key Info
Public Key Algorithm
Subject Public Key
Issuer Unique Identifier (optional)

Subject Unique Identifier (optional)
Extensions (optional)

Verisign first introduced the following digital certificate classes:

Class 1: Intended for use with email. These certificates are saved by web browsers.

Class 2: For organizations that must provide proof of identity.

Class 3: For servers and software signing, in which independent verification and identity and authority checking are done by the issuing CA.

Certificate Revocation List

A certificate revocation list (CRL) is a list of digital certificates that a CA has revoked. To find out whether a digital certificate has been revoked, either the browser must check the CRL or the CA must push out the CRL values to clients. This can become quite daunting when you consider that the CRL contains every certificate that has ever been revoked. One concept to keep in mind is the revocation request grace period. This period is the maximum amount of time between when the revocation request is received by the CA and when the revocation actually occurs. A shorter revocation period provides better security but often results in a higher implementation cost.

OCSP

The Online Certificate Status Protocol (OCSP) is an Internet protocol that obtains the revocation status of an X.509 digital certificate. OCSP is an alternative to the standard certificate revocation list (CRL) that is used by many PKIs. OCSP automatically validates the certificates and reports back

the status of the digital certificate by accessing the CRL on the CA. PKI Steps The steps involved in requesting a digital certificate are as follow:

1. A user requests a digital certificate, and the RA receives the request.
2. The RA requests identifying information from the requestor.
3. After the required information is received, the RA forwards the certificate request to the CA.
4. The CA creates a digital certificate for the requestor. The requestor's public key and identity information are included as part of the certificate.
5. The user receives the certificate.

After the user has a certificate, she is ready to communicate with other trusted entities. The process for communication between entities is as follows:

1. User 1 requests User 2's public key from the certificate repository.
2. The repository sends User 2's digital certificate to User 1.
3. User 1 verifies the certificate and extracts User 2's public key.
4. User 1 encrypts the session key with User 2's public key and sends the encrypted session key and User 1's certificate to User 2.
5. User 2 receives User 1's certificate and verifies the certificate with a trusted CA.
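Step 4 of this exchange can be illustrated with textbook RSA: User 1 encrypts a session key with User 2's public key, and only User 2's private key can recover it. The tiny primes below are for readability only; real systems use vetted libraries, padding, and far larger keys:

```python
# Toy textbook RSA illustrating session-key exchange. Illustrative only.
import secrets

# User 2's key pair (tiny illustrative primes)
p, q = 10007, 10009
n = p * q                           # public modulus
e = 65537                           # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

# User 1 picks a symmetric session key (here, just a number below n)
session_key = secrets.randbelow(n - 2) + 2

encrypted = pow(session_key, e, n)   # User 1: encrypt with User 2's PUBLIC key
recovered = pow(encrypted, d, n)     # User 2: decrypt with the PRIVATE key

assert recovered == session_key      # only the private-key holder recovers it
```

After this exchange, both parties share the session key and can switch to faster symmetric encryption.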

After this certificate exchange and verification process occurs, the two entities are able to communicate using encryption.

Cross-Certification

Cross-certification establishes trust relationships between CAs so that the participating CAs can rely on the other participants' digital certificates and public keys. It enables users to validate each other's certificates even when they are certified under different certification hierarchies. A CA for one organization can validate digital certificates from another organization's CA when a cross-certification trust relationship exists.

Digital Signatures

A digital signature is a hash value encrypted with the sender's private key. A digital signature provides authentication, nonrepudiation, and integrity. A blind signature is a form of digital signature in which the contents of the message are masked before it is signed.

Public key cryptography is used to create digital signatures. Users register their public keys with a certificate authority (CA), which distributes a certificate containing the user's public key and the CA's digital signature. The digital signature is computed by combining the user's public key and validity period with the certificate issuer and digital signature algorithm identifier.

The Digital Signature Standard (DSS) is a federal digital signature standard that governs the Digital Signature Algorithm (DSA). DSA generates a message digest of 160 bits. The U.S. federal government requires the use of DSA, RSA, or Elliptic Curve DSA (ECDSA) and SHA for digital signatures. DSA is slower than RSA and provides only digital signatures. RSA provides digital signatures, encryption, and secure symmetric key distribution.

When considering cryptography, keep the following facts in mind:

- Encryption provides confidentiality.
- Hashing provides integrity.
- Digital signatures provide authentication, nonrepudiation, and integrity.
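The definition above (a hash value "encrypted" with the sender's private key) can be sketched with toy textbook RSA. The small primes are illustrative assumptions; this is not a substitute for the DSS-approved algorithms:

```python
# Toy digital signature: hash the message, then apply the sender's
# private key; verification reverses the operation with the public key.
import hashlib

p, q = 10007, 10009                  # illustrative primes only
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))    # sender's private exponent

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)          # private-key operation: only the sender can do this

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest   # anyone with the public key can check

sig = sign(b"wire $100 to account 42")
print(verify(b"wire $100 to account 42", sig))   # intact message: verifies
print(verify(b"wire $900 to account 42", sig))   # tampered message: hash no longer matches
```

Because only the sender holds the private key, a valid signature proves both who signed the message (authentication, nonrepudiation) and that it was not altered (integrity).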

ACTIVE DEFENSE

The importance of defense systems in network architecture is emphasized throughout this book. In the context of cybersecurity, the term active defense has more to do with process than architecture. Active defense is achieved by aligning your incident identification and incident response processes so that an element of automation is built into your reaction to any specific issue. So what does that look like in the real world? One approach among several is called the Active Cyber Defense Cycle, illustrated in Figure 8-31.

Figure 8-31 Active Cyber Defense Cycle

While it may not be obvious from the graphic, one of the key characteristics of this approach is that there is an active response to the security issue. This departs from the classic approach of deploying passive defense mechanisms and relying on them to protect assets.

Hunt Teaming

Hunt teaming is a newer approach to security that is offensive in nature rather than defensive, which has been the common posture of security teams in the past. Hunt teams work together to detect, identify, and understand advanced and determined threat actors. A hunt team is a costly investment on the part of an organization, and it targets the attackers themselves. To use a bank analogy, when a bank robber compromises a door to rob a bank, defensive measures would say get a better door, while offensive measures (hunt teaming) would say eliminate the bank robber. These cyber guns-for-hire are another tool in the kit.

Hunt teaming also refers to a collection of techniques used by security personnel to bypass traditional security technologies to hunt down attackers who may have used similar techniques to mount attacks that have already been identified, often at other companies. These techniques help identify any systems compromised using advanced malware that bypasses traditional security technologies, such as an intrusion detection system/intrusion prevention system (IDS/IPS) or an antivirus application. As part of hunt teaming, security professionals could also obtain blacklists from sources such as DShield (https://www.dshield.org/). These blacklists would then be compared to existing DNS entries to see whether communication was occurring with blacklisted systems that are known attackers. Hunt teaming can also emulate prior attacks so that security professionals can better understand the enterprise's existing vulnerabilities and gain insight into how to remediate and prevent future incidents.
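The blacklist-comparison task described above can be sketched in a few lines; the blocklist entries and log records here are made-up placeholders, not real DShield data:

```python
# Sketch of one hunt-teaming task: comparing observed DNS lookups
# against a blocklist of known-attacker systems. Data is invented.
known_bad = {"badguy.example", "203.0.113.7", "evil-c2.test"}

dns_log = [
    ("workstation-12", "intranet.example"),
    ("workstation-31", "evil-c2.test"),
    ("server-02", "updates.example"),
]

# Flag any host that resolved a blocklisted name: a sign of possible compromise
hits = [(host, name) for host, name in dns_log if name in known_bad]
for host, name in hits:
    print(f"ALERT: {host} contacted blocklisted entry {name}")
```

In practice the blocklist would be refreshed regularly and the comparison run against full DNS query logs, not a handful of entries.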

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 8-8 lists these key topics and the page number on which each is found.

Table 8-8 Key Topics in Chapter 8

Key Topic Element | Description | Page Number
Bulleted list | Risks when placing resources in a public cloud | 177
Figure 8-2 | Network segmentation | 182
Bulleted list | Threats addressed by VLANs | 183
Figure 8-5 | Server isolation | 185
Figure 8-6 | Logical network diagram | 186
Figure 8-7 | Physical network diagram | 187
Bulleted list | Protecting a bastion host | 188
Bulleted list | Deployment options for a bastion host | 188
Bulleted lists | Advantages and disadvantages of a dual-homed firewall | 189
Bulleted lists | Advantages and disadvantages of a three-legged firewall | 190
Bulleted lists | Advantages and disadvantages of a screened host firewall | 191
Bulleted lists | Advantages and disadvantages of a screened subnet | 192
Bulleted list | Network architecture planes | 193
Bulleted lists | Advantages and disadvantages of SDN | 194
Bulleted list | VPN protocols | 196
Bulleted list | Components of IPsec | 197
Bulleted list | SSL/TLS VPNs | 199
Table 8-2 | Advantages and disadvantages of SSL/TLS | 200
Bulleted list | Improvements in TLS 1.3 | 200
Bulleted list and paragraph | Security advantages and disadvantages of virtualization | 201
Figure 8-16 | Virtualization | 202
Section | Type 1 vs. Type 2 hypervisors | 203
Bulleted list | Virtualization attacks | 203
Bulleted list | Attacks on management interfaces | 205
Figure 8-19 | Man-in-the-middle attack | 206
Bulleted list | VDI models | 207
Figure 8-22 | Container-based virtualization | 209
Bulleted list | Authentication factors | 212
Bulleted list | Objectives of SSO | 214
Bulleted lists | Advantages and disadvantages of SSO | 215
Bulleted lists | Advantages and disadvantages of Kerberos | 216
Figure 8-23 | Kerberos ticket-issuing process | 217
Bulleted list | Federation models | 219
Bulleted list | Security issues with federations | 219
Bulleted list | SPML architecture components | 220
Figure 8-25 | SPML process | 221
Bulleted lists | Advantages and disadvantages of OpenID | 222
Numbered list | SAML process | 223
Paragraph | Description of role-based access control (RBAC) | 224
Bulleted list | MAC security modes | 228
Bulleted list | Responsibilities of log management infrastructure administrators | 230
Table 8-3 | Examples of logging configuration settings | 231
Numbered list | NIST SP 800-137 steps to establish, implement, and maintain ISCM | 232
Bulleted list | Security services provided by cryptosystems | 233
Table 8-4 | Symmetric algorithm strengths and weaknesses | 234
Bulleted list | Advantages of stream-based ciphers | 235
Bulleted list | Advantages of block ciphers | 235
Table 8-5 | Symmetric algorithms key facts | 235
Table 8-6 | Asymmetric algorithm strengths and weaknesses | 236
Numbered list | Process for hybrid encryption | 237
Figure 8-30 | Hash function process | 239
Table 8-7 | Applying cryptography | 241
Bulleted list | Digital certificate classes | 244
Numbered list | PKI steps | 245
Figure 8-31 | Active Cyber Defense Cycle | 246

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

asset tagging, geotagging, geofencing, radio frequency identification (RFID), segmentation, extranet, demilitarized zone (DMZ), virtual local-area network (VLAN), jumpbox, system isolation, air gap, bastion host, dual-homed firewall, multihomed firewall, screened host firewall, screened subnet, control plane, data plane, management plane, virtual storage area network (vSAN), virtual private cloud (VPC), virtual private network (VPN), Point-to-Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP), Internet Protocol Security (IPsec), Authentication Header (AH), Encapsulating Security Payload (ESP), Internet Security Association and Key Management Protocol (ISAKMP), Internet Key Exchange (IKE), Secure Sockets Layer/Transport Layer Security (SSL/TLS), change management, Type 1 hypervisor, Type 2 hypervisor, VM escape, virtual desktop infrastructure (VDI), containerization, multifactor authentication (MFA), knowledge factor authentication, ownership factor authentication, characteristic factor authentication, single sign-on (SSO), Active Directory (AD), Secure European System for Applications in a Multivendor Environment (SESAME), Service Provisioning Markup Language (SPML), Security Assertion Markup Language (SAML), OpenID, Shibboleth, role-based access control (RBAC), attribute-based access control (ABAC), mandatory access control (MAC), cloud access security broker (CASB), honeypot, symmetric algorithms, stream-based ciphers, block ciphers, asymmetric algorithms, Secure Shell (SSH), public key infrastructure (PKI), registration authority (RA), certificate authority (CA), Online Certificate Status Protocol (OCSP), certificate revocation list (CRL), active defense, hunt teaming

REVIEW QUESTIONS

1. _____________________ is the process of placing physical identification numbers of some sort on all assets.

2. List at least two examples of segmentation.

3. Match the following terms with their definitions.

Terms | Definitions
Jump box | Device with no network connections; all access to the system must be done manually by adding and removing updates and patches with a flash drive or other external device
System isolation | Device exposed directly to the Internet or to any untrusted network
Air gap | Systems isolated from other systems through the control of communications with the device
Bastion host | Firewall with two network interfaces: one pointing to the internal network and another connected to an untrusted network
Dual-homed firewall | A server that is used to access devices that have been placed in a secure network zone such as a DMZ

4. In a(n) _____________________, two firewalls are used, and traffic must be inspected at both firewalls before it can enter the internal network.

5. List at least one of the network architecture planes.

6. Match the following terms with their definitions.

Terms | Definitions
vSAN | Allows external devices to access an internal network by creating a tunnel over the Internet
VPC | Cloud model in which a public cloud provider isolates a specific portion of its public cloud infrastructure to be provisioned for private use
VLAN | Logical segmentation on a switch at Layers 2 and 3
VPN | Software-defined storage method that allows pooling of storage capabilities and instant and automatic provisioning of virtual machine storage

7. ____________________________ handles the creation of a security association for the session and the exchange of keys in IPsec.

8. List at least two advantages of SSL/TLS.

9. Match the following terms with their definitions.

Terms | Definitions
Type 1 hypervisor | Virtualization method that does not use a hypervisor
Containerization | Hypervisor installed over an operating system
VDI | Hypervisor installed on bare metal
Type 2 hypervisor | Hosting desktop operating systems within a virtual environment in a centralized server

10. ______________________ are authentication factors that rely on something you have in your possession.

Chapter 9

Software Assurance Best Practices

This chapter covers the following topics related to Objective 2.2 (Explain software assurance best practices) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

- Platforms: Reviews software platforms, including mobile, web application, client/server, embedded, System-on-Chip (SoC), and firmware.
- Software development life cycle (SDLC) integration: Explains the formal process specified by the SDLC.
- DevSecOps: Discusses the DevSecOps framework.
- Software assessment methods: Covers user acceptance testing, stress test application, security regression testing, and code review.
- Secure coding best practices: Examines input validation, output encoding, session management, authentication, data protection, and parameterized queries.
- Static analysis tools: Covers tools and methods for performing static analysis.
- Dynamic analysis tools: Discusses tools used to test the software as it is running.
- Formal methods for verification of critical software: Discusses more structured methods of analysis.
- Service-oriented architecture: Reviews Security Assertion Markup Language (SAML), Simple Object Access Protocol (SOAP), and Representational State Transfer (REST) and introduces microservices.

Many organizations create software either for customers or for their own internal use. When software is developed, the earlier in the process security is considered, the less it will cost to secure the software. It is best for software to be secure by design. Secure coding standards are practices that, if followed throughout the software development life cycle, help reduce the attack surface of an application. Standards are developed through a broad-based community effort for common programming languages. This chapter looks at application security, the type of testing to conduct, and secure coding best practices from several well-known organizations that publish guidance in this area.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these nine self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 9-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 9-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question
Platforms | 1
Software Development Life Cycle (SDLC) Integration | 2
DevSecOps | 3
Software Assessment Methods | 4
Secure Coding Best Practices | 5
Static Analysis Tools | 6
Dynamic Analysis Tools | 7
Formal Methods for Verification of Critical Software | 8
Service-Oriented Architecture | 9

1. Which of the following is software designed to exert a measure of control over mobile devices?
   1. IoT
   2. BYOD
   3. MDM
   4. COPE

2. Which of the following is the first step in the SDLC?
   1. Design
   2. Plan/initiate project
   3. Release/maintain
   4. Develop

3. Which of the following is not one of the three main actors in traditional DevOps?
   1. Operations
   2. Security
   3. QA
   4. Production

4. Which of the following is done to verify functionality after making a change to the software?
   1. User acceptance testing
   2. Regression testing
   3. Fuzz testing
   4. Code review

5. Which of the following is done to prevent the inclusion of dangerous character types that might be inserted by malicious individuals?
   1. Input validation
   2. Blacklisting
   3. Output encoding
   4. Fuzzing

6. Which form of code review looks at runtime information while the software is in a static state?
   1. Lexical analysis
   2. Data flow analysis
   3. Control flow graph
   4. Taint analysis

7. Which of the following uses external agents to run scripted transactions against an application?
   1. RUM
   2. Synthetic transaction monitoring
   3. Fuzzing
   4. SCCP

8. Which of the following levels of formal methods would be the most appropriate in high-integrity systems involving safety or security?
   1. Level 0
   2. Level 1
   3. Level 2
   4. Level 3

9. Which of the following is a client/server model for interacting with content on remote systems, typically using HTTP?
   1. SOAP
   2. SAML
   3. OpenID
   4. REST

FOUNDATION TOPICS

PLATFORMS

All software must run on an underlying platform that supplies the software with the resources required to perform and the connections to the underlying hardware with which it must interact. This section provides an overview of some common platforms for which software is written.

Mobile

You learned a lot about the issues with mobile as a platform in Chapter 5, “Threats and Vulnerabilities Associated with Specialized Technology.” Let’s look at some additional issues with this platform.

Containerization

One of the issues with allowing the use of personal devices in a bring your own device (BYOD) initiative is the possible mixing of sensitive corporate data with the personal data of the user. Containerization is a newer feature of most mobile device management (MDM) software that creates an encrypted “container” to hold and quarantine corporate data separately from the user’s data. This allows MDM policies to be applied only to that container and not to the rest of the device.

Configuration Profiles and Payloads

MDM configuration profiles are used to control the use of devices; when these profiles are applied to devices, they change settings such as passcode requirements, Wi-Fi passwords, virtual private network (VPN) configurations, and more. Profiles can also restrict items that are available to the user, such as the camera. The individual settings, called payloads, may be organized into categories in some implementations. For example, there may be a payload category for basic settings, such as a required passcode, and other payload categories for e-mail settings, Internet settings, and so on.

Personally Owned, Corporate Enabled

When a personally owned, corporate-enabled (POCE) policy is in use, the organization’s users purchase their own devices but allow the devices to be managed by corporate tools such as MDM software.

Corporate-Owned, Personally Enabled

Corporate-owned, personally enabled (COPE) is a strategy in which an organization purchases mobile devices and users manage those devices. Organizations can often monitor and control the users’ activity to a larger degree than with personally owned devices. Besides using these devices for business purposes, employees can use the devices for personal activities, such as accessing social media sites, using e-mail, and making calls. COPE also gives the company more power in terms of policing and protecting devices. Organizations should create explicit policies that define the allowed and disallowed activities on COPE devices.

Application Wrapping

Another technique to protect mobile devices and the data they contain is application wrapping. Application wrappers (implemented as policies) enable administrators to set policies that allow employees with mobile devices to safely download an app, typically from an internal store. Policy elements can include whether user authentication is required for a specific app and whether data associated with the app can be stored on the device.

Application, Content, and Data Management

In addition to the previously discussed containerization method of securing data and applications, MDM solutions can use other methods as well, such as conditional access, which defines policies that control access to corporate data based on conditions of the connection, including user, location, device state, application sensitivity, and real-time risk. These policies can be granular enough to control certain actions within an application, such as preventing cut and paste. Finally, more secure control of sharing is possible, allowing for the control and tracking of what happens after a file has been accessed, with the ability to prevent copying, printing, and other actions that help control sharing with unauthorized users.

Remote Wiping

Remote wipes are instructions sent remotely to a mobile device that erase all the data; they are typically used when a device is lost or stolen. In the case of the iPhone, this feature is closely connected to the locater application Find My iPhone. Android phones do not come with an official remote wipe. You can, however, install an Android app called Lost Android that will do this. Once the app is installed, it works in the same way as the iPhone remote wipe. Android Device Manager provides almost identical functionality to the iPhone. Remote wipe is a function that comes with MDM software, and consent to remote wipe should be required of any user who uses a mobile device in either a BYOD or COPE environment.

SCEP

Simple Certificate Enrollment Protocol (SCEP) provisions certificates to network devices, including mobile devices. Because SCEP includes no provision for authenticating the identity of the requester, two different authorization mechanisms are used for the initial enrollment:

- Manual: The requester is required to wait after submission for the certificate authority (CA) operator or certificate officer to approve the request.
- Preshared secret: The SCEP server creates a “challenge password” that must be somehow delivered out-of-band to the requester and then included with the submission back to the server.
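The preshared-secret idea above can be sketched as a one-time challenge check. This models only the concept (the function names are invented); it is not the actual SCEP wire protocol:

```python
# Sketch: server issues a one-time challenge password out-of-band,
# then checks it when the certificate request is submitted.
import secrets
import hmac

issued_challenges = {}

def issue_challenge(requester_id):
    challenge = secrets.token_urlsafe(16)     # delivered out-of-band
    issued_challenges[requester_id] = challenge
    return challenge

def authorize_request(requester_id, submitted):
    expected = issued_challenges.pop(requester_id, None)   # one-time use
    return expected is not None and hmac.compare_digest(expected, submitted)

pw = issue_challenge("device-42")
print(authorize_request("device-42", pw))    # correct password authorizes once
print(authorize_request("device-42", pw))    # replay is rejected
```

Note that, as the next paragraph explains, this challenge only authorizes the submission; it does not authenticate the device itself.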

Security issues with SCEP include the fact that when the preshared secret method is used, the challenge password is used for authorization to submit a certificate request; it is not used for authentication of the device.

NIST SP 800-163 Rev 1

NIST SP 800-163 Rev 1, Vetting the Security of Mobile Applications, was written to help organizations do the following:

- Understand the process for vetting the security of mobile applications
- Plan for the implementation of an app vetting process
- Develop app security requirements
- Understand the types of app vulnerabilities and the testing methods used to detect those vulnerabilities
- Determine whether an app is acceptable for deployment on the organization’s mobile devices

To provide software assurance for apps, organizations should develop security requirements that specify, for example, how data used by an app should be secured, the environment in which an app will be deployed, and the acceptable level of risk for an app. To help ensure that an app conforms to such requirements, a process for evaluating the security of apps should be performed. The NIST SP 800-163 Rev 1 process is as follows: 1. Application vetting process: A sequence of activities performed by an organization to determine whether a mobile app conforms to the organization’s app security requirements. This process is shown in Figure 9-1.

FIGURE 9-1 App Vetting Process

2. Application intake process: Begins when an app is received for analysis. This process is typically performed manually by an organization administrator or automatically by an app vetting system. The app intake process has two primary inputs: the app under consideration (required) and additional testing artifacts, such as reports from previous app vetting results (optional).
3. Application testing process: Begins after an app has been registered and preprocessed and is forwarded to one or more test tools. A test tool is a software tool or service that tests an app for the presence of software vulnerabilities.
4. Application approval/rejection process: Begins after a vulnerability and risk report is generated by a test tool and made available to one or more security analysts. A security analyst (or analysts) inspects vulnerability reports and risk assessments from one or more test tools to ensure that an app meets all general app security requirements.
5. Results submission process: Begins after the final app approval/rejection report is finalized by the authorizing official and artifacts are prepared for submission to the requesting source.
6. App re-vetting process: App updates, and the new threats they are designed to meet, can force a security analyst to treat an updated app as a wholly new piece of software. Depending on the risk tolerance of an organization, this can make the re-vetting of mobile apps critical for certain apps.
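The approval/rejection step of this process can be sketched as a simple policy rule over tool findings. The field names and risk threshold below are invented placeholders for illustration, not part of NIST SP 800-163:

```python
# Sketch of the approval/rejection step: reject an app when any test
# tool reports a finding whose risk score meets or exceeds the
# organization's threshold. Data shapes are illustrative assumptions.
RISK_THRESHOLD = 7.0   # assumed organizational policy, e.g. a CVSS-like score

def vet_app(tool_reports):
    findings = [f for report in tool_reports for f in report["findings"]]
    worst = max((f["risk"] for f in findings), default=0.0)
    return "rejected" if worst >= RISK_THRESHOLD else "approved"

reports = [
    {"tool": "static-scanner",  "findings": [{"id": "V-1", "risk": 4.2}]},
    {"tool": "dynamic-scanner", "findings": [{"id": "V-2", "risk": 9.1}]},
]
print(vet_app(reports))   # rejected: V-2 exceeds the threshold
```

In a real vetting system this rule would be one input to a human analyst's decision, not an automatic verdict.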

Web Application

Despite all efforts to design a secure web architecture, attacks against web-based systems still occur and still succeed. This section examines some of the more common types of attacks, including maintenance hooks, time-of-check/time-of-use attacks, and web-based attacks.

Maintenance Hooks

From the perspective of software development, a maintenance hook is a set of instructions built into the code that allows someone who knows about this so-called backdoor to use the instructions to connect to, view, and edit the code without using the normal access controls. In many cases maintenance hooks are placed there to make it easier for the vendor to provide support to the customer. In other cases they are placed there to assist in testing and tracking the activities of the product and are never removed later.

Note

The term maintenance account is often confused with maintenance hook. A maintenance account is a backdoor account created by programmers to give someone full permissions in a particular application or operating system. A maintenance account can usually be deleted or disabled easily, but a true maintenance hook is often a hidden part of the programming and much harder to disable. Both can cause security issues because many attackers try documented maintenance hooks and maintenance accounts first. You would be surprised at the number of computers attacked on a daily basis because these two security issues are left unaddressed.

Regardless of how the maintenance hooks got into the code, they can present a major security issue if they become known to hackers who can use them to access the system. Countermeasures on the part of the customer to mitigate the danger are as follows:

- Use a host-based IDS to record any attempt to access the system using one of these hooks.
- Encrypt all sensitive information contained in the system.
- Implement auditing to supplement the IDS.
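One simple aid to these countermeasures is scanning source code for comment markers that often accompany leftover hooks. The marker list below is an illustrative assumption; a real code review still requires human judgment:

```python
# Scan source lines for markers that often accompany maintenance hooks.
import re

SUSPICIOUS = re.compile(r"backdoor|maintenance hook|bypass.?auth|debug.?only",
                        re.IGNORECASE)

def flag_lines(source):
    """Return (line_number, line) pairs that match a suspicious marker."""
    return [(i, line) for i, line in enumerate(source.splitlines(), 1)
            if SUSPICIOUS.search(line)]

sample = """\
def login(user, password):
    if password == "letmein":  # maintenance hook -- remove before release!
        return True
    return check_credentials(user, password)
"""
for lineno, line in flag_lines(sample):
    print(f"line {lineno}: {line.strip()}")
```

A scan like this only catches hooks that are labeled; hidden hooks with no telltale markers are exactly why thorough code review matters.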

The best solution is for the vendor to remove all maintenance hooks before the product goes into production. Code reviews should be performed to identify and remove these hooks.

Time-of-Check/Time-of-Use Attacks

Time-of-check/time-of-use attacks attempt to take advantage of the sequence of events that occurs as the system completes common tasks. These attacks rely on knowledge of the dependencies present when a specific series of events occurs in multiprocessing systems. By attempting to insert himself between events and introduce changes, the hacker can gain control of the result. A term often used as a synonym for a time-of-check/time-of-use attack is race condition, which is actually a different attack: in a race condition attack, the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome.

Countermeasures to these attacks are to make critical sets of instructions atomic, meaning that they either execute in order and in their entirety or the changes they make are rolled back or prevented. It is also best for the system to lock access to certain items it uses or touches when carrying out these sets of instructions.

Cross-Site Request Forgery (CSRF)

Chapter 7, “Implementing Controls to Mitigate Attacks and Software Vulnerabilities,” described cross-site scripting (XSS) attacks. A similar attack is cross-site request forgery (CSRF), which causes an end user to execute unwanted actions on a web application in which she is currently authenticated. Unlike with XSS, in CSRF the attacker exploits the website’s trust of the browser rather than the other way around. The website thinks that a request came from the user’s browser and was made by the user; however, the request was planted in the user’s browser, usually when the user follows a URL that already contains the code to be injected. This type of attack is shown in Figure 9-2.

Figure 9-2 CSRF

The following measures help prevent CSRF vulnerabilities in web applications:

- Using techniques such as URLEncode and HTMLEncode, encode all output based on input parameters for special characters to prevent malicious scripts from executing.
- Filter input parameters for special characters (those that enable malicious scripts to execute).
- Filter output based on input parameters for special characters.
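The encoding measure above can be illustrated with Python's standard library: once special characters are encoded, injected markup renders as inert text instead of executing in the victim's browser.

```python
# Encoding output derived from user input before placing it in a page.
import html

user_supplied = '<img src=x onerror="steal(document.cookie)">'

# Encoded before being written into the page, the payload cannot execute
safe_output = html.escape(user_supplied)
print(safe_output)
# &lt;img src=x onerror=&quot;steal(document.cookie)&quot;&gt;
```

Output encoding is most directly an XSS defense; defending against CSRF in depth usually also involves tying each state-changing request to the authenticated session.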

Click-Jacking A hacker using a click-jacking attack crafts a transparent page or frame over a legitimate-looking page that entices the user to click something. When the user does click, he is really clicking on a different URL. In many cases, the site or application may entice the user to enter credentials that could be used later by the attacker. This type of attack is shown in Figure 9-3.

Figure 9-3 Click-jacking

Client/Server

When a web application is developed, one of the decisions the developers need to make is which information will be processed on the server and which information will be processed on the browser of the client. Figure 9-4 shows client-side processing, and Figure 9-5 shows server-side processing. Many web designers like processing to occur on the client side, which taxes the web server less and enables it to serve more users. Others shudder at the idea of sending to the client all the processing code, and possibly information that could be useful in attacking the server. Modern web development should be concerned with finding the right balance between server-side and client-side implementation; in some cases performance might outweigh security, or vice versa.

Figure 9-4 Client-Side Processing

Figure 9-5 Server-Side Processing

Embedded

An embedded system is a computer system with a dedicated function within a larger system, often with real-time computing constraints. It is embedded as part of the device, often including hardware and mechanical parts. Embedded systems control many devices in common use today, including systems embedded in cars, HVAC systems, security alarms, and even lighting systems. Machine-to-machine (M2M) communication, the Internet of Things (IoT), and remotely controlled industrial systems have increased the number of connected devices and simultaneously made these devices targets. Because embedded systems are usually placed within another device without input from a security professional, security is not even built into the device. So while allowing the device to communicate over the Internet with a diagnostic system provides a great service to the consumer, oftentimes the

manufacturer has not considered that a hacker can then reverse communication and take over the device with the embedded system. As of this writing, reports have surfaced of individuals being able to take control of vehicles using their embedded systems. Manufacturers have released patches that address such issues, but not all vehicle owners have applied or even know about the patches. As M2M and IoT increase in popularity, security professionals can expect to see a rise in incidents like this. A security professional is expected to understand the vulnerabilities these systems present and how to put controls in place to reduce an organization’s risk. Hardware/Embedded Device Analysis Hardware/embedded device analysis involves using the tools and firmware provided with devices to determine the actions that were performed on and by the device. The techniques used to analyze the hardware/embedded device vary based on the device. In most cases, the device vendor can provide advice on the best technique to use depending on what information you need. Log analysis, operating system analysis, and memory inspections are some of the general techniques used. Hardware/embedded device analysis is used when mobile devices are analyzed. For performing this type of analysis, NIST makes the following recommendations:

- Any analysis should not change the data contained on the device or media.
- Only competent investigators should access the original data, and they must explain all actions they took.
- Audit trails or other records must be created and preserved during all steps of the investigation.
- The lead investigator is responsible for ensuring that all these procedures are followed.
- All activities regarding digital evidence, including its seizure, access to it, its storage, and its transfer, must be documented, preserved, and available for review.

In Chapter 18, "Utilizing Basic Digital Forensics Techniques," you will learn more about forensics.

System-on-Chip (SoC)

A System-on-Chip (SoC) is an integrated circuit that includes all components of a computer or another electronic system. SoCs can be built around a microcontroller or a microprocessor (the type found in mobile phones). Specialized SoCs are also designed for specific applications. Secure SoCs provide the key functionalities described in the following sections.

Secure Booting

Secure booting is a series of authentication processes performed on the hardware and software used in the boot chain. Secure booting starts from a trusted entity (also called the anchor point). The chip hardware booting sequence and BootROM are the trusted entities; because they are fabricated in silicon, it is next to impossible to change the trusted entity and still have a functional SoC. Authenticating each successive stage creates a chain of trust, as depicted in Figure 9-6.
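The chain-of-trust idea can be sketched as follows. This is a simplified model, not a real SoC boot flow: trust is anchored in digests assumed to be fused into immutable BootROM, and hash comparison stands in for the signature verification a production SoC would perform.

```python
# Sketch of a secure-boot chain of trust: each boot stage is authenticated
# against values anchored in the immutable BootROM before it is allowed
# to run, and boot halts at the first mismatch.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_boot_chain(anchor_digests: list, stages: list) -> bool:
    """Authenticate each boot stage in order against the anchored digests."""
    if len(anchor_digests) != len(stages):
        return False
    for expected, image in zip(anchor_digests, stages):
        if sha256(image) != expected:
            return False  # chain of trust broken: refuse to boot this stage
    return True

if __name__ == "__main__":
    bootloader, kernel = b"stage1-bootloader", b"stage2-kernel"
    anchors = [sha256(bootloader), sha256(kernel)]  # fixed at manufacture time
    print(verify_boot_chain(anchors, [bootloader, kernel]))        # True
    print(verify_boot_chain(anchors, [bootloader, b"tampered"]))   # False
```

The key property is that the anchor cannot be modified by software, so a tampered stage can never vouch for itself.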

Figure 9-6 Secure Boot

Central Security Breach Response

The security breach response unit monitors security intrusions. When intrusions are reported by hardware detectors (such as voltage, frequency, and temperature monitors), the response unit moves the SoC to the nonsecure state, which carries certain restrictions that differentiate it from the secure state. Any further security breach reported to the response unit takes the SoC to the fail state (that is, a nonfunctional state). The SoC remains in the fail state until a power-on reset is issued. See Figure 9-7.

Figure 9-7 Central Security Breach Response

Firmware

Firmware is software that is stored on an erasable programmable read-only memory (EPROM) or electrically erasable PROM (EEPROM) chip within a device. While updates to firmware may become necessary, they are infrequent. Firmware can exist as the basic input/output system (BIOS) on a computer or device. Hardware devices, such as routers and printers, require some processing power to complete their tasks; this software is also contained in firmware chips located within the devices. As with computers, this firmware is often installed on EEPROM to allow it to be updated. Again, security professionals should ensure that updates are obtained only from the device vendor and that the updates have not been changed in any manner. Firmware updates are among the more neglected but important tasks that technicians perform. Many subscribe to the principle "if it ain't broke, don't fix it." The problem with this approach is that firmware updates are often not designed to add functionality or fix something that doesn't work exactly right; rather, in many cases, they address security issues. Computers contain a lot of firmware, all of which is potentially vulnerable to hacking: everything from USB keyboards and webcams to graphics and sound cards. Even computer batteries have firmware. A simple Google search for "firmware vulnerabilities" turns up page after page of results detailing vulnerabilities too numerous to mention. It is not important to understand every firmware vulnerability, but it is important to realize that firmware attacks are a new frontier, and the only way to protect yourself is to keep up with the updates.
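One practical way to confirm that an update has not been changed in transit is to compare the downloaded image against the checksum the vendor publishes. A minimal sketch, with illustrative names:

```python
# Sketch: verifying a firmware image against a vendor-published SHA-256
# checksum before applying an update. A mismatch means the image was
# corrupted or tampered with and must not be flashed.
import hashlib
import hmac

def firmware_update_ok(image: bytes, vendor_sha256_hex: str) -> bool:
    actual = hashlib.sha256(image).hexdigest()
    # compare_digest avoids leaking the match position through timing
    return hmac.compare_digest(actual, vendor_sha256_hex.lower())

if __name__ == "__main__":
    image = b"router-firmware-v2.1"
    published = hashlib.sha256(image).hexdigest()  # as listed by the vendor
    print(firmware_update_ok(image, published))             # True
    print(firmware_update_ok(image + b"\x00", published))   # False
```

Vendors increasingly sign firmware as well; checksum comparison protects against corruption and casual tampering, while signatures also authenticate the publisher.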

SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC) INTEGRATION

The goal of the software development life cycle (SDLC) is to provide a predictable framework of procedures designed to identify all requirements with regard to functionality, cost, reliability, and delivery schedule and to ensure that each is met in the final solution. This section breaks down the steps in the SDLC, listed next, and describes how each step contributes to this ultimate goal. Keep in mind that the steps in the SDLC can vary based on the provider; this is but one popular example.

Step 1. Plan/initiate project
Step 2. Gather requirements
Step 3. Design
Step 4. Develop
Step 5. Test/validate
Step 6. Release/maintain
Step 7. Certify/accredit
Step 8. Perform change management and configuration management/replacement

Step 1: Plan/Initiate Project

In the plan/initiate step of the software development life cycle, the organization decides to initiate a new software development project and formally plans the project. Security professionals should be involved in this phase to determine whether information involved in the project requires protection and whether the application needs to be safeguarded separately from the data it processes. Security professionals need to analyze the expected results of the new application to determine whether the resultant data has a higher value to the organization and, therefore, requires higher protection. Any information handled by the application needs a value assigned by its owner, and any special regulatory or compliance requirements need to be documented. For example, healthcare information is regulated by several federal laws and must be protected. The classification of all input and output data of the application needs to be documented, and appropriate application controls should be documented to ensure that the input and output data are protected. Data transmission must also be analyzed to determine the types of networks used, and all data sources must be analyzed as well. Finally, the effect of the application on organizational operations and culture needs to be analyzed.

Step 2: Gather Requirements

In the gather requirements step of the software development life cycle, both the functionality and the security requirements of the solution are identified. These requirements could be derived from a variety of sources, such as evaluating competitor products for a commercial product or surveying the needs of users for an internal solution. In some cases, these requirements could come from a direct request from a current customer. From a security perspective, an organization must identify potential vulnerabilities and threats. When this assessment is performed, the intended purpose of the software and its expected environment must be considered. Moreover, the sensitivity of the data that will be generated or handled by the solution must be assessed. Assigning a privacy impact rating to the data can help guide measures intended to protect the data from exposure.

Step 3: Design

In the design step of the software development life cycle, an organization develops a detailed description of how the software will satisfy all functional and security goals. This involves mapping the internal behavior and operations of the software to specific requirements to identify any requirements that have not been met prior to implementation and testing. During this process, the state of the application is determined for every phase of its activities. The state of the application refers to its functional and security posture during each operation it performs. Therefore, all possible operations must be identified to ensure that the software never enters an insecure state or acts in an unpredictable way.

Identifying the attack surface is also a part of this analysis. The attack surface describes what is available to be leveraged by an attacker. The amount of attack surface might change at various states of the application, but at no time should the attack surface violate the security requirements identified in the gather requirements stage.

Step 4: Develop

The develop step is where the code or instructions that make the software work are written. The emphasis of this phase is strict adherence to secure coding practices. Some models that can help promote secure coding are covered later in this chapter, in the section "Application Security Frameworks." Many security issues with software are created through insecure coding practices, such as lack of input validation or data type checks. Security professionals must identify these issues in a code review that attempts to consider all possible attack scenarios and their impacts on the code. Failing to identify these issues can lead to attacks such as buffer overflows and injection, and to other error conditions.

Step 5: Test/Validate

In the test/validate step, several types of testing should occur, including testing that identifies both functional errors and security issues. The auditing method that assesses the extent of the system testing and identifies specific program logic that has not been tested is called the test data method. This method tests not only expected or valid input but also invalid and unexpected values to assess the behavior of the software in both instances. An active attempt should be made to attack the software, including attempts at buffer overflows and denial-of-service (DoS) attacks. Some types of testing performed at this time are:

- Verification testing: Determines whether the original design specifications have been met

- Validation testing: Takes a higher-level view and determines whether the original purpose of the software has been achieved
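The test data method described above, exercising a routine with valid, invalid, and unexpected input, can be sketched as follows. The function under test and its rules are illustrative.

```python
# Sketch of the test data method: feed a routine expected, invalid, and
# unexpected values and record how it behaves in each case.

def parse_age(value: str) -> int:
    """Parse a user-supplied age, rejecting non-numeric or out-of-range input."""
    if not value.isdigit():
        raise ValueError("age must be numeric")
    age = int(value)
    if not 0 < age < 150:
        raise ValueError("age out of range")
    return age

def run_test_cases() -> dict:
    results = {}
    # one valid value plus several invalid/unexpected ones
    for case in ["42", "-5", "abc", "999", ""]:
        try:
            results[case] = parse_age(case)
        except ValueError as exc:
            results[case] = f"rejected: {exc}"
    return results

if __name__ == "__main__":
    for case, outcome in run_test_cases().items():
        print(repr(case), "->", outcome)
```

The point is that the invalid cases are asserted just as deliberately as the valid one: a routine that silently accepts "999" has a defect even though no valid input ever triggers it.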

Step 6: Release/Maintain

The release/maintain step includes the implementation of the software into the live environment and the continued monitoring of its operation. At this point, as the software begins to interface with other elements of the network, finding additional functional and security problems is not unusual. In many cases, vulnerabilities are discovered in the live environment for which no current fix or patch exists. Such a vulnerability is referred to as a zero-day vulnerability. Ideally, the supporting development staff should discover such vulnerabilities before those looking to exploit them do.

Step 7: Certify/Accredit

Certification is the process of evaluating software for its security effectiveness with regard to the customer's needs. Ratings can certainly be an input to this evaluation but are not the only consideration. Accreditation is the formal acceptance of the adequacy of a system's overall security by management. Provisional accreditation is given for a specific amount of time and lists the required changes to the application, system, or accreditation documentation. Full accreditation grants accreditation without any required changes. Provisional accreditation becomes full accreditation once all the changes are completed, analyzed, and approved by the certifying body.

Step 8: Change Management and Configuration Management/Replacement

After a solution is deployed in the live environment, additional changes will inevitably need to be made to the software due to security issues. In some cases, the software might be altered to enhance or increase its functionality. In any case, changes must be handled through a formal change and configuration management process. The purpose of this step is to ensure that all changes to the configuration of, and to the source code of, the software are approved by the proper personnel and are implemented in a safe and logical manner. This process should always ensure continued functionality in the live environment, and changes should be documented fully, including all changes to hardware and software. In some cases, it may be necessary to completely replace applications or systems. While some failures may be fixed with enhancements or changes, a failure may occur that can be solved only by completely replacing the application.

DEVSECOPS

DevSecOps is a development concept that grew out of the DevOps approach to software development. Let's first review DevOps.

DevOps

Traditionally, the three main actors in the software development process, development (Dev), quality assurance (QA), and operations (Ops), performed their functions separately, operating in "silos." Work would go from Dev to QA to Ops in a linear fashion, as shown in Figure 9-8. This often led to delays, finger-pointing, and multiple iterations through the linear cycle due to an overall lack of cooperation between the units.

Figure 9-8 Traditional Development

DevOps aims for shorter development cycles, increased deployment frequency, and more dependable releases, in close alignment with business objectives. It encourages the three units to work together through all phases of the development process. Figure 9-9 shows a Venn diagram that represents this idea.

Figure 9-9 DevOps

While DevOps was created to develop a better working relationship between development, QA, and operations, encouraging a sense of shared responsibility for successful functionality, DevSecOps simply endeavors to bring the security group into the tent as well and create a shared sense of responsibility in all three groups with regard to security. As depicted in Figure 9-10, the entire DevSecOps process is wrapped in security, implying that security must be addressed at every development step.

Figure 9-10 DevSecOps

SOFTWARE ASSESSMENT METHODS

During the testing phase of the SDLC, various assessment methods can be used. Among them are user acceptance testing, stress testing applications, security regression testing, and code review. The following sections dig into how these assessment methods operate.

User Acceptance Testing

While it is important to make web applications secure, in some cases security features make an application unusable from the user perspective. User acceptance testing (UAT) is designed to ensure that this does not occur. Keep the following guidelines in mind when designing user acceptance testing:

- Perform the testing in an environment that mirrors the live environment.
- Identify real-world use cases for execution.
- Select UAT staff from various internal departments.

Stress Test Application

While the goal of many types of testing is locating security issues, the goal of stress testing is to determine the workload an application can withstand. These tests should be performed methodically and should always have defined objectives before testing begins. You will find many models for stress testing, but one suggested order of activities is as follows:

Step 1. Identify test objectives in terms of the desired outcomes of the testing activity.
Step 2. Identify the key scenarios, that is, the cases that need to be stress tested (for example, test login, test searching, test checkout).
Step 3. Identify the workload that you want to apply (for example, simulate 300 users).
Step 4. Identify the metrics you want to collect and what form they will take (for example, time to complete login, time to complete search).
Step 5. Create test cases. Define the steps for running a single test, as well as the expected results (for example, Step 1: Select a product; Step 2: Add to cart; Step 3: Check out).
Step 6. Simulate load by using test tools (for example, attempt 300 sessions).
Step 7. Analyze the results.

Security Regression Testing

Regression testing is done to verify functionality after making a change to the software. Security regression testing is a subset of regression testing that validates that changes have not reduced the security of the application or opened new weaknesses. This testing should be performed by a different group than the one that implemented the change. It can occur in any part of the development process and includes the following types:

- Unit regression: Tests the code as a single unit. Interactions and dependencies are not tested.
- Partial regression: New code is made to interact with other parts of older, existing code.
- Complete regression: The final step in regression testing, which performs testing on all units.
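A security regression check can be sketched as a set of assertions about security properties that passed before a change and must still pass after it. The header names below are illustrative examples of such properties.

```python
# Sketch of a security regression check: after a code change, re-verify
# security properties that held before the change, such as the presence
# of required security response headers.

REQUIRED_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
}

def check_security_regressions(response_headers: dict) -> list:
    """Return the names of required security headers that are missing or wrong."""
    failures = []
    for header, expected in REQUIRED_HEADERS.items():
        if response_headers.get(header) != expected:
            failures.append(header)
    return failures

if __name__ == "__main__":
    before = {"X-Content-Type-Options": "nosniff", "X-Frame-Options": "DENY"}
    after = {"X-Content-Type-Options": "nosniff"}  # header lost in a refactor
    print(check_security_regressions(before))  # []
    print(check_security_regressions(after))   # ['X-Frame-Options']
```

Run automatically in a build pipeline, a check like this catches a security property that silently disappeared, which is exactly the failure mode security regression testing exists to find.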

Code Review

Code review is the systematic investigation of the code for security and functional problems. It can take many forms, from simple peer review to formal code review. There are two main types of code review:

- Formal review: An extremely thorough, line-by-line inspection, usually performed by multiple participants in multiple phases. This is the most time-consuming type of code review but the most effective at finding defects.
- Lightweight review: A much more cursory review than a formal review, usually done as a normal part of the development process. It can happen in several forms:
  - Pair programming: Two coders work side by side, checking one another's work as they go.
  - E-mail review: Code is e-mailed to colleagues for them to review when time permits.
  - Over the shoulder: Coworkers review the code while the author explains his or her reasoning.
  - Tool-assisted: Automated testing tools perform the review; this is perhaps the most efficient method.

Security Testing

Security testing approaches are commonly distinguished by how much knowledge of the application the testing team is given:

- Black-box testing, or zero-knowledge testing: The team is provided with no knowledge of the organization's application and can use any means at its disposal to obtain information about it. This is also referred to as closed testing.
- White-box testing: The team goes into the process with a deep understanding of the application or system. Using this knowledge, the team builds test cases to exercise each path, input field, and processing routine.
- Gray-box testing: The team is provided more information than in black-box testing but not as much as in white-box testing. Gray-box testing has the advantage of being nonintrusive while maintaining the boundary between developer and tester. On the other hand, it may not uncover some of the problems that would be discovered with white-box testing.

Table 9-2 compares black-box, gray-box, and white-box testing.

Table 9-2 Comparing Black-Box, Gray-Box, and White-Box Testing

Black box: Internal workings of the application are not known. Also called closed-box, data-driven, or functional testing. Performed by end users, testers, and developers. Least time-consuming.

Gray box: Internal workings of the application are somewhat known. Also called translucent testing, as the tester has partial knowledge. Performed by end users, testers, and developers. More time-consuming than black-box testing but less so than white-box testing.

White box: Internal workings of the application are fully known. Also known as clear-box, structural, or code-based testing. Performed by testers and developers. Most exhaustive and time-consuming.

While code review is most typically performed on in-house applications, it may be warranted in other scenarios as well. For example, say that you are contracting with a third party to develop a web application to process credit cards. Considering the sensitive nature of the application, it would not be unusual for you to request your own code review to assess the security of the product. In many cases, more than one tool should be used in testing an application. For example, an online banking application that has had its source code updated should undergo both penetration testing with accounts of varying privilege levels and a code review of the critical modules to ensure that defects do not exist.

Code Review Process

Code review varies from organization to organization. Fagan inspections are the most formal code reviews that can occur and should adhere to the following process:

1. Plan
2. Overview
3. Prepare
4. Inspect
5. Rework
6. Follow-up

Most organizations do not strictly adhere to the Fagan inspection process. Each organization should adopt a code review process fitting for its business requirements. The more restrictive the environment, the more formal the code review process should be.

SECURE CODING BEST PRACTICES

Earlier, this chapter covered software development security best practices. In addition to those, developers should follow the secure coding best practices covered in the following sections.

Input Validation

Many attacks arise because a web application has not validated the data entered by the user (or hacker). Input validation is the process of checking all input for issues such as proper format and proper length. In many cases, validators rely on either blacklisting or whitelisting of characters or patterns. Blacklisting looks for characters or patterns to block; it is prone to blocking legitimate requests. Whitelisting looks for allowable characters or patterns and permits only those. Input validation tools fall into several categories:

- Cloud-based services
- Open source tools
- Proprietary commercial products
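Whitelist-based input validation can be sketched in a few lines. The username rule below (letters, digits, underscores, 3 to 20 characters) is an illustrative policy, not a standard.

```python
# Sketch of whitelist (allow-list) input validation: only input matching
# an explicitly allowed pattern is accepted; everything else is rejected,
# including injection payloads, without needing a blacklist of bad strings.
import re

USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def valid_username(value: str) -> bool:
    return bool(USERNAME_PATTERN.fullmatch(value))

if __name__ == "__main__":
    print(valid_username("alice_01"))                 # True
    print(valid_username("alice'; DROP TABLE users")) # False: disallowed characters
    print(valid_username("ab"))                       # False: too short
```

Note the design choice the text describes: the pattern names what is allowed, so anything novel an attacker invents is rejected by default rather than having to be anticipated on a blacklist.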

Because these tools vary in the amount of skill required, the choice should be based on the skill sets represented on the cybersecurity team. A fancy tool that no one knows how to use is not an effective tool.

Output Encoding

Encoding is the process of changing data into another form using a code. When this process is applied to output, it is done to prevent the inclusion of dangerous character types that might be inserted by malicious individuals. When processing untrusted user input for (web) applications, filter the input and encode the output; that is the most widely given advice for preventing (server-side) injections. Some common types of output encoding include the following:

- URL encoding: A method of encoding information in a Uniform Resource Identifier. There is a set of reserved characters, which have special meaning, and a set of unreserved, or safe, characters. A reserved character is encoded using the percent (%) sign followed by its hexadecimal digits.
- Unicode: A standard for encoding, representing, and handling characters in most (if not all) languages. Best known is the UTF-8 character encoding standard, which is a variable-length encoding (1, 2, 3, or 4 units of 8 bits, hence the name UTF-8).

Session Management

Session management ensures that any instance of identification and authentication to a resource is managed properly. This includes managing both desktop sessions and remote sessions.

Desktop sessions should be managed through a variety of mechanisms. Screensavers allow computers to be locked if left idle for a certain period of time; to reactivate the computer, the user must log back in. Screensavers are a timeout mechanism, and other timeout features may also be used, such as shutting down or hibernating a computer after a certain period. Session or logon limitations allow organizations to configure how many concurrent sessions a user can have. Schedule limitations allow organizations to configure the times during which a user can access a computer.

Remote sessions usually incorporate some of the same mechanisms as desktop sessions. However, remote sessions do not occur at the computer itself; rather, they are carried out over a network connection. Remote sessions should always use secure connection protocols. In addition, if users will be remotely connecting only from certain computers, the organization may want to implement some type of rule-based access control that allows only those connections.

Authentication

If you have no authentication, you have no security and no accountability. This section covers some authentication topics.

Context-Based Authentication

Context-based authentication is a form of authentication that takes multiple factors or attributes into consideration before authenticating and authorizing an entity. Rather than simply relying on the presentation of proper credentials, the system looks at other factors when making the access decision, such as the time of day or the location of the subject. Context-based security solves many issues suffered by non-context-based systems. The following are some of the key protections it provides:

- Helps prevent account takeovers made possible by simple password systems
- Helps prevent many attacks made possible by the increasing use of personal mobile devices
- Helps prevent many attacks made possible by the user's location

Context-based systems can take a number of factors into consideration when a user requests access to a resource. In combination, these attributes can create a complex set of security rules that can help prevent attacks that password systems may be powerless to detect or stop. The following sections look at some of these attributes.

Time

Cybersecurity professionals have long been able to prevent access to a network entirely by configuring login hours in a user's account profile. Until recently, however, they have not been able to prevent access to individual resources on a time-of-day basis. For example, you might want to allow Joe to access the sensitive Sales folder during the hours of 9 a.m. to 5 p.m. but deny him access to that folder during other hours. Or you might configure the system so that when Joe accesses resources after certain hours, he is required to provide another password or credential (a process often called step-up authentication) or perhaps have a text code sent to his email address that must be provided to allow the access.

Location

At one time, cybersecurity professionals knew that all network users were safely in the office and behind a secure perimeter created and defended with every tool possible. That is no longer the case. Users now access your network from home, wireless hotspots, hotel rooms, and all sorts of other locations that are less than secure. When you design authentication, you can consider the physical location of the source of an access request. For example, Alice might be allowed to access the Sales folder at any time from the office, but only from 9 a.m. to 5 p.m. from her home, and never from elsewhere. Authentication systems can also use location to flag requests to authenticate and access a resource from two different locations in a very short amount of time, one of which could be fraudulent. Finally, these systems can sometimes make real-time assessments of threat levels in the region where a request originates.

Frequency

A context-based system can make access decisions based on the frequency with which requests are made. Because multiple login requests coming in very quick succession can indicate a password-cracking attack, the system can use this information to deny access. Rapid-fire requests can also indicate that an automated process or malware, rather than an individual, is attempting the operation.

Behavioral

It is possible for authentication systems to track the behavior of an individual over time and use this information to detect when an entity is performing actions that, while within the rights of the entity, differ from the normal activity of the entity. This could be an indication that the account has been compromised. The real strength of an authentication system lies in the way you can combine the attributes just discussed to create very granular policies such as the following: Gene can access the Sales folder from 9 a.m. to 5 p.m. if he is in the office and is using his desktop device, but can access the folder only from 10 a.m. to 3 p.m. if he is using his smartphone in the office, and cannot access the folder at all from 9 a.m. to 5 p.m. if he is outside the office.
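The granular policy just described can be sketched as a rule evaluation function. The attribute names ("office", "desktop", "smartphone") are illustrative, and the sketch simplifies the last clause by denying all access from outside the office.

```python
# Sketch of a context-based access rule combining time, location, and
# device, modeled on the example policy for Gene and the Sales folder.

def gene_can_access_sales(hour: int, location: str, device: str) -> bool:
    if location == "office" and device == "desktop":
        return 9 <= hour < 17    # 9 a.m. to 5 p.m. from the office desktop
    if location == "office" and device == "smartphone":
        return 10 <= hour < 15   # 10 a.m. to 3 p.m. on a smartphone in the office
    return False                 # simplification: no access outside the office

if __name__ == "__main__":
    print(gene_can_access_sales(10, "office", "desktop"))     # True
    print(gene_can_access_sales(12, "office", "smartphone"))  # True
    print(gene_can_access_sales(9, "office", "smartphone"))   # False
    print(gene_can_access_sales(12, "home", "desktop"))       # False
```

Even this tiny rule shows why the text warns about complexity: every added attribute multiplies the combinations an administrator must reason about, and a mistaken boundary condition quietly grants or denies access.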

The main security issue is that the complexity of rule creation can lead to mistakes that actually reduce security. A complete understanding of the system is required, and special training should be provided to anyone managing it. Other security issues include privacy concerns, such as user concerns about the potential misuse of the information used to make contextual decisions. These concerns can usually be addressed through proper training about the power of context-based security.

Network Authentication Methods

One of the protocol choices that must be made in creating a remote access solution is the authentication protocol. The following are some of the most important of these protocols:

- Password Authentication Protocol (PAP): PAP provides authentication, but the credentials are sent in cleartext and can be read with a sniffer.
- Challenge Handshake Authentication Protocol (CHAP): CHAP solves the cleartext problem by operating without sending the credentials across the link. The server sends the client a string of random text called a challenge. The client hashes the text with the password and sends the result back. The server performs the same computation with its copy of the password and compares the results. If they match, the server can be assured that the user or system possesses the correct password without the password ever crossing the untrusted network. Microsoft has created its own variants of CHAP:
  - MS-CHAP v1: The first version of Microsoft's variant of CHAP. This protocol works only with Microsoft devices, and while it stores the password more securely than CHAP, like any other password-based system it is susceptible to brute-force and dictionary attacks.
  - MS-CHAP v2: This update to MS-CHAP provides stronger encryption keys and mutual authentication, and it uses different keys for sending and receiving.
- Extensible Authentication Protocol (EAP): EAP is not a single protocol but a framework for port-based access control that uses the same three components used in RADIUS. A wide variety of EAP implementations can use all sorts of authentication mechanisms, including certificates, a public key infrastructure (PKI), and even simple passwords:
  - EAP-MD5-CHAP: This variant of EAP uses the CHAP challenge process, but the challenges and responses are sent as EAP messages. It allows the use of passwords.
  - EAP-TLS: This form of EAP requires a PKI because it requires certificates on both the server and clients. It is, however, immune to password-based attacks because it does not use passwords.
  - EAP-TTLS: This form of EAP requires a certificate on the server only. The client uses a password, but the password is sent within a protected EAP message. It is, however, susceptible to password-based attacks.
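The CHAP exchange can be sketched in a few lines. Per RFC 1994, the response is the MD5 hash of the message identifier, the shared secret, and the challenge concatenated together; the secret itself never crosses the link.

```python
# Sketch of CHAP's challenge-response exchange (RFC 1994): the server
# issues a random challenge, the client returns MD5(id || secret || challenge),
# and the server recomputes the same value to verify possession of the secret.
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def server_verify(identifier: int, secret: bytes,
                  challenge: bytes, response: bytes) -> bool:
    return chap_response(identifier, secret, challenge) == response

if __name__ == "__main__":
    challenge = os.urandom(16)                      # random challenge from the server
    resp = chap_response(1, b"s3cret", challenge)   # computed by the client
    print(server_verify(1, b"s3cret", challenge, resp))  # True
    print(server_verify(1, b"wrong", challenge, resp))   # False
```

Because the challenge is random and fresh each time, a captured response cannot simply be replayed later; the weakness that remains is offline guessing of the secret, which is why complex passwords are advised.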

Table 9-3 compares the authentication protocols described here.

Table 9-3 Authentication Protocols

PAP
Advantages: Simplicity.
Disadvantages: Password sent in cleartext.
Guidelines/Notes: Do not use.

CHAP
Advantages: No passwords are exchanged; widely supported standard.
Disadvantages: Susceptible to dictionary and brute-force attacks.
Guidelines/Notes: Ensure complex passwords.

MS-CHAP v1
Advantages: No passwords are exchanged; stronger password storage than CHAP.
Disadvantages: Susceptible to dictionary and brute-force attacks; supported only on Microsoft devices.
Guidelines/Notes: Ensure complex passwords; if possible, use MS-CHAP v2 instead.

MS-CHAP v2
Advantages: No passwords are exchanged; stronger password storage than CHAP; mutual authentication.
Disadvantages: Susceptible to dictionary and brute-force attacks; supported only on Microsoft devices; not supported on some legacy Microsoft clients.
Guidelines/Notes: Ensure complex passwords.

EAP-MD5-CHAP
Advantages: Supports password-based authentication; widely supported standard.
Disadvantages: Susceptible to dictionary and brute-force attacks.
Guidelines/Notes: Ensure complex passwords.

EAP-TLS
Advantages: The most secure form of EAP; uses certificates on the server and client.
Disadvantages: Requires a PKI; more complex to configure.
Guidelines/Notes: No known issues.

EAP-TTLS
Advantages: Widely supported standard; as secure as EAP-TLS; only requires a certificate on the server; allows passwords on the client.
Disadvantages: Susceptible to dictionary and brute-force attacks; more complex to configure.
Guidelines/Notes: Ensure complex passwords.

IEEE 802.1X

IEEE 802.1X is a standard that defines a framework for centralized port-based authentication. It can be applied to both wireless and wired networks and uses three components:

Supplicant: The user or device requesting access to the network

Authenticator: The device through which the supplicant is attempting to access the network

Authentication server: The centralized device that performs authentication

The role of the authenticator can be performed by a wide variety of network access devices, including remote-access servers (both dial-up and VPN), switches, and wireless access points. The role of the authentication server can be performed by a Remote Authentication Dial-in User Service (RADIUS) or Terminal Access Controller Access-Control System Plus (TACACS+) server. The authenticator requests credentials from the supplicant and, upon receiving those credentials, relays them to the authentication server, where they are validated.

Upon successful verification, the authenticator is notified to open the port for the supplicant to allow network access. This process is illustrated in Figure 9-11.
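The relay relationship among the three roles can be modeled as a toy sketch; the names and credentials below are purely illustrative, and real deployments carry this exchange inside EAP over RADIUS or TACACS+:

```python
# Known only to the authentication server (RADIUS/TACACS+).
CREDENTIAL_DB = {"alice": "s3cret"}

def authentication_server(username: str, password: str) -> bool:
    # The only component that actually validates credentials.
    return CREDENTIAL_DB.get(username) == password

def authenticator(username: str, password: str) -> str:
    # Switch/AP/remote-access server: relays credentials without
    # validating them, and opens the port only on server approval.
    if authentication_server(username, password):
        return "port open"
    return "port closed"

# Supplicant attempts:
print(authenticator("alice", "s3cret"))  # port open
print(authenticator("alice", "wrong"))   # port closed
```

The key design point is that the authenticator never holds the credential database; it is purely a relay and gatekeeper.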

Figure 9-11 IEEE 802.1X

While RADIUS and TACACS+ perform the same roles, they have different characteristics. These differences must be considered in the choice of a method. Keep in mind also that while RADIUS is a standard, TACACS+ is Cisco proprietary. Table 9-4 compares them.

Table 9-4 RADIUS and TACACS+

Characteristic | RADIUS | TACACS+
Transport Protocol | Uses UDP, which may result in faster response | Uses TCP, which offers more information for troubleshooting
Confidentiality | Encrypts only the password in the access-request packet | Encrypts the entire body of the packet but leaves a standard TACACS+ header for troubleshooting
Authentication and Authorization | Combines authentication and authorization | Separates authentication, authorization, and accounting processes
Supported Layer 3 Protocols | Does not support NetBIOS Frame Protocol Control protocol or X.25 PAD connections | Supports all protocols
Devices | Does not support securing the available commands on routers and switches | Supports securing the available commands on routers and switches
Traffic | Creates less traffic | Creates more traffic

Biometric Considerations

When considering biometric technologies, security professionals should understand the following terms:

Enrollment time: The process of obtaining the sample that is used by the biometric system. This process requires actions that must be repeated several times.

Feature extraction: The approach to obtaining biometric information from a collected sample of a user’s physiological or behavioral characteristics.

Accuracy: The most important characteristic of biometric systems; how correct the overall readings will be.

Throughput rate: The rate at which the biometric system can scan characteristics and complete the analysis to permit or deny access. The acceptable rate is 6–10 subjects per minute. A single user should be able to complete the process in 5–10 seconds.

Acceptability: Describes the likelihood that users will accept and follow the system.

False rejection rate (FRR): A measurement of the percentage of valid users that will be falsely rejected by the system. This is called a Type I error.

False acceptance rate (FAR): A measurement of the percentage of invalid users that will be falsely accepted by the system. This is called a Type II error. Type II errors are more dangerous than Type I errors.

Crossover error rate (CER): The point at which FRR equals FAR. Expressed as a percentage, this is the most important metric.
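The interplay of FRR, FAR, and CER can be illustrated with a short sketch. The match scores below are made-up data, and sweeping an acceptance threshold is a simplification of how vendors derive the CER:

```python
# Hypothetical match scores (higher = stronger match to the enrolled template).
genuine = [0.9, 0.8, 0.7, 0.6, 0.5]        # scores produced by valid users
impostor = [0.65, 0.55, 0.45, 0.35, 0.25]  # scores produced by invalid users

def frr(t):  # Type I error rate: valid users falsely rejected at threshold t
    return sum(s < t for s in genuine) / len(genuine)

def far(t):  # Type II error rate: invalid users falsely accepted at threshold t
    return sum(s >= t for s in impostor) / len(impostor)

# Sweep the acceptance threshold; the CER sits where FAR and FRR meet.
threshold = min((t / 100 for t in range(101)), key=lambda t: abs(far(t) - frr(t)))
print(threshold, far(threshold), frr(threshold))
```

Raising the threshold lowers FAR but raises FRR (and vice versa); the crossover point is what lets you compare systems with a single number.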

When analyzing biometric systems, security professionals often refer to a Zephyr chart that illustrates the comparative strengths and weaknesses of biometric systems. However, you should also consider how effective each biometric system is and its level of user acceptance. The following is a list of the more popular biometric methods ranked by effectiveness, with the most effective being first:

1. Iris scan
2. Retina scan
3. Fingerprint
4. Hand print
5. Hand geometry
6. Voice pattern
7. Keystroke pattern
8. Signature dynamics

The following is a list of the more popular biometric methods ranked by user acceptance, with the methods that are ranked more popular by users being first:

1. Voice pattern
2. Keystroke pattern
3. Signature dynamics
4. Hand geometry
5. Hand print
6. Fingerprint
7. Iris scan
8. Retina scan

When considering FAR, FRR, and CER, smaller values are better. FAR errors are more dangerous than FRR errors. Security professionals can use the CER for comparative analysis when helping their organization decide which system to implement. For example, voice print systems usually have higher CERs than iris scans, hand geometry, or fingerprints. Figure 9-12 shows the biometric enrollment and authentication process.

Figure 9-12 Biometric Enrollment and Authentication Process

Certificate-Based Authentication

The security of an authentication system can be raised significantly if the system is certificate based rather than password or PIN based. A digital certificate provides an entity (usually a user) with the credentials to prove its identity and associates that identity with a public key. At minimum, a digital certificate must provide the serial number, the issuer, the subject (owner), and the public key. Using certificate-based authentication requires the deployment of a public key infrastructure (PKI). PKIs include systems, software, and communication protocols that distribute, manage, and control public key cryptography. A PKI publishes digital certificates. Because a PKI establishes trust within an environment, a PKI can certify that a public key is tied to an entity and verify that a public key is valid. Public keys are published through digital certificates.

In some situations, it may be necessary to trust another organization's certificates or vice versa. Cross-certification establishes trust relationships between certificate authorities (CAs) so that the participating CAs can rely on the other participants' digital certificates and public keys. It enables users to validate each other's certificates even when they are certified under different certification hierarchies. A CA for one organization can validate digital certificates from another organization's CA when a cross-certification trust relationship exists.

Data Protection

At this point, the criticality of protecting sensitive data transferred by software should be quite clear. Sensitive data in this context includes usernames, passwords, encryption keys, and paths that applications need to function but that would cause harm if discovered. Determining the proper method of securing this information is critical and not easy. In the case of passwords, a generally accepted rule is to not hard-code them (although this was not always the case), because hard-coded passwords are difficult to change. When passwords must be included with application code, they should be protected using encryption, which makes them difficult to reverse or discover.

Parameterized Queries

There are two types of queries: parameterized and nonparameterized. The difference between the two is that parameterized queries require input values, or parameters, and nonparameterized queries do not. The most important reason to use parameterized queries is to avoid SQL injection attacks. The following are some guidelines:

Use parameterized queries in ASP.NET and prepared statements in Java to perform escaping of dangerous characters before the SQL statement is passed to the database. To prevent command injection attacks in SQL queries, use parameterized APIs (or manually quote the strings if parameterized APIs are unavailable).
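To make the guidance concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table and data are hypothetical, and the same placeholder principle applies to ASP.NET parameterized queries and Java prepared statements:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Classic injection payload supplied as the "username".
user_input = "alice' OR '1'='1"

# Nonparameterized: the input becomes part of the SQL text itself,
# so the OR clause matches every row.
unsafe_sql = "SELECT role FROM users WHERE name = '%s'" % user_input
leaked = conn.execute(unsafe_sql).fetchall()   # [('admin',)]

# Parameterized: the driver treats the input strictly as data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()                                   # [] -- no such literal name
print(leaked, safe)
```

The placeholder version cannot be tricked into changing the query's structure, because the value never passes through the SQL parser as code.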

STATIC ANALYSIS TOOLS

Static analysis refers to testing or examining software when it is not running. The most common type of static analysis is code review: the systematic investigation of the code for security and functional problems. It can take many forms, from simple peer review to formal code review. Code review was covered earlier in this chapter, and static analysis was covered in more detail in Chapter 4.

DYNAMIC ANALYSIS TOOLS

Dynamic analysis is testing performed on software while it is running. This testing can be performed manually or by using automated testing tools. There are two general approaches to dynamic analysis, which were covered in Chapter 4 but are worth reviewing:

Synthetic transaction monitoring: A type of proactive monitoring, often preferred for websites and applications. It provides insight into the application’s availability and performance, warning of any potential issue before users experience any degradation in application behavior. It uses external agents to run scripted transactions against an application. For example, Microsoft’s System Center Operations Manager (SCOM) uses synthetic transactions to monitor databases, websites, and TCP port usage.

Real user monitoring (RUM): A type of passive monitoring that captures and analyzes every transaction of every application or website user. Unlike synthetic monitoring, which attempts to gain performance insights by regularly testing synthetic interactions, RUM cuts through the guesswork by analyzing exactly how your users are interacting with the application.

FORMAL METHODS FOR VERIFICATION OF CRITICAL SOFTWARE

Formal code review is an extremely thorough, line-by-line inspection, usually performed by multiple participants using multiple phases. This is the most time-consuming type of code review but the most effective at finding defects. Formal methods can be used at a number of levels:

Level 0: Formal specification may be undertaken and then a program developed from this informally. This is the least formal method and the least expensive to undertake.

Level 1: Formal development and formal verification may be used to produce a program in a more formal manner. For example, proofs of properties or refinement from the specification to a program may be undertaken. This may be most appropriate in high-integrity systems involving safety or security.

Level 2: Theorem provers may be used to undertake fully formal machine-checked proofs. This can be very expensive and is only practically worthwhile if the cost of mistakes is extremely high (e.g., in critical parts of microprocessor design).

SERVICE-ORIENTED ARCHITECTURE

A newer approach to providing a distributed computing model is service-oriented architecture (SOA). It provides web-based communication functionality without requiring redundant code to be written for each application. SOA is considered a software assurance best practice because it uses standardized interfaces and components called service brokers to facilitate communication among web-based applications. Let's look at some implementations.

Security Assertion Markup Language (SAML)

Security Assertion Markup Language (SAML) is a security attestation model built on XML and SOAP-based services that allows for the exchange of authentication and authorization data between systems and supports federated identity management. SAML is covered in depth in Chapter 8, “Security Solutions for Infrastructure Management.”

Simple Object Access Protocol (SOAP)

Simple Object Access Protocol (SOAP) is a protocol specification for exchanging structured information in the implementation of web services in computer networks. The SOAP specification defines a messaging framework that consists of the following:

The SOAP processing model: Defines the rules for processing a SOAP message

The SOAP extensibility model: Defines the concepts of SOAP features and SOAP modules

The SOAP binding framework: Describes the rules for defining a binding to an underlying protocol that can be used for exchanging SOAP messages between SOAP nodes

The SOAP message: Defines the structure of a SOAP message
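A skeletal SOAP 1.2 envelope shows how the header and body fit inside the message structure the framework defines. This is a structural sketch only; the comments mark where real payload and metadata would go:

```python
import xml.etree.ElementTree as ET

soap_message = """\
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <!-- metadata and routing; remains visible to intermediaries
         even when the body is encrypted -->
  </soap:Header>
  <soap:Body>
    <!-- application payload; may be partially or completely encrypted -->
  </soap:Body>
</soap:Envelope>
"""

root = ET.fromstring(soap_message)
# The parsed tags carry the SOAP 1.2 envelope namespace.
print([child.tag.split("}")[1] for child in root])  # ['Header', 'Body']
```

Note how everything, including this small amount of structure, is carried as verbose XML; this overhead is the verbosity complaint discussed next.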

One of the disadvantages of SOAP is the verbosity of its operation, which has led many developers to use the REST architecture, discussed next, instead. From a security perspective, while the SOAP body can be partially or completely encrypted, the SOAP header is not encrypted, which allows intermediaries to view the header data.

Representational State Transfer (REST)

Representational State Transfer (REST) is a client/server model for interacting with content on remote systems, typically using HTTP. It involves accessing and modifying existing content and also adding content to a system in a particular way. REST does not require a specific message format during HTTP resource exchanges; it is up to a RESTful web service to choose which formats are supported. RESTful services are services that do not violate REST's required constraints. XML and JavaScript Object Notation (JSON) are two of the most popular formats used by RESTful web services. JSON is a simple text-based message format that is often used with RESTful web services. Like XML, it is designed to be readable, which can help when debugging and testing. JSON is derived from JavaScript and, therefore, is very popular as a data format in web applications. REST/JSON has several advantages over SOAP/XML:

Size: REST/JSON is a lot smaller and less bloated than SOAP/XML. Therefore, much less data is passed over the network, which is particularly important for mobile devices.

Efficiency: REST/JSON makes it easier to parse data, thereby making it easier to extract and convert the data. As a result, it requires much less from the client's CPU.

Caching: REST/JSON provides improved response times and server loading due to support for caching.

Implementation: REST/JSON interfaces are much easier than SOAP/XML to design and implement.
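The size difference is easy to demonstrate: below, the same hypothetical record is rendered as JSON and as a hand-written XML fragment (field names are made up), and the JSON form is both smaller and directly parseable into native types:

```python
import json

record = {"account": "12345", "balance": 250.75, "currency": "USD"}

# JSON payload as a RESTful service might return it.
as_json = json.dumps(record)

# An equivalent XML rendering, spelled out by hand for comparison.
as_xml = (
    "<account><number>12345</number>"
    "<balance>250.75</balance>"
    "<currency>USD</currency></account>"
)

print(len(as_json), len(as_xml))  # the JSON payload is smaller
# json.loads() parses straight into native Python types.
assert json.loads(as_json)["balance"] == 250.75
```

A real SOAP message would also wrap the XML payload in an envelope, widening the gap further.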

SOAP/XML is generally preferred in transactional services such as banking services.

Microservices

An SOA microservice is a self-contained piece of business functionality with clear interfaces, not a layer in a monolithic application. The microservices architecture is a variant of the SOA structural style that arranges an application as a collection of these loosely coupled services. The focus is on building single-function modules with well-defined interfaces and operations. Figure 9-13 shows the microservices architecture in comparison with a typical monolithic structure.

FIGURE 9-13 Microservices

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 9-5 lists a reference of these key topics and the page numbers on which each is found.

Table 9-5 Key Topics in Chapter 9

Key Topic Element | Description | Page Number
Bulleted list | SCEP authorization mechanisms | 258
Bulleted list | Countermeasures to maintenance hooks | 260
Figure 9-2 | Cross-site request forgery | 261
Bulleted list | Countermeasures to CSRF | 261
Figure 9-3 | Click-jacking | 262
Bulleted list | NIST recommendations for hardware/embedded device analysis | 264
Figure 9-6 | Secure boot | 265
Numbered list | SDLC | 267
Numbered list | Stress testing | 272
Bulleted list | Types of regression testing | 273
Bulleted list | Types of code review | 273
Table 9-2 | Black-box, gray-box, and white-box testing comparison | 274
Numbered list | Code review process | 275
Bulleted list | Output encoding types | 276
Bulleted list | Network authentication protocols | 279
Table 9-3 | Authentication protocols comparison | 280
Bulleted list | 802.1X components | 281
Table 9-4 | RADIUS and TACACS+ comparison | 282
Bulleted list | Biometric terms | 282
Figure 9-12 | Biometric enrollment and authentication process | 284
Bulleted list | Guidelines for parameterized queries | 285
Bulleted list | Dynamic analysis approaches | 286
Bulleted list | Formal method levels for code review | 286
Bulleted list | SOAP specification framework | 287
Bulleted list | REST/JSON advantages over SOAP/XML | 288
Figure 9-13 | Microservices architecture vs. a typical monolithic structure | 289

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

corporate-owned, personally enabled (COPE)
application wrapping
remote wipes
Simple Certificate Enrollment Protocol (SCEP)
maintenance hook
time-of-check/time-of-use
cross-site request forgery (CSRF)
click-jacking
embedded
System-on-Chip (SoC)
software development life cycle (SDLC)
DevSecOps
user acceptance testing (UAT)
stress testing
security regression testing
input validation
output encoding
parameterized queries
static analysis
dynamic analysis
formal methods
service-oriented architecture (SOA)
Security Assertions Markup Language (SAML)
Simple Object Access Protocol (SOAP)
Representational State Transfer (REST)
microservice

REVIEW QUESTIONS

1. ____________________ is a strategy in which an organization purchases mobile devices for users and users manage those devices.

2. List at least one step in the NIST SP 800-163 Rev 1 process.

3. Match the terms on the left with their definitions on the right.

Terms

Definitions

Maintenance hooks

Attack that causes an end user to execute unwanted actions on a web application in which the user is currently authenticated

Time-of-check/time-of-use attacks

Attack that crafts a transparent page or frame over a legitimate-looking page that entices the user to click something

Cross-site request forgery (CSRF)

Attack that attempts to take advantage of the sequence of events that occurs as the system completes common tasks

Click-jacking

A set of instructions built into the code that allows someone who knows about the so-called backdoor to use the instructions to connect to view and edit the code without using the normal access controls

4. _______________ is a client/server model for interacting with content on remote systems, typically using HTTP.

5. List at least two advantages of REST/JSON over SOAP/XML.

6. Match the terms on the left with their definitions on the right.

Ter ms

Definitions

Embedded system

An integrated circuit that includes all components of a computer or another electronic system

SoC

Provides a predictable framework of procedures designed to identify all requirements with regard to functionality, cost, reliability, and delivery schedule and ensure that each is met in the final solution

SDLC

A computer system with a dedicated function within a larger system

DevSecOps

Development concept, emphasizing security, that grew out of the DevOps approach

7. __________________________ determines the workload that the application can withstand.

8. List at least two forms of code review.

9. Match the terms on the left with their definitions on the right.

Terms

Definitions

Regression testing

Also called translucent testing, as the tester has partial knowledge

Gray-box testing

Internal workings of the application are fully known

White-box testing

Internal workings of the application are not known

Black-box testing

Testing the security after a change is made to the software

10. ___________________ is a method to encode information in a Uniform Resource Identifier.

Chapter 10

Hardware Assurance Best Practices

This chapter covers the following topics related to Objective 2.3 (Explain hardware assurance best practices) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Hardware root of trust: Introduces the Trusted Platform Module (TPM) and hardware security module (HSM).

eFuse: Covers the dynamic real-time reprogramming of computer chips.

Unified Extensible Firmware Interface (UEFI): Discusses the newer UEFI firmware interface.

Trusted foundry: Describes a program for hardware sourcing assurance.

Secure processing: Covers Trusted Execution, secure enclave, processor security extensions, and atomic execution.

Anti-tamper: Explores methods of preventing physical attacks.

Self-encrypting drive: Covers automatic drive protections.

Trusted firmware updates: Discusses methods for safely acquiring firmware updates.

Measured Boot and attestation: Covers boot file protections.

Bus encryption: Describes the use of encrypted program instructions on a data bus.

Organizations acquire hardware and services as part of day-to-day business. The supply chain for tangible property is vital to every organization. An organization should understand all risks for the supply chain and implement a risk management program that is appropriate for it. This chapter discusses best practices for ensuring that all hardware is free of security issues out of the box.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these ten self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 10-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 10-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question
Hardware Root of Trust | 1
eFuse | 2
Unified Extensible Firmware Interface (UEFI) | 3
Trusted Foundry | 4
Secure Processing | 5
Anti-Tamper | 6
Self-encrypting Drive | 7
Trusted Firmware Updates | 8
Measured Boot and Attestation | 9
Bus Encryption | 10

1. Which of the following is a draft publication that gives guidelines on hardware-rooted security in mobile devices?
a. NIST SP 800-164
b. IEEE 802.11ac
c. FIPS 120
d. IEC/IOC 270017

2. Which of the following allows for the dynamic real-time reprogramming of computer chips?
a. TAXII
b. eFuse
c. UEFI
d. TPM

3. Which of the following is designed as a replacement for the traditional PC BIOS?
a. TPM
b. Secure boot
c. UEFI
d. NX bit

4. Which of the following ensures that systems have access to leading-edge integrated circuits from secure, domestic sources?
a. DoD
b. FIPS 120
c. OWASP
d. Trusted Foundry

5. Which of the following is a part of an operating system that cannot be compromised even when the operating system kernel is compromised?
a. Secure enclave
b. Processor security extensions
c. Atomic execution
d. XN bit

6. Which of the following technologies can zero out sensitive data if it detects penetration of its security and may even do this with no power?
a. TPM
b. Anti-tamper
c. Secure enclave
d. Measured boot

7. Which of the following is used to provide transparent encryption on self-encrypting drives?
a. DEK
b. TPM
c. UEFI
d. ENISA

8. Which of the following is the key to trusted firmware updates?
a. Obtain firmware updates only from the vendor directly
b. Use a third-party facilitator to obtain updates
c. Disable Secure Boot
d. Follow the specific directions with the update

9. Windows Secure Boot is an example of what technology?
a. Security extensions
b. Secure enclave
c. UEFI
d. Measured boot

10. What is used by newer Microsoft operating systems to protect certificates, BIOS, passwords, and program authenticity?
a. Security extensions
b. Bus encryption
c. UEFI
d. Secure enclaves

FOUNDATION TOPICS

HARDWARE ROOT OF TRUST

NIST SP 800-164 is a draft Special Publication that gives guidelines on hardware-rooted security in mobile devices. It defines three required security components for mobile devices: Roots of Trust (RoTs), an application programming interface (API) to expose the RoTs to the platform, and a Policy Enforcement Engine (PEnE).

Roots of Trust are the foundation of assurance of the trustworthiness of a mobile device. RoTs must always behave in an expected manner because their misbehavior cannot be detected. Hardware RoTs are preferred over software RoTs due to their immutability, smaller attack surfaces, and more reliable behavior. They can provide a higher degree of assurance that they can be relied upon to perform their trusted function or functions. Software RoTs could provide the benefit of quick deployment to different platforms. To support device integrity, isolation, and protected storage, devices should implement the following RoTs:

Root of Trust for Storage (RTS)
Root of Trust for Verification (RTV)
Root of Trust for Integrity (RTI)
Root of Trust for Reporting (RTR)
Root of Trust for Measurement (RTM)

The RoTs need to be exposed by the operating system to applications through an open API. This provides application developers a set of security services and capabilities they can use to secure their applications and protect the data they process. By providing an abstracted layer of security services and capabilities, these APIs can reduce the burden on application developers to implement low-level security features, and instead allow them to reuse trusted components provided in the RoTs and the OS. The APIs should be standardized within a given mobile platform and, to the extent possible, across platforms. Applications can use the APIs, and the associated RoTs, to request device integrity reports, protect data through encryption services provided by the RTS, and store and retrieve authentication credentials and other sensitive data.

The PEnE enforces policies on the device with the help of other device components and enables the processing, maintenance, and management of policies on both the device and in the information owners' environments. The PEnE provides information owners with the ability to express the control they require over their information. The PEnE needs to be trusted to implement the information owner's requirements correctly and to prevent one information owner's requirements from adversely affecting another's. To perform key functions, the PEnE needs to be able to query the device's configuration and state.

Mobile devices should implement the following three mobile security capabilities to address the challenges with mobile device security:

Device integrity: Device integrity is the absence of corruption in the hardware, firmware, and software of a device. A mobile device can provide evidence that it has maintained device integrity if its software, firmware, and hardware configurations can be shown to be in a state that is trusted by a relying party.

Isolation: Isolation prevents unintended interaction between applications and information contexts on the same device.

Protected storage: Protected storage preserves the confidentiality and integrity of data on the device while at rest, while in use (in the event an unauthorized application attempts to access an item in protected storage), and upon revocation of access.

Trusted Platform Module (TPM)

Controlling network access to devices is helpful, but in many cases, devices such as laptops, tablets, and smartphones leave your network, leaving behind all the measures you have taken to protect the network. There is also a risk of these devices being stolen or lost. For these situations, the best measure to take is full disk encryption. The best implementation of full disk encryption requires and makes use of a Trusted Platform Module (TPM) chip. A TPM chip is a security chip installed on a computer's motherboard that is responsible for protecting symmetric and asymmetric keys, hashes, and digital certificates. This chip provides services to protect passwords, encrypt drives, and manage digital rights, making it much harder for attackers to gain access to computers that have TPM chips enabled.

Two particularly popular uses of TPM are binding and sealing. Binding actually “binds” the hard drive through encryption to a particular computer. Because the decryption key is stored in the TPM chip, the hard drive's contents are available only when the drive is connected to the original computer. But keep in mind that all the contents are at risk if the TPM chip fails and a backup of the key does not exist.

Sealing, on the other hand, “seals” the system state to a particular hardware and software configuration. This prevents attackers from making any changes to the system. However, it can also make installing a new piece of hardware or a new operating system much harder. The system can only boot after the TPM chip verifies system integrity by comparing the original computed hash value of the system's configuration to the hash value of its configuration at boot time.

A TPM chip consists of both static memory and versatile memory that is used to retain the important information when the computer is turned off:

Endorsement key (EK): The EK is persistent memory installed by the manufacturer that contains a public/private key pair.

Storage root key (SRK): The SRK is persistent memory that secures the keys stored in the TPM.

Attestation identity key (AIK): The AIK is versatile memory that ensures the integrity of the EK.

Platform configuration register (PCR) hash: A PCR hash is versatile memory that stores data hashes for the sealing function.

Storage keys: A storage key is versatile memory that contains the keys used to encrypt the computer's storage, including hard drives, USB flash drives, and so on.
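The PCR values used by sealing are never overwritten, only "extended". A simplified sketch of the extend operation follows (using SHA-256 and invented component names; real TPMs support multiple hash-algorithm banks and many PCRs):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # Extend, don't overwrite: new PCR = H(old PCR || H(measured component)).
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at power-on
for component in (b"firmware image", b"bootloader", b"os kernel"):
    pcr = pcr_extend(pcr, component)

# Sealing records this value; at boot the TPM recomputes the chain, and
# a change to any measured component (or to their order) produces a
# different PCR value, so the sealed secret stays locked.
print(pcr.hex()[:16])
```

Because each new value is a hash over the previous one, the final PCR value commits to the entire measured boot sequence, not just the last component.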

BitLocker and BitLocker to Go by Microsoft are well-known full disk encryption products. The former is used to encrypt hard drives, including operating system drives, and the latter is used to encrypt information on portable devices such as USB devices. However, there are other options. Additional whole disk encryption products include:

PGP Whole Disk Encryption
SecurStar DriveCrypt
Sophos SafeGuard
Trend Micro Maximum Security

Virtual TPM

A virtual TPM (vTPM) chip is a software object that performs the functions of a TPM chip. It is a system that enables trusted computing for an unlimited number of virtual machines on a single hardware platform. A vTPM makes secure storage and cryptographic functions available to operating systems and applications running in virtual machines.

Figure 10-1 shows one possible implementation of vTPM by IBM. The TPM chip in the host system is replaced by a more powerful vTPM (PCIXCC-vTPM). The virtual machine (VM) named Dom-TPM is a VM whose only purpose is to proxy for the PCIXCC-vTPM and make TPM instances available to all other VMs running on the system.


FIGURE 10-1 vTPM Possible Solution 1

Another possible approach suggested by IBM is to run vTPMs on each VM, as shown in Figure 10-2. In this case, the VM named Dom-TPM talks to the physical TPM chip in the host and maintains separate TPM instances for each VM.

Figure 10-2 vTPM Possible Solution 2

Hardware Security Module (HSM)

A hardware security module (HSM) is an appliance that safeguards and manages digital keys used with strong authentication and provides crypto processing. It attaches directly to a computer or server. Among the functions of an HSM are:

Onboard secure cryptographic key generation
Onboard secure cryptographic key storage and management
Use of cryptographic and sensitive data material
Offloading of application servers for complete asymmetric and symmetric cryptography

HSM devices can be used in a variety of scenarios, including the following:

- In a PKI environment to generate, store, and manage key pairs
- In card payment systems to encrypt PINs and to load keys into protected memory
- To perform the processing for applications that use TLS/SSL
- In Domain Name System Security Extensions (DNSSEC; a secure form of DNS that protects the integrity of zone files) to store the keys used to sign the zone file

There are some drawbacks to an HSM, including the following:

- High cost
- Lack of a standard for the strength of the random number generator
- Difficulty in upgrading

When selecting an HSM product, you must ensure that it provides the services needed, based on its application. Remember that each HSM has different features and different encryption technologies, and some HSM devices might not support a strong enough encryption level to meet an enterprise’s needs. Moreover, you should keep in mind the portable nature of these devices and protect the physical security of the area where they are connected.

MicroSD HSM

A microSD HSM is an HSM that connects to the microSD port on a device that has such a port. The card is specifically suited for mobile apps written for Android and is supported by most Android phones and tablets with a microSD card slot.

Moreover, some microSD cards can be made to support various cryptographic algorithms, such as AES, RSA, SHA-1, SHA-256, and Triple DES, as well as the Diffie-Hellman key exchange. Cards with this support can provide the same protections as a microSD HSM, an advantage over microSD cards that lack it.

EFUSE

Computer logic is generally hard-coded onto a chip and cannot be changed after the chip is manufactured. An eFuse allows for the dynamic real-time reprogramming of computer chips. Utilizing a set of eFuses, a chip manufacturer can allow for the circuits on a chip to change while it is in operation. One use is to prevent downgrading the firmware of a device. Systems equipped with an eFuse will check the number of burnt fuses before attempting to install new firmware. If too many fuses are burnt (meaning the firmware to be installed is older than the current firmware), the bootloader will prevent installation of the older firmware. An eFuse can also be used to help secure a stolen device. For example, Samsung uses an eFuse to indicate when an untrusted (non-Samsung) path is discovered. Once the eFuse is set (when the path is discovered), the device cannot read the data previously stored.
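The anti-rollback check described above can be illustrated with a short sketch. The one-fuse-per-release convention and the version numbers here are hypothetical, not a description of any vendor's actual scheme:

```python
def allow_firmware_install(burnt_fuse_count: int, new_fw_version: int) -> bool:
    """Sketch of an eFuse-based anti-rollback check.

    Assumption for illustration: one fuse is burnt per firmware release,
    so the count of burnt fuses encodes the minimum acceptable version.
    """
    minimum_version = burnt_fuse_count
    if new_fw_version < minimum_version:
        return False  # bootloader refuses older (downgraded) firmware
    return True

# A device with 5 burnt fuses rejects anything older than version 5.
print(allow_firmware_install(5, 4))  # False: downgrade blocked
print(allow_firmware_install(5, 6))  # True: upgrade allowed
```

Because fuses can only be burnt, never un-burnt, the recorded minimum version can only move forward, which is what makes the mechanism resistant to rollback.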

UNIFIED EXTENSIBLE FIRMWARE INTERFACE (UEFI)

A computer’s BIOS contains the basic instructions that a computer needs to boot and load the operating system from a drive. The process of updating the BIOS with the latest software is referred to as flashing the BIOS. Security professionals should ensure that any BIOS updates are obtained from the BIOS vendor and have not been tampered with in any way.

The traditional BIOS has been replaced with the Unified Extensible Firmware Interface (UEFI). UEFI maintains support for legacy BIOS devices but is considered a more advanced interface than traditional BIOS. BIOS uses the master boot record (MBR) to save information about the hard drive data, while UEFI uses the GUID partition table (GPT). MBR disks support a maximum of four primary partitions, each limited to 2 terabytes (TB). UEFI allows up to 128 partitions, with a total disk limit of 9.4 zettabytes (ZB), or 9.4 billion terabytes. UEFI is also faster and more secure than traditional BIOS. UEFI Secure Boot requires boot loaders to have a digital signature. UEFI is an open standard interface layer between the firmware and the operating system that requires firmware updates to be digitally signed. Security professionals should understand the following points regarding UEFI:

- Designed as a replacement for traditional PC BIOS. Additional functionality includes support for Secure Boot, network authentication, and universal graphics drivers.
- Protects against BIOS malware attacks, including rootkits.

- Secure Boot requires that all boot loader components (e.g., OS kernel, drivers) attest to their identity (digital signature), and the attestation is compared to the trusted list. More on Secure/Measured Boot and attestation is covered later, in the “Measured Boot and Attestation” section.
- When a computer is manufactured, a list of keys that identify trusted hardware, firmware, and operating system loader code (and in some instances, known malware) is embedded in the UEFI.
- Ensures the integrity and security of the firmware.
- Prevents malicious files from being loaded.

- Can be disabled for backward compatibility.

UEFI operates between the OS layer and the firmware layer, as shown in Figure 10-3.

Figure 10-3 UEFI

TRUSTED FOUNDRY

You must be concerned with the safety and the integrity of the hardware that you purchase. The following are some of the methods used to provide this assurance:

- Trusted Foundry: The Trusted Foundry program can help you exercise care in ensuring the authenticity and integrity of the components of hardware purchased from a vendor. This U.S. Department of Defense (DoD) program identifies “trusted vendors” and ensures a “trusted supply chain.” A trusted supply chain begins with trusted design and continues with trusted mask, foundry, packaging/assembly, and test services. It ensures that systems have access to leading-edge integrated circuits from secure, domestic sources. At the time of this writing, 77 vendors have been certified as trusted.

- Source authenticity of hardware: When purchasing hardware to support any network or security solution, a security professional must ensure that the hardware’s authenticity can be verified. Just as expensive consumer items such as purses and watches can be counterfeited, so can network equipment. While the dangers with counterfeit consumer items are typically confined to a lack of authenticity and potentially lower quality, the dangers presented by counterfeit network gear can extend to the presence of backdoors in the software or firmware. Always purchase equipment directly from the manufacturer when possible, and when purchasing from resellers, use caution and insist on a certificate of authenticity. In any case where the price seems too good to be true, keep in mind that it may be an indication the gear is not authentic.
- OEM documentation: One of the ways you can reduce the likelihood of purchasing counterfeit equipment is to insist on the inclusion of verifiable original equipment manufacturer (OEM) documentation. In many cases, this paperwork includes anticounterfeiting features. Make sure to use the vendor website to verify all the various identifying numbers in the documentation.

SECURE PROCESSING

Secure processing is a concept that encompasses a variety of technologies to prevent any insecure actions on the part of the CPU or processor. In some cases these technologies involve securing the actions of the processor itself, while other approaches tackle the issue where the data is stored. This section introduces some of these technologies and approaches.

Trusted Execution

Trusted Execution (TE) is a collection of features that is used to verify the integrity of the system and implement security policies, which together can be used to enhance the trust level of the complete system. An example is Intel Trusted Execution Technology (Intel TXT). This approach is shown in Figure 10-4.

FIGURE 10-4 Intel Trusted Execution Technology

Secure Enclave

A secure enclave is a part of an operating system that cannot be compromised even when the operating system kernel is compromised, because the enclave has its own CPU and is separated from the rest of the system. This means security functions remain intact even when someone has gained control of the OS. Secure enclaves are a relatively recent technology being developed to provide additional security. Cisco, Microsoft, and Apple all have implementations of secure enclaves that differ in implementation but all share the same goal of creating an area that cannot be compromised even when the OS is.

Processor Security Extensions

Processor security extensions are sets of security-related instruction codes that are built into some modern CPUs. An example is Intel Software Guard Extensions (Intel SGX). It defines private regions of memory, called enclaves, whose contents are protected and unable to be either read or saved by any process outside the enclave itself, including processes running at higher privilege levels.

Another processor security technique is the use of the NX and XN bits. Their respective meanings are as follows:

- NX (no-execute) bit: Technology used in CPUs to segregate areas of memory for use by either storage of processor instructions (code) or storage of data
- XN (never execute) bit: Method for specifying areas of memory that cannot be used for execution

When these bits are available in the architecture of the system, they can be used to protect sensitive information from memory attacks. By utilizing the capability of the NX bit to segregate memory into areas where storage of processor instructions (code) and storage of data are kept separate, many attacks can be prevented. Likewise, the capability of the XN bit to mark certain areas of memory as off-limits to code execution can prevent other memory attacks.

Atomic Execution

Atomic execution, in concurrent programming, refers to program operations that run independently of any other processes (threads). Making an operation atomic consists of using synchronization mechanisms to make sure that the operation is seen, from any other thread, as a single, indivisible operation. This increases security by preventing one thread from viewing the state of the data while another thread is still in the middle of the operation. Atomicity also means that the operation either completes entirely or is rolled back to its initial state (there is no such thing as partially done).
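The synchronization idea can be illustrated with a short Python sketch, in which a lock makes a read-modify-write update atomic. The shared counter is purely illustrative and is not tied to any particular CPU feature:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(amount: int, times: int) -> None:
    global counter
    for _ in range(times):
        # The lock makes the read-modify-write sequence atomic: no other
        # thread can observe counter mid-update or interleave its own update.
        with lock:
            counter += amount

threads = [threading.Thread(target=deposit, args=(1, 100_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: every update completed; none were lost or partial
```

Without the lock, two threads could read the same value of counter and each write back its own increment, silently losing one of the updates, which is exactly the partially observed state that atomic execution prevents.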

ANTI-TAMPER

Anti-tamper technology is designed to prevent access to sensitive information and encryption keys on a device. Anti-tamper processors, for example, store and process private or sensitive information, such as private keys or electronic money credit. The chips are designed so that the information is not accessible through external means and can be accessed only by the embedded software, which should contain the appropriate security measures, such as required authentication credentials. Some of these chips take a different approach and zero out the sensitive data if they detect penetration of their security, and some can even do this with no power. It also should not be possible for unauthorized persons to access and change the configuration of any devices, so additional measures should be followed to prevent this. Tampering includes defacing, damaging, or changing the configuration of a device. Applications should use integrity verification programs to look for evidence of data tampering, errors, and omissions.

SELF-ENCRYPTING DRIVES

Self-encrypting drives do exactly as the name implies: they encrypt themselves without any user intervention. The process is so transparent to the user that the user may not even be aware the encryption is occurring. Each drive uses a unique and random data encryption key (DEK). When data is written to the drive, it is encrypted, and when the data is read from the drive, it is decrypted, as shown in Figure 10-5.

Figure 10-5 Self-encrypting drive

TRUSTED FIRMWARE UPDATES

Hardware and firmware vulnerabilities are expected to become an increasing target for sophisticated attackers. While typically successful only when mounted by the skilled hands of a nation-state or advanced persistent threat (APT) group, an attack on hardware and firmware can be devastating because this firmware forms the platform for the entire device. Firmware includes any type of instructions stored in nonvolatile memory devices such as read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. BIOS and UEFI code are the most common examples of firmware.

Computer BIOS doesn’t go bad; however, it can become out of date or contain bugs. In the case of a bug, an upgrade will correct the problem. An upgrade may also be indicated when the BIOS doesn’t support some component that you would like to install, such as a larger hard drive or a different type of processor. Today’s BIOS is typically written to an EEPROM chip and can be updated through the use of software. Each manufacturer has its own method for accomplishing this; check the manufacturer’s documentation for complete details. Regardless of the exact procedure used, the update process is referred to as flashing the BIOS: the old instructions are erased from the EEPROM chip, and the new instructions are written to the chip. Firmware can be updated by using an update utility from the motherboard vendor. In many cases, the steps are as follows.

Step 1. Download the update file to a flash drive.
Step 2. Insert the flash drive and reboot the machine.
Step 3. Use the specified key sequence to enter the UEFI/BIOS setup.
Step 4. If necessary, disable Secure Boot.
Step 5. Save the changes and reboot again.
Step 6. Re-enter the CMOS settings.
Step 7. Choose the boot options and boot from the flash drive.
Step 8. Follow the specific directions with the update to locate the upgrade file on the flash drive.
Step 9. Execute the file (usually by typing flash).
Step 10. While the update is completing, ensure that you maintain power to the device.

The key to trusted firmware updates is contained in Step 1. Only obtain firmware updates from the vendor directly. Never use a third-party facilitator for this. Also make sure you verify the hash value that comes along with the update to ensure that it has not been altered since its creation.
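The hash check that accompanies Step 1 can be sketched in a few lines. The file name and digest in the usage comment are placeholders; a real update would use the SHA-256 value published by the motherboard vendor:

```python
import hashlib

def verify_firmware_hash(firmware_path: str, published_sha256: str) -> bool:
    """Return True if the file's SHA-256 digest matches the vendor's
    published value. The file is read in chunks so large firmware
    images do not exhaust memory."""
    digest = hashlib.sha256()
    with open(firmware_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest().lower() == published_sha256.strip().lower()

# Hypothetical usage (placeholder file name and digest):
# verify_firmware_hash("bios_update.bin", "9f86d081884c7d65...")
```

If the function returns False, the image has been corrupted or altered in transit and should not be flashed.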

MEASURED BOOT AND ATTESTATION

Attestation is the process of ensuring, or attesting to the fact, that a piece of software or firmware has integrity, or that it has not been altered from its original state. It is used in several boot methods to check all elements used in the boot process to ensure that malware has not altered the files or introduced new files into the process. Let’s look at some of these Secure Boot methods.

Measured Boot, also known as Secure Boot, is a term that applies to several technologies that follow the Secure Boot standard. Its implementations include Windows Secure Boot, measured launch, and Integrity Measurement Architecture (IMA). Figure 10-6 shows the three main actions related to Secure Boot in Windows, which are described in the following list:

1. The firmware verifies all UEFI executable files and the OS loader to be sure they are trusted.
2. Windows boot components verify the signature on each component to be loaded. Any untrusted components are not loaded and trigger remediation.
3. The signatures on all boot-critical drivers are checked as part of Secure Boot verification in Winload (the Windows Boot Loader) and by the Early Launch Anti-Malware driver.

Figure 10-6 Secure Boot

The disadvantage is that systems that ship with UEFI Secure Boot enabled do not allow the installation of any other operating system or the running of any live Linux media.

Measured Launch

A measured launch is a launch in which the software and platform components have been identified, or “measured,” using cryptographic techniques. The resulting values are used at each boot to verify trust in those components. A measured launch is designed to prevent attacks on these components (system and BIOS code) or at least to identify when these components have been compromised. It is part of Intel TXT. TXT functionality is leveraged by software vendors including HyTrust, PrivateCore, Citrix, and VMware.

An application of measured launch is Measured Boot by Microsoft in Windows 10 and Windows Server 2019. It creates a detailed log of all components that loaded before the anti-malware software. This log can be used both to identify malware on the computer and to maintain evidence of boot component tampering. One possible disadvantage of measured launch is potential slowing of the boot process.

Integrity Measurement Architecture

Another approach that attempts to create and measure the runtime environment is an open source trusted computing component called Integrity Measurement Architecture (IMA), mentioned earlier in this chapter. IMA creates a list of components and anchors the list to the TPM chip. It can use the list to attest to the system’s runtime integrity.
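The measure-and-compare idea behind these attestation schemes can be sketched as follows. The component names and image bytes are hypothetical, and a real implementation anchors the measurements in TPM platform configuration registers rather than in a Python dictionary:

```python
import hashlib

# Hypothetical trusted list mapping each boot component to its expected
# SHA-256 measurement (illustrative values, computed here for the demo).
TRUSTED_MEASUREMENTS = {
    "bootloader": hashlib.sha256(b"bootloader v2 image").hexdigest(),
    "kernel": hashlib.sha256(b"kernel 5.10 image").hexdigest(),
}

def attest(component: str, image: bytes) -> bool:
    """Measure a component and compare the result to the trusted list."""
    measured = hashlib.sha256(image).hexdigest()
    return TRUSTED_MEASUREMENTS.get(component) == measured

print(attest("kernel", b"kernel 5.10 image"))  # True: measurement matches
print(attest("kernel", b"kernel + rootkit"))   # False: tampering detected
```

Because any change to a component's bytes changes its hash, a single altered driver or loader fails the comparison, which is how malware introduced into the boot chain is detected.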

BUS ENCRYPTION

The CPU is connected to an address bus. Memory and I/O devices recognize this address bus. These devices can then communicate with the CPU, read requested data, and send it to the data bus. Bus encryption protects the data traversing these buses. Bus encryption is used by newer Microsoft operating systems to protect certificates, BIOS, passwords, and program authenticity. Bus encryption is necessary not only to prevent tampering with encrypted instructions that may be easily discovered on a data bus or during data transmission, but also to prevent discovery of decrypted instructions that may reveal security weaknesses an intruder can exploit.

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 10-2 lists a reference of these key topics and the page numbers on which each is found.

Table 10-2 Key Topics in Chapter 10

Key Topic Element | Description | Page Number
Bulleted list | Hardware RoTs | 298
Bulleted list | TPM contents | 300
Figure 10-1 | vTPM possible solution 1 | 301
Figure 10-2 | vTPM possible solution 2 | 301
Bulleted list | Functions of an HSM | 302
Bulleted list | Drawbacks to an HSM | 302
Section | Security features of eFuse | 303
Figure 10-3 | UEFI operations | 304
Bulleted list | Methods used to provide hardware assurance | 305
Bulleted list | NX and XN bits | 307
Step list | Steps to updating firmware | 309
Figure 10-6 | Secure Boot | 310

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

Roots of Trust (RoTs)
Trusted Platform Module (TPM)
virtual TPM (vTPM)
hardware security module (HSM)
microSD HSM
eFuse
Unified Extensible Firmware Interface (UEFI)
Secure Boot
attestation
Trusted Foundry
secure processing
Trusted Execution
secure enclave
processor security extensions
atomic execution
anti-tamper technology
self-encrypting drives
Measured Boot
bus encryption

REVIEW QUESTIONS

1. RoTs need to be exposed by the operating system to applications through an open ___________.
2. List at least one of the contents of a TPM chip.
3. Match the following terms with their definitions.

Terms | Definitions
Virtual TPM | An appliance that safeguards and manages digital keys used with strong authentication and provides crypto processing
HSM | Allows for the dynamic real-time reprogramming of computer chips
eFuse | A more advanced interface than traditional BIOS
UEFI | A software object that performs the functions of a TPM chip

4. _______________ requires that all boot loader components (e.g., OS kernel, drivers) attest to their identity (digital signature) and the attestation is compared to the trusted list.

5. List the Intel example of the implementation of processor security extensions.
6. Match the following terms with their definitions.

Terms | Definitions
Firmware | Using synchronization mechanisms to make sure that the operation is seen, from any other thread, as a single operation
Atomic execution | Any type of instructions stored in non-volatile memory devices such as read-only memory (ROM)
Measured Boot | Used by newer Microsoft operating systems to protect certificates, BIOS, passwords, and program authenticity
Bus encryption | Process where the firmware verifies all UEFI executable files and the OS loader to be sure they are trusted

7. _____________ creates a list of components and anchors the list to the TPM chip. It can use the list to attest to the system’s runtime integrity.
8. What is the disadvantage of systems that ship with UEFI Secure Boot enabled?
9. Match the following terms with their definitions.

Terms | Definitions
NX bit | Used to encrypt self-encrypting drives
Random data encryption key (DEK) | Method for specifying areas of memory that cannot be used for execution
XN bit | A collection of features that is used to verify the integrity of the system and implement security policies, which together can be used to enhance the trust level of the complete system
Trusted Execution (TE) | Technology used in CPUs to segregate areas of memory for use by either storage of processor instructions (code) or storage of data

10. The traditional BIOS has been replaced with the ____________________.

Chapter 11

Analyzing Data as Part of Security Monitoring Activities

This chapter covers the following topics related to Objective 3.1 (Given a scenario, analyze data as part of security monitoring activities) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

- Heuristics: Discusses how the heuristics process works.
- Trend analysis: Covers the use of trend data.
- Endpoint: Topics include malware, memory, system and application behavior, file system, and user and entity behavior analytics (UEBA).
- Network: Covers URL and DNS analysis, flow analysis, and packet and protocol analysis.
- Log review: Includes event logs, Syslog, firewall logs, web application firewall (WAF), proxy, and intrusion detection system (IDS)/intrusion prevention system (IPS).
- Impact analysis: Compares organization impact vs. localized impact and immediate vs. total impact.
- Security information and event management (SIEM) review: Discusses rule writing, known-bad Internet Protocol (IP), and the dashboard.
- Query writing: Explains string search, scripting, and piping.
- E-mail analysis: Examines malicious payload, DomainKeys Identified Mail (DKIM), Domain-based Message Authentication, Reporting, and Conformance (DMARC), Sender Policy Framework (SPF), phishing, forwarding, digital signature, e-mail signature block, embedded links, impersonation, and header.

Security monitoring activities generate a significant (maybe even overwhelming) amount of data. Identifying what is relevant and what is not requires that you not only understand the various data formats that you encounter, but also recognize data types and activities that indicate malicious activity. This chapter explores the data analysis process.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these nine self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 11-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 11-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Questions
Heuristics | 1
Trend Analysis | 2
Endpoint | 3
Network | 4
Log Review | 5
Impact Analysis | 6
Security Information and Event Management (SIEM) Review | 7
Query Writing | 8
E-mail Analysis | 9

1. Which of the following determines the susceptibility of a system to a particular threat or risk using decision rules or weighing methods?
   1. Heuristics
   2. Trend analysis
   3. SPF
   4. Regression analysis

2. Which of the following is not an example of utilizing trend analysis?
   1. An increase in the use of a SQL server, indicating the need to increase resources on the server
   2. The identification of threats based on behavior that typically accompanies such threats
   3. A cessation in traffic bound for a server providing legacy services, indicating a need to decommission the server
   4. An increase in password resets, indicating a need to revise the password policy

3. Which of the following discusses implementing endpoint protection platforms (EPPs)?
   1. IEC 270017
   2. FIPS 120
   3. NIST SP 800-128
   4. PCI DSS

4. Which of the following is a free online service for testing and analyzing URLs, helping with identification of malicious content on websites?
   1. URLVoid
   2. URLSec
   3. SOA
   4. urlQuery

5. Which of the following is a protocol that can be used to collect logs from devices and store them in a central location?
   1. Syslog
   2. DNSSec
   3. URLQuery
   4. SMTP

6. When you are determining what role the quality of the response played in the severity of the issue, what type of analysis are you performing?
   1. Trend analysis
   2. Impact analysis
   3. Log analysis
   4. Reverse engineering

7. Which type of SIEM rule is typically used in worm/malware outbreak scenarios?
   1. Cause and effect
   2. Trending
   3. Transitive or tracking
   4. Single event

8. Which of the following is used to look within a log file or data stream and locate any instances of a combination of characters?
   1. Script
   2. Pipe
   3. Transitive search
   4. String search

9. Which of the following enables you to verify the source of an e-mail by providing a method for validating a domain name identity that is associated with a message through cryptographic authentication?
   1. DKIM
   2. DNSSec
   3. IPsec
   4. AES

FOUNDATION TOPICS

HEURISTICS

When analyzing security data, sometimes it is difficult to see the forest for the trees. Using scripts, algorithms, and other processes to assist in looking for the information that really matters makes the job much easier and ultimately more successful.

Heuristics is a type of analysis that determines the susceptibility of a system to a particular threat/risk by using decision rules or weighing methods. Decision rules are preset to allow the system to make decisions, and weighing rules are used within the decision rules to enable the system to make value judgments among options. Heuristics is often utilized by antivirus software to identify threats that signature analysis can’t discover because the threats either are too new to have been analyzed (called zero-day threats) or are multipronged attacks that are constructed in such a way that existing signatures do not identify them. Many IDS/IPS solutions also can use heuristics to identify threats.
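The decision-rule and weighing idea can be sketched as follows. The behaviors, weights, and threshold are invented for illustration and are not drawn from any real antivirus engine:

```python
# Hypothetical weighing method: each observed behavior contributes a preset
# number of points, and the sample is flagged when the total crosses a
# threshold chosen by the analyst.
WEIGHTS = {
    "modifies_boot_sector": 50,
    "disables_antivirus": 30,
    "contacts_known_bad_ip": 40,
    "writes_temp_files": 10,
}
THRESHOLD = 60  # preset decision rule (assumption for this sketch)

def heuristic_score(observed: set) -> int:
    """Sum the weights of every recognized behavior in the sample."""
    return sum(WEIGHTS[b] for b in observed if b in WEIGHTS)

def is_suspicious(observed: set) -> bool:
    return heuristic_score(observed) >= THRESHOLD

print(is_suspicious({"writes_temp_files"}))                           # False
print(is_suspicious({"modifies_boot_sector", "disables_antivirus"}))  # True
```

Because the rule scores behavior rather than matching a stored signature, it can flag a sample no signature database has seen yet, which is why heuristics are useful against zero-day threats.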

TREND ANALYSIS

In many cases, the sheer amount of security data that is generated by the various devices located throughout our environments makes it difficult to see what is going on. When this same raw data is presented to us in some sort of visual format, it becomes somewhat easier to discern patterns and trends. Aggregating the data and graphing it makes spotting a trend much easier. Trend analysis focuses on the long-term direction in the increase or decrease in a particular type of traffic or in a particular behavior in the network. Some examples include the following:

- An increase in the use of a SQL server, indicating the need to increase resources on the server
- A cessation in traffic bound for a server providing legacy services, indicating a need to decommission the server
- An increase in password resets, indicating a need to revise the password policy

Many vulnerability scanning tools include a preconfigured filter for scan results that both organizes vulnerabilities found by severity and charts the trend (up or down) for each severity level. For example, suppose you were interested in getting a handle on the relative breakdown of security events between your Windows devices and your Linux devices. Most tools that handle this sort of thing can not only aggregate all events of a certain type but also graph them over time. Figure 11-1 shows examples of such graphs.

Figure 11-1 Trend Analysis
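The aggregate-then-chart step can be sketched as follows; the events and dates are invented for illustration, and a real tool would read them from device logs:

```python
from collections import Counter
from datetime import date

# Hypothetical security events: (date, event type).
events = [
    (date(2020, 6, 1), "password_reset"),
    (date(2020, 6, 1), "password_reset"),
    (date(2020, 6, 2), "password_reset"),
    (date(2020, 6, 2), "password_reset"),
    (date(2020, 6, 2), "password_reset"),
    (date(2020, 6, 3), "password_reset"),
    (date(2020, 6, 3), "password_reset"),
    (date(2020, 6, 3), "password_reset"),
    (date(2020, 6, 3), "password_reset"),
]

# Aggregate one event type into a per-day count, ordered by date.
daily = Counter(day for day, etype in events if etype == "password_reset")
series = [daily[d] for d in sorted(daily)]
print(series)  # [2, 3, 4]

# Crude trend direction: compare the first and last points of the series.
trend = "up" if series[-1] > series[0] else "down or flat"
print(trend)  # up
```

The resulting series is what a charting tool would plot; here the upward movement in password resets is the kind of trend that might prompt a review of the password policy.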

ENDPOINT

Many of the dangers to our environments come through the endpoints. Endpoint security is a field of security that attempts to protect individual endpoints in a network by staying in constant contact with these individual endpoints from a central location. It typically works on a client/server model in that each endpoint has software that communicates with the software on the central server. The functionality provided can vary. In its simplest form, endpoint security includes monitoring and automatic updating and configuration of security patches and personal firewall settings. In more advanced systems, endpoint security might include examination of the system each time it connects to the network. This examination would ensure that all security patches are up to date. In even more advanced scenarios, endpoint security could automatically provide remediation to the computer by installing missing security patches. In either case, the computer would not be allowed to connect to the network until the problem is resolved, either manually or automatically. Other measures include using device or drive encryption, enabling remote management capabilities (such as remote wipe and remote locate), and implementing device ownership policies and agreements so that the organization can manage or seize the device. Endpoint security can mitigate issues such as the following:

- Malware of all types
- Data exfiltration

NIST SP 800-128 discusses implementing endpoint protection platforms (EPPs). According to NIST SP 800-128, endpoints (that is, laptops, desktops, mobile devices) are a fundamental part of any organizational system. Endpoints are an important source of connecting end users to networks and systems, and are also a major source of vulnerabilities and a frequent target of attackers looking to penetrate a network. User behavior is difficult to control and hard to predict, and user actions, whether clicking on a link that executes malware or changing a security setting to improve the usability of the endpoint, frequently allow exploitation of vulnerabilities.

Commercial vendors offer a variety of products to improve security at the endpoints of a network. These EPPs include the following:

- Anti-malware: Anti-malware applications are part of the common secure configurations for system components. Anti-malware software employs a wide range of signatures and detection schemes, automatically updates signatures, disallows modification by users, runs scans on a frequently scheduled basis, has an auto-protect feature set to scan automatically when a user action is performed (for example, opening or copying a file), and may provide protection from zero-day attacks. For platforms for which anti-malware software is not available, other forms of anti-malware such as rootkit detectors may be employed.
- Personal firewalls: Personal firewalls provide a wide range of protection for host machines including restriction on ports and services, control against malicious programs executing on the host, control of removable devices such as USB devices, and auditing and logging capability.
- Host-based intrusion detection and prevention system (IDPS): Host-based IDPS is an application that monitors the characteristics of a single host and the events occurring within that host to identify and stop suspicious activity. This is distinguished from a network-based IDPS, which is an intrusion detection and prevention system that monitors network traffic for particular network segments or devices and analyzes the network and application protocol activity to identify and stop suspicious activity.
- Restrict the use of mobile code: Organizations should exercise caution in allowing the use of mobile code such as ActiveX, Java, and JavaScript. An attacker can easily attach a script to a URL in a web page or e-mail that, when clicked, executes malicious code within the computer’s browser.

Security professionals may also want to read NIST SP 800-111, which provides guidance on storage encryption technologies for end-user devices. In addition, NIST provides checklists for implementing different operating systems according to the U.S. Government Configuration Baseline (USGCB).

Malware

Malicious software (or malware) is any software that harms a computer, deletes data, or takes actions the user did not authorize. The category includes a wide array of malware types, including some you have probably heard of, such as viruses, and many you might not have heard of but should be aware of. The malware types that you need to understand include the following:

Virus
Boot sector virus
Parasitic virus
Stealth virus
Polymorphic virus
Macro virus
Multipartite virus
Worm
Trojan horse
Logic bomb
Spyware/adware
Botnet
Rootkit
Ransomware

Virus

A virus is a self-replicating program that infects software. It uses a host application to reproduce and deliver its payload and typically attaches itself to a file. It differs from a worm in that it usually requires some action on the part of the user to help it spread to other computers. The following list briefly describes various virus types:

Boot sector: This type of virus infects the boot sector of a computer and either overwrites files or installs code into the sector so that the virus initiates at startup.

Parasitic: This type of virus attaches itself to a file, usually an executable file, and then delivers the payload when the program is used.

Stealth: This type of virus hides the modifications that it is making to the system to help avoid detection.

Polymorphic: This type of virus makes copies of itself and then makes changes to those copies. It does this in hopes of avoiding detection by antivirus software.

Macro: This type of virus infects macros written in Word Basic, Visual Basic, or VBScript that are used to automate functions. Macro viruses infect Microsoft Office files and are easy to create because the underlying language is simple and intuitive to apply. They are especially dangerous in that they can infect any system running the host application. They also can be transported between different operating systems because the languages are platform independent.

Multipartite: Originally, these viruses could infect both program files and boot sectors. This term now means that the virus can infect more than one type of object or can infect in more than one way.

File or system infector: File infectors infect program files, and system infectors infect system program files.

Companion: This type of virus does not physically touch the target file. It is also referred to as a spawn virus.

E-mail: This type of virus specifically uses an e-mail system to spread itself because it is aware of the e-mail system functions. Knowledge of the functions allows this type of virus to take advantage of all e-mail system capabilities.

Script: This type of virus is a stand-alone file that can be executed by an interpreter.

Worm

A worm is a type of malware that can spread without the assistance of the user. It is a small program that, like a virus, is used to deliver a payload. One way to help mitigate the effects of worms is to place limits on sharing, writing, and executing programs.

Trojan Horse

A Trojan horse is a program or rogue application that appears to or is purported to do one thing but actually does another when executed. For example, what appears to be a screensaver program might really be a Trojan horse. When the user unwittingly uses the program, it executes its payload, which could be to delete files or create backdoors. Backdoors are alternative ways to access the computer undetected in the future.

One type of Trojan targets and attempts to access and make use of smart cards. A countermeasure to prevent this attack is to use "single-access device driver" architecture. Using this approach, the operating system allows only one application to have access to the serial device (and thus the smart card) at any given time. Another way to prevent the attack is by using a smart card that enforces a "one private key usage per PIN entry" policy model. In this model, the user must enter her PIN every single time the private key is used, and therefore the Trojan horse would not have access to the key.

Logic Bomb

A logic bomb is a type of malware that executes when a particular event takes place. For example, that event could be a time of day or a specific date, or it could be the first time you open notepad.exe. Some logic bombs execute when forensics are being undertaken, and in that case the bomb might delete all digital evidence.

Spyware/Adware

Adware doesn't actually steal anything, but it tracks your Internet usage in an attempt to tailor ads and junk e-mail to your interests. Spyware also tracks your activities and can also gather personal information that could lead to identity theft. In some cases, spyware can even direct the computer to install software and change settings.

Botnet

A bot is a type of malware that installs itself on large numbers of computers through infected e-mails, downloads from websites, Trojan horses, and shared media. After it's installed, the bot has the ability to connect back to the hacker's computer. After that, the hacker's server controls all the bots located on these machines. At a set time, the hacker might direct the bots to take some action, such as directing all the machines to send out spam messages, mount a DoS attack, or perform phishing or any number of malicious acts. The collection of computers that act together is called a botnet, and the individual computers are called zombies. The attacker that manages the botnet is often referred to as the botmaster. Figure 11-2 shows this relationship.

FIGURE 11-2 Botnet

Rootkit

A rootkit is a set of tools that a hacker can use on a computer after he has managed to gain access and elevate his privileges to administrator. It gets its name from the root account, the most powerful account in Unix-based operating systems. The rootkit tools might include a backdoor for the hacker to access. This is one of the hardest types of malware to remove, and in many cases only a reformat of the hard drive will completely remove it. The following are some of the actions a rootkit can take:

Install a backdoor
Remove all entries from the security log (log scrubbing)
Replace default tools with compromised versions (Trojaned programs)
Make malicious kernel changes

Ransomware

Ransomware is malware that prevents or limits users from accessing their systems. It is called ransomware because it forces its victims to pay a ransom through certain online payment methods to regain access to their systems or to get their data back.

Reverse Engineering

Reverse engineering is a term that has been around for some time. Generically, it means taking something apart to discover how it works and perhaps to replicate it. In cybersecurity, it is used to analyze both hardware and software for various reasons, such as to do the following:

Discover how malware functions
Determine whether malware is present in software
Locate software bugs
Locate security problems in hardware

The following sections look at the role of reverse engineering in cybersecurity analysis.

Isolation/Sandboxing

When conducting reverse engineering, how can you analyze malware without suffering the effects of the malware? The answer is to place the malware where you can safely probe and analyze it. This is done by isolating, or sandboxing, the malware. This process is covered more fully in the "Sandboxing" section of Chapter 12, "Implementing Configuration Changes to Existing Controls to Improve Security."

Software/Malware

Software of any type can be checked for integrity to ensure that it has not been altered since its release. Checking for integrity is one of the ways you can tell when a file has been corrupted (or perhaps replaced entirely) with malware. Two main methods are used in this process:

Fingerprinting/hashing: Fingerprinting, or hashing, is the process of using a hashing algorithm to reduce a large document or file to a character string that can be used to verify the integrity of the file (that is, whether the file has changed in any way). To be useful, a hash value must have been computed at a time when the software or file was known to have integrity (for example, at release time). At any time thereafter, the software file can be checked for integrity by calculating a new hash value and comparing it to the value from the initial calculation. If the character strings do not match, a change has been made to the software. Fingerprinting/hashing has been used for some time to verify the integrity of software downloads from vendors. The vendor provides the hash value and specifies the hash algorithm, and the customer recalculates the hash value after the download. If the result matches the value from the vendor, the customer knows the software has integrity and is safe. Anti-malware products also use this process to identify malware. The problem is that malware creators know this, and so they are constantly making small changes to malicious code to enable the code to escape detection through the use of hashes or signatures. When they make a small change, anti-malware products can no longer identify the malware, and they won't be able to until a new hash or signature is created by the anti-malware vendor. For this reason, some vendors are beginning to use "fuzzy" hashing, which looks for hash values that are similar but not exact matches.

Decomposition: Decomposition is the process of breaking something down to discover how it works. When applied to software, it is the process of discovering how the software works, perhaps who created it, and, in some cases, how to prevent the software from performing malicious activity. When used to assess malware, decomposition can be done two ways: statically and dynamically.
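The fingerprinting/hashing workflow described above can be sketched in a few lines of Python using the standard-library hashlib module. The function names are my own, and a real vendor would publish both the algorithm (here SHA-256) and the expected digest alongside the download:

```python
import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Recompute a file's SHA-256 digest, reading in chunks so large files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, vendor_hash):
    """True if the recomputed digest matches the vendor-published value."""
    return sha256_of_file(path) == vendor_hash.strip().lower()
```

A single flipped bit changes the digest completely, which is exactly why trivially modified malware evades plain hash matching and why "fuzzy" hashing was introduced.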
When static (manual) analysis is used, it takes hours per file, uses tools called disassemblers, and requires advanced expertise. Time is often wasted on repetitive sample unpacking and indicator extraction tasks. With dynamic analysis tools, an automated static analysis engine is used to identify, de-archive, de-obfuscate, and unpack the underlying object structure. Then proactive threat indicators (PTIs) are extracted from the unpacked files. A rules engine classifies the results to calculate the threat level and to route the extracted files for further analysis. Finally, the extracted files are repaired to enable further extraction or analysis with a sandbox, decompiler, or debugger. While the end result may be the same, these tools are much faster and require less skill than manual (static) analysis.

Reverse Engineering Tools

Reverse engineering tools are indicated in two situations: when a sample of zero-day malware has been safely sandboxed and must be analyzed, and when a compromised host has been safely isolated and you want to identify the details of the breach to be better prepared in the future. The Infosec Institute recommends the following as the top reverse engineering tools for cybersecurity professionals (as of January 2019):

Apktool: This third-party tool for reverse engineering can decode resources to nearly original form and re-create them after making some adjustments.

dex2jar: This lightweight API is designed to read the Dalvik Executable (.dex/.odex) format. It is used with Android and Java .class files.

diStorm3: This tool is lightweight, easy to use, and has a fast decomposer library. It disassembles instructions in 16-, 32-, and 64-bit modes and is the fastest disassembler library. The source code is very clean, readable, portable, and platform independent.

edb-debugger: This is the Linux equivalent of the famous OllyDbg debugger on the Windows platform. One of the main goals of this debugger is modularity.

Jad Debugger: This is the most popular Java decompiler ever written. It is a command-line utility written in C++.

Javasnoop: This Aspect Security tool allows security testers to test the security of Java applications easily.

OllyDbg: This is a 32-bit, assembler-level analyzing debugger for Microsoft Windows. Its emphasis on binary code analysis makes it particularly useful in cases where the source is unavailable.

Valgrind: This suite is for debugging and profiling Linux programs.

Memory

A computing system needs somewhere to store information, on both a long-term and a short-term basis. There are two types of storage locations: memory, for temporary storage needs, and long-term storage media. Information can be accessed much faster from memory than from long-term storage, which is why the most recently used instructions or information is typically kept in cache memory for a short period of time; this ensures that the second and subsequent accesses will be faster than returning to long-term storage. Computers can have both random-access memory (RAM) and read-only memory (ROM). RAM is volatile, meaning the information must continually be refreshed and will be lost if the system shuts down.

Memory Protection

In an information system, memory and storage are among the most important resources. Damaged or corrupt data in memory can cause the system to stop functioning. Data in memory can be disclosed and therefore must be protected. Memory does not, by itself, isolate running processes and threads from one another's data. Security professionals must use processor states, layering, process isolation, abstraction, hardware segmentation, and data hiding to help keep data isolated.

Most processors support two processor states: supervisor state (or kernel mode) and problem state (or user mode). In supervisor state, the highest privilege level on the system is used so that the processor can access all the system hardware and data. In problem state, the processor limits access to system hardware and data. Processes running in supervisor state are isolated from the processes that are not running in that state; supervisor-state processes should be limited to only core operating system functions.

A security professional can use layering to organize programming into separate functions that interact in a hierarchical manner. In most cases, each layer has access only to the layers directly above and below it. Ring protection is the most common implementation of layering, with the inner ring (ring 0) being the most privileged and the outer ring (ring 3) the least privileged. The OS kernel usually runs in ring 0, and user applications usually run in ring 3.

A security professional can isolate processes by providing a separate memory address space for each process. Processes are unable to access address space allotted to another process. Naming distinctions and virtual mapping are used as part of process isolation.

Hardware segmentation works like process isolation: it prevents access to information that belongs to a higher security level. However, hardware segmentation enforces the policies using physical hardware controls rather than the operating system's logical process isolation. Hardware segmentation is rare and is usually restricted to governmental use, although some organizations may choose to use this method to protect private or confidential data. Data hiding prevents data at one security level from being seen by processes operating at other security levels.

Secured Memory

Memory can be divided into multiple partitions. Based on the nature of the data in a partition, the partition can be designated as security sensitive or non-security sensitive. In a security breach (such as tamper detection), the contents of a security-sensitive partition can be erased by the controller itself, while the contents of the non-security-sensitive partition remain unchanged (see Figure 11-3).

Runtime Data Integrity Check

The runtime data integrity check process ensures the integrity of the peripheral memory contents during runtime execution. The secure booting sequence generates a hash value of the contents of individual memory blocks and stores the reference values in secured memory. In runtime mode, the integrity checker reads the contents of a memory block, waits for a specified period, and then reads the contents of another memory block. In the process, the checker also computes the hash values of the memory blocks and compares them with the contents of the reference file generated during boot time. In the event of a mismatch between two hash values, the checker reports a security intrusion to a central unit that decides the action to be taken based on the security policy, as shown in Figure 11-4.
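The boot-time hashing and runtime comparison just described can be sketched in Python. Here memory blocks are simulated as byte strings rather than read from real peripheral memory, so this is an illustration of the scheme, not firmware code:

```python
import hashlib

def build_reference(blocks):
    """Secure boot step: hash every memory block and keep the reference values."""
    return [hashlib.sha256(block).hexdigest() for block in blocks]

def integrity_scan(blocks, reference):
    """Runtime step: rehash each block and report the indices that no longer match."""
    return [i for i, block in enumerate(blocks)
            if hashlib.sha256(block).hexdigest() != reference[i]]
```

A real checker would pace its reads and hand any mismatch to a central policy unit, as the text describes; here a non-empty return value plays the role of the intrusion report.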

FIGURE 11-3 Secure Memory

FIGURE 11-4 Runtime Data Integrity Check

Memory Dumping, Runtime Debugging

Many penetration testing tools perform an operation called a core dump or memory dump. Applications store information in memory, and this information can include sensitive data, passwords, usernames, and encryption keys. Hackers can use memory-reading tools to analyze the entire memory content used by an application. Any vulnerability testing should take this into consideration and utilize the same tools to identify any issues in the memory of an application. The following are some examples of memory-reading tools:

Memdump: This free tool runs on Windows, Linux, and Solaris. It simply creates a bit-by-bit copy of the volatile memory on a system.

KnTTools: This memory acquisition and analysis tool, used with Windows systems, captures physical memory and stores it to a removable drive or sends it over the network to be archived on a separate machine.

FATKit: This popular memory forensics tool automates the process of extracting interesting data from volatile memory. FATKit helps an analyst visualize the objects it finds to aid in understanding the data that the tool was able to find.

Runtime debugging, on the other hand, is the process of using a programming tool to not only identify syntactic problems in code but also discover weaknesses that can lead to memory leaks and buffer overflows. Runtime debugging tools operate by examining and monitoring the use of memory. These tools are specific to the language in which the code was written. Table 11-2 shows examples of runtime debugging tools and the operating systems and languages for which they can be used.

Table 11-2 Runtime Debugging Tools

Tool               Operating Systems          Languages
AddressSanitizer   Linux, Mac                 C, C++
Deleaker           Windows (Visual Studio)    C, C#
Software Verify    Windows                    .Net, C, C++, Java, JavaScript, Lua, Python, Ruby
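In the same spirit as the tools in Table 11-2, Python's standard-library tracemalloc module can flag code whose net memory use keeps growing between two runs. This is only a minimal sketch of runtime leak detection (the function names are mine), not a substitute for a full debugger:

```python
import tracemalloc

def leak_suspects(workload, top=3):
    """Run a workload twice and return the source lines whose
    allocations grew the most between the two passes."""
    tracemalloc.start()
    workload()                        # warm-up pass
    before = tracemalloc.take_snapshot()
    workload()                        # a leak keeps growing on the second pass
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    stats = after.compare_to(before, "lineno")
    return [s for s in stats if s.size_diff > 0][:top]

_leaked = []  # module-level list simulating a leak: nothing ever releases it
def leaky_workload():
    _leaked.extend(bytearray(1024) for _ in range(100))
```

Calling leak_suspects(leaky_workload) reports the line allocating the bytearrays as the top grower, which is the same signal a commercial leak detector surfaces with far more polish.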

Memory dumping can help determine what a hacker might be able to learn if she were able to cause a memory dump. Runtime debugging is the correct approach for discovering syntactic problems in an application's code or for identifying other issues, such as memory leaks or potential buffer overflows.

System and Application Behavior

Sometimes an application or system will provide evidence that something is not quite right. With proper interpretation, these behaviors can be used to alert one to the presence of malware or an ongoing attack. It is useful to know what behavior is normal and what is not.

Known-good Behavior

Describing abnormal behavior is perhaps simpler than describing normal behavior, but it is possible to develop a performance baseline for a system that can be used to identify operations that fall outside of the normal. A baseline is a reference point that is defined and captured to be used as a future reference. While capturing baselines is important, using baselines to assess the security state is just as important. Even the most comprehensive baselines are useless if they are never used. Baselines alone, however, cannot help you if you do not have current benchmarks for comparison. A benchmark, which is a point of reference later used for comparison, captures the same data as a baseline and can even be used as a new baseline should the need arise. A benchmark is compared to the baseline to determine whether any security or performance issues exist. Also, security professionals should keep in mind that monitoring performance and capturing baselines and benchmarks will affect the performance of the systems being monitored.

Capturing both a baseline and a benchmark at the appropriate time is important. Baselines should be captured when a system is properly configured and fully updated. Also, baselines should be assessed over a longer period of time, such as a week or a month rather than just a day or an hour. When updates occur, new baselines should be captured and compared to the previous baselines. At that time, adopting new baselines based on the most recent data might be necessary.

Let's look at an example. Suppose that your company's network has a security and performance baseline for each day of the week. When the baselines were first captured, you noticed that much more authentication occurs on Thursdays than on any other day of the week. You were concerned about this until you discovered that members of the sales team work remotely on all days but Thursday and rarely log in to the authentication system when they are not working in the office. For their remote work, members of the sales team use their laptops and log in to the VPN only when remotely submitting orders. On Thursday, the entire sales team comes into the office and works on local computers, ensuring that orders are being processed and fulfilled as needed. The spike in authentication traffic on Thursday is fully explained by the sales team's visit. On the other hand, if you later notice a spike in VPN traffic on Thursdays, you should be concerned, because the sales team is working in the office on Thursdays and will not be using the VPN.

For software developers, understanding baselines and benchmarks also involves understanding thresholds, which ensure that security issues do not progress beyond a configured level. If software developers must develop measures to notify system administrators prior to a security incident occurring, the best method is to configure the software to send an alert, alarm, or e-mail message when specific incidents pass the threshold.

Security professionals should capture baselines over different times of day and days of the week to ensure that they can properly recognize when possible issues occur. In addition, security professionals should ensure that they are comparing benchmarks to the appropriate baseline. Comparing a benchmark from a Monday at 9 a.m. to a baseline from a Saturday at 9 a.m. may not allow you to properly assess the situation. Once you identify problem areas, you should develop a possible solution to any issue that you discover.

Anomalous Behavior

When an application is behaving strangely and not operating normally, it could be that the application needs to be reinstalled or that it has been compromised by malware in some way. While all applications occasionally have issues, persistent issues or issues that are typically not seen or have never been seen could indicate a compromised application:

Introduction of new accounts: Some applications have their own account database. In that case, you may find accounts that did not previously exist in the database, which should be a cause for alarm and investigation. Many application compromises create accounts with administrative access for the use of a malicious individual or for the processes operating on his behalf.

Unexpected output: When the output from a program is not what is normally expected, or when dialog boxes are altered or the order in which the boxes are displayed is not correct, it is an indication that the application has been altered. Reports of strange output should be investigated.

Unexpected outbound communication: Any unexpected outbound traffic should be investigated, regardless of whether it was discovered as a result of network monitoring or as a result of monitoring the host or application. With regard to the application, it can mean that data is being transmitted back to the malicious individual.

Service interruption: When an application stops functioning with no apparent problem, or when an application cannot seem to communicate in the case of a distributed application, it can be a sign of a compromised application. Any such interruptions that cannot be traced to an application, host, or network failure should be investigated.

Memory overflows: A memory overflow occurs when an application uses more memory than the operating system has assigned to it. In some cases, it simply causes the system to run slowly as the application uses more and more memory. In other cases, the issue is more serious. When it is a buffer overflow, the intent may be to crash the system or execute commands.
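The baseline-versus-benchmark threshold comparison described earlier can be reduced to a few lines of Python. The metric names and the 50 percent tolerance below are invented for illustration; a production monitor would tune both per metric:

```python
def threshold_alerts(baseline, benchmark, tolerance=0.5):
    """Flag any metric whose benchmark value deviates from its baseline
    by more than the configured tolerance (50% here, purely illustrative)."""
    alerts = []
    for metric, base in baseline.items():
        current = benchmark.get(metric, 0)
        if base and abs(current - base) / base > tolerance:
            alerts.append((metric, base, current))
    return alerts

# Thursday example from the text: in-office authentication is expected,
# but a VPN spike on a Thursday is anomalous.
thursday_baseline  = {"local_auth": 200, "vpn_logins": 20}
thursday_benchmark = {"local_auth": 210, "vpn_logins": 95}
```

Here threshold_alerts(thursday_baseline, thursday_benchmark) flags only vpn_logins, mirroring the sales-team scenario above.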

Exploit Techniques

Endpoints such as desktops, laptops, printers, and smartphones account for the highest percentage of devices on the network. They are therefore common targets. These devices are subject to a number of security issues, as discussed in the following sections.

Social Engineering Threats

Social engineering attacks occur when attackers use believable language to exploit user gullibility to obtain user credentials or some other confidential information. Social engineering threats that you should understand include phishing/pharming, shoulder surfing, identity theft, and dumpster diving. The best countermeasure against social engineering threats is to provide user security awareness training. This training should be required and must occur on a regular basis because social engineering techniques evolve constantly. The following are the most common social engineering threats:

Phishing/pharming: Phishing is a social engineering attack using e-mail in which attackers try to learn personal information, including credit card information and financial data. This type of attack is usually carried out by implementing a fake website that very closely resembles a legitimate website. Users enter data, including credentials, on the fake website, allowing the attackers to capture any information entered. Spear phishing is a phishing attack carried out against a specific target by learning about the target's habits and likes. Spear phishing attacks take longer to carry out than phishing attacks because of the information that must be gathered. Pharming is similar to phishing, but pharming actually pollutes the contents of a computer's DNS cache so that requests to a legitimate site are routed to an alternate site. Caution users against using any links embedded in e-mail messages, even if a message appears to have come from a legitimate entity. Users should also review the address bar any time they access a site where their personal information is required, to ensure that the site is correct and that SSL/TLS is being used, which is indicated by an HTTPS designation at the beginning of the URL.

Shoulder surfing: Occurs when an attacker watches a user enter login or other confidential data. Encourage users to always be aware of who is observing their actions. Implementing privacy screens helps ensure that data entry cannot be recorded.

Identity theft: Occurs when someone obtains personal information, including driver's license number, bank account number, and Social Security number, and uses that information to assume the identity of the individual whose information was stolen. After the identity is assumed, the attack can go in any direction. In most cases, attackers open financial accounts in the user's name. Attackers also can gain access to the user's valid accounts.

Dumpster diving: Occurs when attackers examine garbage contents to obtain confidential information. This includes personnel information, account login information, network diagrams, and organizational financial data. Organizations should implement policies for shredding documents that contain this information.

Rogue Endpoints

As if keeping up with the devices you manage were not enough, you also have to concern yourself with the possibility of rogue devices in the network. Rogue endpoints are devices that are present that you do not control or manage. In some cases, these devices are benign, as in the case of a user bringing his son's laptop to work and putting it on the network. In other cases, rogue endpoints are placed by malicious individuals.

Rogue Access Points

Rogue access points are APs that you do not control and manage. There are two types: those that are connected to your wired infrastructure and those that are not. The ones that are connected to your wired network present a danger to your wired and wireless networks. They may be placed there by your own users without your knowledge, or they may be purposefully put there by a hacker to gain access to the wired network. In either case, they allow access to your wired network. Wireless intrusion prevention system (WIPS) devices can be used to locate rogue access points and alert administrators to their presence. Wireless site surveys can also be conducted to detect such threats.

Servers

While servers account for a smaller number of devices than endpoints, they usually contain the critical and sensitive assets and perform mission-critical services for the network. Therefore, these devices receive the lion's share of attention from malicious individuals. The following are some issues that can impact any device but that are most commonly directed at servers:

DoS/DDoS: A denial-of-service (DoS) attack occurs when attackers flood a device with enough requests to degrade the performance of the targeted device. Some popular DoS attacks include SYN floods and teardrop attacks. A distributed DoS (DDoS) attack is a DoS attack that is carried out from multiple attack locations. Vulnerable devices are infected with software agents called zombies. The vulnerable devices become a botnet, which then carries out the attack. Because of the distributed nature of the attack, identifying all the attacking bots is virtually impossible. The botnet also helps hide the original source of the attack.

Buffer overflow: Buffers are portions of system memory that are used to store information. A buffer overflow occurs when the amount of data that is submitted to an application is larger than the buffer can handle. Typically, this type of attack is possible because of poorly written application or operating system code, and it can result in an injection of malicious code. To protect against this issue, organizations should ensure that all operating systems and applications are updated with the latest service packs and patches. In addition, programmers should properly test all applications to check for overflow conditions. Finally, programmers should use input validation to ensure that the data submitted is not too large for the buffer.

Mobile code: Mobile code is any software that is transmitted across a network to be executed on a local system. Examples of mobile code include Java applets, JavaScript code, and ActiveX controls. Mobile code platforms include security controls: Java implements sandboxes, and ActiveX uses digital code signatures. Malicious mobile code can be used to bypass access controls. Organizations should ensure that users understand the security concerns related to malicious mobile code. Users should only download mobile code from legitimate sites and vendors.

Emanations: Emanations are electromagnetic signals that are emitted by an electronic device. Attackers can target certain devices or transmission media to eavesdrop on communication without having physical access to the device or medium. The TEMPEST program, initiated by the United States and United Kingdom, researches ways to limit emanations and standardizes the technologies used. Any equipment that meets TEMPEST standards suppresses signal emanations using shielding material. Devices that meet TEMPEST standards usually implement an outer barrier or coating, called a Faraday cage or Faraday shield. TEMPEST devices are most often used in government, military, and law enforcement settings.

Backdoor/trapdoor: A backdoor, or trapdoor, is a mechanism implemented in many devices or applications that gives the user who uses the backdoor unlimited access to the device or application. Privileged backdoor accounts are the most common type of backdoor in use today. Most established vendors no longer release devices or applications with this security issue. You should be aware of any known backdoors in the devices or applications you manage.

Services

Services that run on both servers and workstations have identities in the security system. They possess built-in accounts, called system or service accounts, and they log on when they operate, just as users do. They also possess privileges and rights, which is why security issues arise with these accounts: they typically possess many more privileges than they actually need to perform the service. If a malicious individual or process were able to gain control of the service, the acquired rights would be significant. Therefore, it is important to apply the concept of least privilege to these services by identifying the rights the services need and limiting the services to only those rights.

A common practice has been to create a user account for the service that possesses only the rights required and set the service to log on using that account. You can do this in Windows by accessing the Log On tab in the Properties dialog box of the service, as shown in Figure 11-5. In this example, the Remote Desktop Service is set to log on as a Network Service account. To limit this account, you can create a new account either in the local machine or in Active Directory, give the account the proper permissions, and then click the Browse button, locate the account, and select it.

While this is a good approach, it involves some complications. First is the difficulty of managing the account password. If the domain in which the system resides has a policy that requires a password change after 30 days and you don't change the service account password, the service will stop running. Another complication involves the use of domain accounts. While setting a service account as a domain account eliminates the need to create an account for the service locally on each server that runs the service, it introduces a larger security risk. If that single domain service account were compromised, the account would provide access to all servers running the service.

FIGURE 11-5 Log On Tab

Fortunately, with Windows Server 2008 R2 and later systems such as Windows Server 2016 and Windows Server 2019, Microsoft introduced the concept of managed service accounts. Unlike with regular domain accounts, in which administrators must reset passwords manually, the network passwords for these accounts are reset automatically. Windows Server 2012 R2 introduced the concept of group managed service accounts, which allow servers to share the same managed service account; this was not possible with Server 2008 R2. The account password is managed by Windows Server domain controllers and can be retrieved by multiple Windows Server systems in an Active Directory environment.

File System

The file system can present some opportunities for mischief. One of the prime targets is the database server. In many ways, the database is the Holy Grail for an attacker; it is typically where the sensitive information resides. When considering database security, you need to understand the following terms:

Inference: Inference occurs when someone with access to information at one level can infer information about another level. The main mitigation technique for inference is polyinstantiation, the development of a detailed version of an object from another object using different values in the new object. It prevents low-level database users from inferring the existence of higher-level data.

Aggregation: Aggregation is the assembling or compilation of units of information at one sensitivity level such that the resultant totality of data is of a higher sensitivity level than the individual components. You might think of aggregation as a different way of achieving the same goal as inference: learning information about data on a level to which one does not have access.

Contamination: Contamination is the intermingling or mixing of data of one sensitivity or need-to-know level with that of another. Proper implementation of security levels is the best defense against these problems.

Data mining warehouse: A data mining warehouse is a repository of information from heterogeneous databases. It allows multiple sources of data not only to be stored in one place but to be organized in such a way that redundancy of data is reduced (called data normalizing). More sophisticated data mining tools are used to manipulate the data to discover relationships that may not have been apparent before. Along with the benefits they provide, data warehouses also present additional security challenges.

File Integrity Monitoring

Many times, malicious software and malicious individuals make unauthorized changes to files. In many cases these files are data files, and in other cases they are system files. While alterations to data files are undesirable, changes to system files can compromise an entire system. The solution is file integrity software that generates a hash value of each system file and verifies that hash value at regular intervals. This entire process is automated, and in some cases a corrupted system file is automatically replaced when discovered. While third-party tools such as Tripwire can do this, Windows offers System File Checker (SFC) to do the same thing. SFC is a command-line utility that checks and verifies the versions of system files on a computer. If system files are corrupted, SFC replaces the corrupted files with correct versions. The syntax for the SFC command is as follows:

SFC [switch]

The switches vary a bit between different versions of Windows. Table 11-3 lists the most common ones available for SFC.

Table 11-3 SFC Switches

/CACHESIZE=X: Sets the Windows File Protection cache size, in megabytes
/PURGECACHE: Purges the Windows File Protection cache and scans all protected system files immediately
/REVERT: Reverts SFC to its default operation
/SCANFILE (Windows 7 and Vista only): Scans a file that you specify and fixes problems if they are found
/SCANNOW: Immediately scans all protected system files
/SCANONCE: Scans all protected system files once
/SCANBOOT: Scans all protected system files every time the computer is rebooted
/VERIFYONLY: Scans protected system files and does not make any repairs or changes
/VERIFYFILE: Verifies the integrity of the specified file but does not make any repairs or changes
/OFFBOOTDIR: Does a repair of an offline boot directory
/OFFWINDIR: Does a repair of an offline Windows directory
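At its core, the hash-and-verify cycle that file integrity tools such as Tripwire automate is simple to illustrate. The following Python sketch is illustrative only (the function names are not any vendor's API): it records a known-good SHA-256 digest for each monitored file, then reports any file whose current digest no longer matches.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large system files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a known-good hash for each monitored file."""
    return {str(p): hash_file(Path(p)) for p in paths}

def verify(baseline):
    """Return the files whose current hash no longer matches the baseline."""
    return [p for p, digest in baseline.items()
            if hash_file(Path(p)) != digest]
```

A real product would run `verify` on a schedule, alert on any mismatch, and (as SFC does) optionally restore a known-good copy of the altered file.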

User and Entity Behavior Analytics (UEBA)

Behavioral analysis is another term for anomaly analysis; it observes network behaviors for anomalies. It can be implemented using combinations of the scanning types already covered, including NetFlow, protocol, and packet analysis, to create a baseline and subsequently report departures from the traffic metrics found in the baseline. One of the newer advances in this field is the development of user and entity behavior analytics (UEBA). This type of analysis focuses on user activities. Combining behavior analysis with machine learning, UEBA enhances the ability to determine which particular users are behaving oddly. An example would be a hacker who has stolen a user's credentials and is identified by the system because the activity does not match the patterns of the legitimate user.
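Commercial UEBA products apply machine learning to many behavioral signals; the idea of "flagging departures from a per-user baseline" can nonetheless be illustrated with a toy statistical sketch. Here each user's history is a list of daily event counts (hypothetical data), and a user is flagged when today's count falls several standard deviations outside that history.

```python
from statistics import mean, stdev

def flag_anomalies(baseline_counts, today_counts, threshold=3.0):
    """Flag users whose activity today deviates sharply from their history.

    baseline_counts maps user -> list of historical daily event counts;
    today_counts maps user -> today's event count.
    """
    flagged = []
    for user, history in baseline_counts.items():
        mu, sigma = mean(history), stdev(history)
        # Guard against a zero standard deviation for perfectly stable users.
        if sigma == 0:
            sigma = 1.0
        observed = today_counts.get(user, 0)
        if abs(observed - mu) / sigma > threshold:
            flagged.append(user)
    return flagged
```

A stolen-credential scenario shows up here as a user whose stable baseline (say, around 10 logon events per day) suddenly produces hundreds.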

NETWORK

Sometimes our focus is not on endpoints or on individual application behavior, but on network activity. Let's look at some types of analysis that relate to network traffic.

Uniform Resource Locator (URL) and Domain Name System (DNS) Analysis

Malicious individuals can make use of both DNS records and URLs to redirect network traffic in a way that benefits them. Also, some techniques used to shorten URLs (to make them less likely to malfunction) have resulted in the following:

Allowing spammers to sidestep spam filters, as domain names like TinyURL are automatically trusted

Preventing educated users from checking for suspect URLs by obfuscating the actual website URL

Redirecting users to phishing sites to capture sensitive personal information

Redirecting users to malicious sites loaded with drive-by droppers, just waiting to download malware

Tools that can be used to analyze URLs include the following:

urlQuery: A free online service for testing and analyzing URLs, helping with identification of malicious content on websites.

URLVoid: A free service developed by NoVirusThanks Company that allows users to scan a website address (such as google.com or youtube.com) with multiple website reputation engines and domain blacklists to facilitate the detection of possible dangerous websites.

DNS Analysis

DNS provides a hierarchical naming system for computers, services, and any resources connected to the Internet or a private network. You should enable Domain Name System Security Extensions (DNSSEC) to ensure that a DNS server is authenticated before the transfer of DNS information begins between the DNS server and client. Transaction Signature (TSIG) is a cryptographic mechanism, often deployed alongside DNSSEC, that authenticates DNS messages such as dynamic updates, allowing a DNS server to securely update client resource records if their IP addresses or hostnames change. The TSIG record is used to validate a DNS client.

As a security measure, you can configure internal DNS servers to communicate only with root servers. When you do so, the internal DNS servers are prevented from communicating with any other external DNS servers.

The Start of Authority (SOA) record contains the information regarding a DNS zone's authoritative server. A DNS record's Time to Live (TTL) determines how long the record will live before it needs to be refreshed. When a record's TTL expires, the record is removed from the DNS cache. Poisoning the DNS cache involves adding false records to the cache. If you use a longer TTL, the resource record is read less frequently and therefore is less likely to be poisoned.

Let's look at a security issue that involves DNS. Suppose an IT administrator installs new DNS name servers that host the company mail exchanger (MX) records and resolve the web server's public address. To secure the zone transfer between the DNS servers, the administrator uses only server ACLs. However, any secondary DNS servers would still be susceptible to IP spoofing attacks.

Another scenario could occur when a security team determines that someone from outside the organization has obtained sensitive information about the internal organization by querying the company's external DNS server. The security manager should address the problem by implementing a split DNS server, allowing the external DNS server to contain only information about domains that the outside world should be aware of and enabling the internal DNS server to maintain authoritative records for internal systems.

Domain Generation Algorithm

A domain generation algorithm (DGA) is used by attackers to periodically generate large numbers of domain names that can be used as rendezvous points with their command and control servers. Detection efforts consist of using cumbersome blacklists that must be updated often. Figure 11-6 illustrates the use of a DGA.

FIGURE 11-6 Domain Generation Algorithm
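The defining property of a DGA is that both the malware and its operator can independently derive the same day's domain list from a shared seed, so there is no fixed domain to blacklist. The toy sketch below illustrates the idea only; real DGAs vary widely in construction, and the seed, hash choice, and domain format here are invented for illustration.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5):
    """Derive a deterministic daily list of pseudo-random domains from a seed."""
    domains = []
    state = f"{seed}-{day.isoformat()}".encode()
    for _ in range(count):
        # Each round hashes the previous state, so both ends of the channel
        # can reproduce the same sequence without communicating.
        state = hashlib.md5(state).digest()
        # Map hash bytes onto lowercase letters to form a 12-character label.
        label = "".join(chr(ord("a") + b % 26) for b in state[:12])
        domains.append(label + ".com")
    return domains
```

Because the list changes every day, defenders who rely on static blacklists must regenerate or re-learn the candidate domains continually, which is exactly the maintenance burden the text describes.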

Flow Analysis

To protect data during transmission, security practitioners should identify confidential and private information. Once this data has been properly identified, the following flow analysis steps should occur:

Step 1. Determine which applications and services access the information.
Step 2. Document where the information is stored.
Step 3. Document which security controls protect the stored information.
Step 4. Determine how the information is transmitted.
Step 5. Analyze whether authentication is used when accessing the information. If it is, determine whether the authentication information is securely transmitted. If it is not, determine whether authentication can be used.
Step 6. Analyze enterprise password policies, including password length, password complexity, and password expiration.
Step 7. Determine whether encryption is used to transmit data. If it is, ensure that the level of encryption is appropriate and that the encryption algorithm is adequate. If it is not, determine whether encryption can be used.
Step 8. Ensure that the encryption keys are protected.

Security practitioners should adhere to the defense-in-depth principle to ensure the confidentiality, integrity, and availability (CIA) of data across its entire life cycle. Applications and services should be analyzed to determine whether more secure alternatives can be used or whether inadequate security controls are deployed. Data at rest may require encryption to provide full protection and appropriate ACLs to ensure that only authorized users have access. For data transmission, secure protocols and encryption should be employed to prevent unauthorized users from intercepting and reading data. The most secure level of authentication possible should be used in the enterprise. Appropriate password and account policies can protect against possible password attacks. Finally, security practitioners should ensure that confidential and private information is isolated from other information, including locating the information on separate physical servers and isolating data using virtual LANs (VLANs).

Disable all unnecessary services, protocols, and accounts on all devices. Make sure that all firmware, operating systems, and applications are kept up to date, based on vendor recommendations and releases. When new technologies are deployed based on the changing business needs of the organization, security practitioners should be diligent to ensure that they understand all the security implications and issues of the new technology. Deploying a new technology before proper security analysis has occurred can result in security breaches that affect more than just the newly deployed technology. Remember that changes are inevitable! How you analyze and plan for these changes is what will set you apart from other security professionals.

NetFlow Analysis

NetFlow is a technology developed by Cisco, now supported by all major vendors, that can be used to collect and subsequently export IP traffic accounting information. The traffic information is exported using UDP packets to a NetFlow analyzer, which can organize the information in useful ways. NetFlow exports records of individual one-way transmissions called flows. When NetFlow is configured on a router interface, all packets that are part of the same flow share the following characteristics:

Source MAC address
Destination MAC address
IP source address
IP destination address
Source port
Destination port
Layer 3 protocol type
Class of service
Router or switch interface
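The grouping that a NetFlow analyzer performs can be sketched in a few lines: packets whose key fields match belong to the same flow, and summing bytes per flow key yields the "top talkers" view discussed below. This is a simplified illustration, keyed on a five-tuple only (the full key above also includes fields such as class of service and interface), and the packet dictionaries are invented for the example.

```python
from collections import Counter

def flow_key(pkt):
    """Build the tuple of header fields that identifies a one-way flow."""
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

def top_talkers(packets, n=3):
    """Sum bytes per flow and return the n highest-volume flows."""
    volume = Counter()
    for pkt in packets:
        volume[flow_key(pkt)] += pkt["bytes"]
    return volume.most_common(n)
```

An analyzer such as SolarWinds NetFlow Traffic Analyzer does essentially this aggregation at scale, then charts the results.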

Figure 11-7 shows the types of questions that can be answered by using the NetFlow information. When the flow information is received by the analyzer, it is organized and can then be used to identify the following:

The top protocols in use
The top talkers in the network
Traffic patterns throughout the day

In the example in Figure 11-8, the SolarWinds NetFlow Traffic Analyzer displays the top talking endpoints over the past hour.

FIGURE 11-7 Using NetFlow Data

Figure 11-8 NetFlow Data

There are a number of tools that can be used to perform flow analysis. Many of these tools are discussed in the next section.

Packet and Protocol Analysis

Point-in-time analysis captures data over a specified period of time and thus provides a snapshot of the situation at that point in time or across the specified time period. The types of analysis described in this section involve capturing the information and then analyzing it. Although these types of analysis all require different tools or processes, they all follow this paradigm.

Packet Analysis

Packet analysis examines an entire packet, including the payload. Its subset, protocol analysis, described next, is concerned only with the information in the header of the packet. In many cases, payload analysis is done when issues cannot be resolved by observing the header. While the header contains only the information used to get the packet from its source to its destination, the payload is the actual data being communicated. When performance issues are occurring and there is no sign of issues in the header, looking into the payload may reveal error messages related to the application in use that do not appear in the header. From a security standpoint, examining the payload can reveal data that is unencrypted but should be encrypted. It can also reveal sensitive information that should not be leaving the network. Finally, some attacks can be recognized by examining the application commands and requests within the payload.

Protocol Analysis

As you just learned, protocol analysis is a subset of packet analysis, and it involves examining information in the header of a packet. Protocol analyzers examine these headers for information such as the protocol in use and details involving the communication process, such as source and destination IP addresses and source and destination MAC addresses. From a security standpoint, these headers can also be used to determine whether the communication rules of the protocol are being followed.

Malware

The handling of malware was covered earlier in this chapter and is covered further in Chapter 12.

LOG REVIEW

While automated systems can certainly make log review easier, these tools are not available to all cybersecurity analysts, and they do not always catch everything. In some cases, manual log review must still be done. The following sections look at how log analysis is performed in the typical logs that relate to security.

Event Logs

Event logs can include security events, but other types of event logs exist as well. Figure 11-9 shows the Windows System log, which includes operating system events. The view has been filtered to show only error events. Error messages indicate that something did not work, warnings indicate a lesser issue, and informational events are normal operations.

Figure 11-9 System Log in Event Viewer

System logs record regular system events, including operating system and service events. Audit and security logs record successful and failed attempts to perform certain actions and require that security professionals specifically configure the actions that are audited. Organizations should establish policies regarding the collection, storage, and security of these logs. In most cases, the logs can be configured to trigger alerts when certain events occur. In addition, these logs must be periodically and systematically reviewed. Cybersecurity analysts should be trained on how to use these logs to detect when incidents have occurred. Having all the information in the world is no help if personnel do not have the appropriate skills to analyze it. For large enterprises, the amount of log data that needs to be analyzed can be quite large. For this reason, many organizations implement a SIEM device, which provides an automated solution for analyzing events and deciding where attention needs to be given.

Suppose an intrusion detection system (IDS) logged an attack attempt from a remote IP address. One week later, the attacker successfully compromised the network. In this case, it is likely that no one was reviewing the IDS event logs. Consider another example of insufficient logging and mechanisms for review. Say that an organization did not know its internal financial databases were compromised until the attacker published sensitive portions of the database on several popular attacker websites. The organization was unable to determine when, how, or who conducted the attacks but rebuilt, restored, and updated the compromised database server to continue operations. If the organization is unable to determine these specifics, it needs to look at the configuration of its system, audit, and security logs.

Syslog

Syslog is a protocol that can be used to collect logs from devices and store them in a central location called a Syslog server. Syslog provides a simple framework for log entry generation, storage, and transfer that any OS, security software, or application could use if designed to do so. Many log sources either use Syslog as their native logging format or offer features that allow their logging formats to be converted to Syslog format. Syslog messages all follow the same format because they have, for the most part, been standardized. The Syslog packet size is limited to 1024 bytes and carries the following information:

Facility: The source of the message. The source can be the operating system, the process, or an application.
Severity: Rated using the following scale:
0 Emergency: System is unusable.
1 Alert: Action must be taken immediately.
2 Critical: Critical conditions.
3 Error: Error conditions.
4 Warning: Warning conditions.
5 Notice: Normal but significant conditions.
6 Informational: Informational messages.
7 Debug: Debug-level messages.
Source: The log from which this entry came.
Action: The action taken on the packet.
Source: The source IP address and port number.
Destination: The destination IP address and port number.

Each Syslog message has only three parts. The first part specifies the facility and severity as numeric values. The second part of the message contains a timestamp and the hostname or IP address of the source of the log. The third part is the actual log message, with content as shown here:

seq no:timestamp: %facility-severity-MNEMONIC:description

In the following sample Syslog message, generated by a Cisco router, no sequence number is present (it must be enabled), the timestamp shows 47 seconds since the log was cleared, the facility is LINK (an interface), the severity is 3, the type of event is UP/DOWN, and the description is "Interface GigabitEthernet0/2, changed state to up":

00:00:47: %LINK-3-UPDOWN: Interface GigabitEthernet0/2, changed state to up

This example is a locally generated message on the router and not one sent to a Syslog server. When a message is sent to the Syslog server, it also includes the IP address of the device sending the message to the Syslog server. Figure 11-10 shows some output from a Syslog server that includes this additional information.

Figure 11-10 Syslog Server

The following is a standard Syslog message, and its parts are explained in Table 11-4:

*May 1 23:02:27.143: %SEC-6-IPACCESSLOGP: list ACL-IPv4-E0/0-IN permitted tcp 192.168.1.3(1026) -> 192.168.2.1(80), 1 packet

While Syslog message formats differ based on the device and the type of message, this is a typical format of a security-related message.

Table 11-4 Parts of a Standard Syslog Message

Time/day: *May 1 23:02:27.143
Facility: %SEC (security)
Severity: 6 Informational: Informational messages
Source: IPACCESSLOGP: list ACL-IPv4-E0/0-IN (name of access list)
Action: Permitted
From: 192.168.1.3 port 1026
To: 192.168.2.1 port 80
Amount: 1 packet
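Pulling the facility, severity, and mnemonic out of a Cisco-style message like the one above can be sketched with a short regular expression. This targets only the format shown here; as discussed next, Syslog content in general is free-form, so a real analysis tool needs one such parser per message format it encounters.

```python
import re

# Matches the "%FACILITY-SEVERITY-MNEMONIC: description" portion of a
# Cisco-style Syslog message.
SYSLOG_RE = re.compile(
    r"%(?P<facility>\w+)-(?P<severity>\d)-(?P<mnemonic>\w+):\s*(?P<description>.*)"
)

def parse_cisco_syslog(line: str):
    """Extract facility, severity, mnemonic, and description, or None."""
    m = SYSLOG_RE.search(line)
    if m is None:
        return None
    fields = m.groupdict()
    fields["severity"] = int(fields["severity"])  # 0 (Emergency) .. 7 (Debug)
    return fields
```

Filtering parsed messages by severity (for example, 0 through 3) is a common first triage step in manual log review.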

No standard fields are defined within the message content; it is intended to be human readable, not easily machine parsable. This provides very high flexibility for log generators, which can place whatever information they deem important within the content field, but it makes automated analysis of the log data very challenging. A single source may use many different formats for its log message content, so an analysis program needs to be familiar with each format and should be able to extract the meaning of the data from the fields of each format. This problem becomes much more challenging when log messages are generated by many sources. It might not be feasible to understand the meaning of all log messages, and analysis might be limited to keyword and pattern searches. Some organizations design their Syslog infrastructures so that similar types of messages are grouped together or assigned similar codes, which can make log analysis automation easier to perform.

As log security has become a greater concern, several implementations of Syslog have been created that place a greater emphasis on security. Most have been based on the IETF's RFC 3195, which was designed specifically to improve the security of Syslog. Implementations based on this standard can support log confidentiality, integrity, and availability through several features, including reliable log delivery, transmission confidentiality protection, and transmission integrity protection and authentication.

Kiwi Syslog Server

Kiwi Syslog Server is log management software that provides centralized storage of log data and SNMP data from Windows- and Linux-based hosts and appliances. While Kiwi combines the functions of SNMP collector and log manager, it lacks many of the features found in other systems; however, it is very economical.

Firewall Logs

Examining a firewall log can be somewhat daunting at first. But if you understand the basic layout and know what certain acronyms stand for, you can usually find your way around a firewall log. The following are some examples of common firewalls.

Windows Defender

The Windows operating system includes the Windows Defender Firewall. The default path for the log is %windir%\system32\logfiles\firewall\pfirewall.log. Figure 11-11 shows the Windows Defender Firewall with Advanced Security interface.

Figure 11-11 Windows Defender Interface

Check Point

A Check Point log follows this format:

Time | Action | Firewall | Interface | Product | Source | Source Port | Destination | Service | Protocol | Translation | Rule

Note These fields are used when allowing or denying traffic. Other actions, such as a change in an object, use different fields that are beyond the scope of this discussion.

Table 11-5 shows the meaning of each field.

Table 11-5 Check Point Firewall Fields

Time: Local time on the management station.
Action: Accept, deny, or drop. Accept means accept or pass the packet, deny means send a TCP reset or ICMP port unreachable message, and drop means drop the packet with no error to the sender.
Firewall: IP address or hostname of the enforcement point.
Interface: Firewall interface on which the packet was seen.
Product: Firewall software running on the system that generated the message.
Source: Source IP address of packet sender.
Destination: Destination IP address of packet.
Service: Destination port or service of packet.
Protocol: Usually a Layer 4 protocol of packet (TCP, UDP, and so on).
Translation: The new source or destination address. (This shows only if NAT is occurring.)
Rule: Rule number from the GUI rule base that caught this packet and caused the log entry. (This should be the last field, regardless of the presence or absence of other fields except for resource messages.)

This is what a line from the log might look like:

14:55:20 accept bd.pearson.com >eth1 product VPN-1 & Firewall-1 src 10.5.5.1 s_port 4523 dst xx.xxx.10.2 service http proto tcp xlatesrc xxx.xxx.146.12 rule 15

This is a log entry for permitted HTTP traffic sourced from inside (eth1) with NAT. Table 11-6 describes the meanings of the fields.

Table 11-6 Firewall Log Entry Field Meanings

Time: 14:55:20
Action: accept
Firewall: bd.pearson.com
Interface: eth1
Product: VPN-1 & Firewall-1
Source: 10.5.5.1 port 4523
Destination: xx.xxx.10.2
Service: http
Protocol: tcp
Translation: to xxx.xxx.146.12
Rule: rule 15

While other logs may be slightly different, if you understand the examples shown here, you should be able to figure them out pretty quickly.
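Because entries like the Check Point example above tag most fields with a keyword (src, dst, service, and so on), they lend themselves to simple keyword extraction. The sketch below is illustrative only, written against the single sample line shown earlier; real Check Point exports have other record types and fields that this does not handle.

```python
import re

def parse_checkpoint(line: str):
    """Pull the keyword-tagged fields out of a Check Point accept/deny entry."""
    entry = {}
    head = line.split()
    # Positional fields: time, action, firewall, then ">interface".
    entry["time"], entry["action"], entry["firewall"] = head[0], head[1], head[2]
    entry["interface"] = head[3].lstrip(">")
    # Keyword-value pairs such as "src 10.5.5.1" or "rule 15".
    for key in ("src", "s_port", "dst", "service", "proto", "xlatesrc", "rule"):
        m = re.search(rf"\b{key} (\S+)", line)
        if m:
            entry[key] = m.group(1)
    return entry
```

Once entries are in dictionary form, questions like "which rules are denying the most traffic" become simple counting exercises.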

Web Application Firewall (WAF)

A web application firewall (WAF) applies rule sets to an HTTP conversation and examines all web input before processing. These rule sets cover common attack types to which these session types are susceptible. Among the common attacks they address are cross-site scripting and SQL injection. A WAF can be implemented as an appliance or as a server plug-in. In appliance form, a WAF is typically placed directly behind the firewall and in front of the web server farm; Figure 11-12 shows an example.

Figure 11-12 Placement of a WAF

While all traffic is usually funneled inline through the device, some solutions monitor a port and operate out-of-band. Table 11-7 lists the pros and cons of these two approaches. Finally, WAFs can be installed directly on the web servers themselves. The security issues involved with WAFs include the following:

The IT infrastructure becomes more complex.
Training on the WAF must be provided with each new release of the web application.
Testing procedures may change with each release.
False positives may occur and can have a significant business impact.
Troubleshooting becomes more complex.
The WAF terminating the application session can potentially have an effect on the web application.

Table 11-7 Advantages and Disadvantages of WAF Placement Options

Inline
Advantages: Can prevent live attacks
Disadvantages: May slow web traffic; could block legitimate traffic

Out-of-band
Advantages: Nonintrusive; doesn't interfere with traffic
Disadvantages: Can't block live traffic

An example of a WAF log file is shown in Figure 11-13. In it you can see a number of entries regarding a detected threat attempting code tampering.

Figure 11-13 WAF Log File

Proxy

Proxy servers can be appliances, or they can be software that is installed on a server operating system. These servers act like a proxy firewall in that they create the web connection between systems on their behalf, but they can typically allow and disallow traffic on a more granular basis. For example, a proxy server may allow the Sales group to go to certain websites while not allowing the Data Entry group access to those same sites. The functionality extends beyond HTTP to other traffic types, such as FTP traffic.

Proxy servers can provide an additional beneficial function called web caching. When a proxy server is configured to provide web caching, it saves a copy of all web pages that have been delivered to internal computers in a web cache. If any user requests the same page later, the proxy server has a local copy and need not spend the time and effort to retrieve it from the Internet. This greatly improves web performance for frequently requested pages. Figure 11-14 shows a view of a proxy server log. This is from the proxy server CCProxy for Internet monitoring. This view shows who is connected and what they are doing.

Figure 11-14 Proxy Server Log

Intrusion Detection System (IDS)/Intrusion Prevention System (IPS)

An intrusion detection system (IDS) creates a log of every event that occurs. An intrusion prevention system (IPS) goes one step further and can take actions to stop an intrusion. Figure 11-15 shows output from an IDS. In the output, you can see that for each intrusion attempt, the source and destination IP addresses and port numbers are shown, along with a description of the type of intrusion. In this case, all the alerts have been generated by the same source IP address. Because this is a private IP address, it is coming from inside your network. It could be a malicious individual, or it could be a compromised host under the control of external forces. As a cybersecurity analyst, you should either block that IP address or investigate to find out who has that IP address.
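The triage just described, ranking alert sources and separating internal (private) addresses from external ones, can be sketched in a few lines. The alert records here are hypothetical dictionaries; a real IDS export would need parsing into this shape first.

```python
import ipaddress
from collections import Counter

def top_sources(alerts, n=5):
    """Rank alert source addresses so the noisiest hosts get attention first."""
    return Counter(a["src_ip"] for a in alerts).most_common(n)

def internal_sources(alerts):
    """Return alert sources in private address space, likely internal hosts."""
    return sorted({a["src_ip"] for a in alerts
                   if ipaddress.ip_address(a["src_ip"]).is_private})
```

A private source address appearing repeatedly in this output is the "compromised internal host or malicious insider" case discussed above: a candidate for blocking or for identifying the user behind it.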

FIGURE 11-15 IDS Log

While the logs are helpful, one of the real values of an IDS is its ability to present the data it collects in meaningful ways in reports. For example, Figure 11-16 shows a pie chart created to show the intrusion attempts and the IP addresses from which the intrusions were sourced.

Figure 11-16 IDS Report Showing Blocked Intrusions by Sources

Sourcefire

Sourcefire (now owned by Cisco) created products based on Snort (covered in the next section). The devices Sourcefire created were branded as Firepower appliances. These products were next-generation IPSs (NGIPSs) that provided network visibility into hosts, operating systems, applications, services, protocols, users, content, network behavior, and network attacks and malware. Sourcefire also included integrated application control, malware protection, and URL filtering. Figure 11-17 shows the Sourcefire Defense Center displaying the numbers of events in the last hour in a graph. All the services provided by these products are now incorporated into Cisco firewall products. For more information on Sourcefire, see https://www.cisco.com/c/en/us/services/acquisitions/sourcefire.html.

Figure 11-17 Sourcefire

Snort

Snort is an open source NIDS on which Sourcefire products are based. It can be installed on Fedora, CentOS, FreeBSD, and Windows. The installation files are free, but you need a subscription to keep rule sets up to date. Figure 11-18 shows a Snort report that has organized the traffic in the pie chart by protocol. It also lists all events detected by the various signatures that have been installed. If you scan through the list, you can see attacks such as URL host spoofing, oversized packets, and, in row 10, a SYN FIN scan.
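Detections like these are produced by Snort rules. The following is a hypothetical rule written in Snort's rule syntax to flag the kind of SYN FIN scan noted above; the message text and sid are invented for illustration:

```
# Hypothetical Snort rule: alert on TCP packets with both SYN and FIN
# flags set, a combination that never occurs in legitimate traffic
alert tcp any any -> $HOME_NET any (msg:"SYN FIN scan detected"; flags:SF; sid:1000001; rev:1;)
```

The flags:SF option matches packets with exactly the SYN and FIN bits set; sids at 1000000 and above are reserved for locally written rules.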

Figure 11-18 Snort

Zeek

Zeek is another open source NIDS. It is supported only on Unix/Linux platforms, and it is not as user friendly as Snort, in that configuring it requires more expertise. Like many other open source products, it is supported by a nonprofit organization, the Software Freedom Conservancy.

HIPS

A host-based IPS (HIPS) monitors traffic on a single system. Its primary responsibility is to protect the system on which it is installed. HIPSs typically work closely with anti-malware products and host firewall products. They generally monitor the interaction of sites and applications with the operating system and stop any malicious activity or, in some cases, ask the user to approve changes that the application or site would like to make to the system. An example of a HIPS is SafenSoft SysWatch.

IMPACT ANALYSIS

When the inevitable security event occurs, especially if it results in a successful attack, the impact of the event must be determined. Impact analysis must be performed on several levels to yield useful information. In Chapter 15, “The Incident Response Process,” and Chapter 16, “Applying the Appropriate Incident Response Procedure,” you will learn more about the incident response process, but for now understand that the purpose of an impact analysis is to

Identify which systems were impacted
Determine what role the quality of the response played in the severity of the issue
Associate the attack type with the systems that were impacted, for future reference

Organization Impact vs. Localized Impact

Always identify the boundaries of the attack or issue if possible. One set of impacts may affect one small area or environment, while another set of issues may impact a larger area. Defining those boundaries helps you anticipate the scope of a similar attack in the future. You might find yourself in a scenario where one office or LAN is affected while others are not (a localized impact). Even when that is the case, it could result in a wider organizational impact. For example, if a local office hosts all the database servers and the attack is local to that office, it could mean database issues for the entire organization.

Immediate Impact vs. Total Impact

While many attacks cause an immediate issue, some attacks (especially some of the more serious) take weeks or months to reveal their damage. When attacks occur, be aware of such a lag in the effect and ensure that you continue to gather information that can be correlated with previous attacks. The immediate impact is what you see that alerts you, but the total impact might not be known for weeks.

SECURITY INFORMATION AND EVENT MANAGEMENT (SIEM) REVIEW

For large enterprises, the amount of log data that needs to be analyzed can be quite large. For this reason, many organizations implement security information and event management (SIEM), which provides an automated solution for analyzing events and deciding where attention needs to be focused. Most SIEM products support two ways of collecting logs from log generators:

Agentless: With this type of collection, the SIEM server receives data from the individual hosts without needing to have any special software installed on those hosts. Some servers pull logs from the hosts, which is usually done by having the server authenticate to each host and retrieve its logs regularly. In other cases, the hosts push their logs to the server, which usually involves each host authenticating to the server and transferring its logs regularly. Regardless of whether the logs are pushed or pulled, the server then performs event filtering and aggregation and log normalization and analysis on the collected logs.

Agent-based: With this type of collection, an agent program is installed on the host to perform event filtering and aggregation and log normalization for a particular type of log. The host then transmits the normalized log data to a SIEM server, usually on a real-time or near-real-time basis, for analysis and storage. Multiple agents may need to be installed if a host has multiple types of logs of interest. Some SIEM products also offer agents for generic formats such as Syslog and SNMP. A generic agent is used primarily to get log data from a source for which a format-specific agent and an agentless method are not available. Some products also allow administrators to create custom agents to handle unsupported log sources.

There are advantages and disadvantages to each method. The primary advantage of the agentless approach is that agents do not need to be installed, configured, and maintained on each logging host. The primary disadvantage is the lack of filtering and aggregation at the individual host level, which can cause significantly larger amounts of data to be transferred over networks and increase the amount of time it takes to filter and analyze the logs. Another potential disadvantage of the agentless method is that the SIEM server may need credentials for authenticating to each logging host. In some cases, only one of the two methods is feasible; for example, there might be no way to remotely collect logs from a particular host without installing an agent onto it.

Rule Writing

One of the keys to a successful SIEM implementation is the same issue you face with your firewall, IDS, and IPS implementations: how to capture useful and actionable information while reducing the amount of irrelevant data (noise) in the collection process. Moreover, you want to reduce the number of errors (false positives and false negatives) the SIEM system makes. To review these error types, see Chapter 3, “Vulnerability Management Activities.” The key to reducing noise and errors is to write rules that guide the system in making decisions. Rules are classified by rule type. Some example rule types are

Single event rule: If condition A happens, trigger an action.

Many-to-one or one-to-many rules: If condition A happens, several scenarios are in play.

Cause-and-effect rules: If condition A matches and leads to condition B, take an action. Example: “password-guessing failure followed by successful login” scenarios.

Transitive rules or tracking rules: The target in the first event (a malware infection) becomes the source in the second event (malware infection of another machine). This is typically used in worm/malware outbreak scenarios.

Trending rules: Track several conditions over a time period, based on thresholds. This applies in DoS or DDoS scenarios.
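As a rough illustration of the cause-and-effect idea, the following shell sketch scans a hypothetical auth log (the log format and user names are invented) and alerts when three or more failed logins for a user are followed by a success:

```shell
# Hypothetical auth log: "<timestamp> <user> <result>", one event per line
cat > auth.log <<'EOF'
10:00:01 bob FAIL
10:00:03 bob FAIL
10:00:05 bob FAIL
10:00:09 bob OK
10:00:11 alice OK
EOF

# Cause-and-effect rule sketch: count failures per user; when a success
# arrives for a user with 3+ prior failures, raise an alert
awk '$3 == "FAIL" { fails[$2]++ }
     $3 == "OK" && fails[$2] >= 3 { print "ALERT: password guessing suspected for " $2 }' auth.log
```

A real SIEM expresses this as a correlation rule rather than a script, but the logic is the same: one condition arms the rule and a second condition fires it.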

Known-Bad Internet Protocol (IP)

Many SIEM solutions include the capability to recognize IP addresses and domain names from which malicious traffic has been sourced in the past. IP, URL, and domain reputation data is derived from the aggregated information of all the customers of the SIEM solution. The system then prioritizes response efforts by identifying known bad actors and infected sites. This reputational data has another use as well: if your organization’s IP addresses or domains appear on a blacklist or in a hacker forum, chances are that one or more of your systems have been compromised, and compromise of public systems such as web servers can be just the tip of the iceberg, requiring further investigation.

Dashboard

SIEM products usually include support for several dozen types of log sources, such as OSs, security software, application servers (for example, web servers and e-mail servers), and even physical security control devices such as badge readers. For each supported log source type, except for generic formats such as Syslog, the SIEM products typically know how to categorize the most important logged fields. This significantly improves the normalization, analysis, and correlation of log data over that performed by software with a less granular understanding of specific log sources and formats. Also, the SIEM software can perform event reduction by disregarding data fields that are not significant to computer security, potentially reducing the SIEM software’s network bandwidth and data storage usage. Figure 11-19 shows output from a SIEM system. Notice the various types of events that have been recorded.

Figure 11-19 SIEM Output

The tool in Figure 11-19 shows the name or category within which each alert falls (Name column), the attacker’s address (if captured), the target IP address, and the priority of the alert (Priority column, denoted by color). Given this output, the suspicious FTP traffic (high priority) needs to be investigated. While only three alerts are shown on this page, if you look at the top-right corner, you can see that there are a total of 83 high-priority alerts, many of which are likely to be suspicious e-mail attachments. The following are examples of product dashboards:

ArcSight: ArcSight, owned by HP, sells SIEM systems that collect security log data from security technologies, operating systems, applications, and other log sources and analyze that data for signs of compromise, attacks, or other malicious activity. The solution comes in a number of models, based on the number of events the system can process per second and the number of devices supported. Selecting the right model is important to ensure that the device is not overwhelmed trying to process the traffic. This solution can also generate compliance reports for HIPAA, SOX, and PCI DSS. For more information, see https://www.microfocus.com/en-us/products/siem-log-management/overview.

QRadar: The IBM SIEM solution, QRadar, purports to help eliminate noise by applying advanced analytics to chain multiple incidents together and identify security offenses requiring action. Purchase also permits access to the IBM Security App Exchange for threat collaboration and management. For more information, see https://www.ibm.com/security/security-intelligence/qradar.

Splunk: Splunk is a SIEM system that can be deployed as a premises-based or cloud-based solution. The data it captures can be analyzed using searches written in Splunk Search Processing Language (SPL). Splunk uses machine-driven data imported by connectors or add-ons. For example, the Splunk add-on for Oracle Database allows a Splunk software administrator to collect and ingest data from an Oracle database server. See more at https://www.splunk.com/en_us/cyber-security.html.

AlienVault/OSSIM: AlienVault (now AT&T Cybersecurity) produces both commercial and open source SIEM systems. Open Source Security Information Management (OSSIM) is the open source version, and the commercially available AlienVault Unified Security Management (USM) goes beyond traditional SIEM software with all-in-one security essentials and integrated threat intelligence. Figure 11-20 shows the Executive view of the AlienVault USM console. See more at https://cybersecurity.att.com/.

Figure 11-20 AlienVault

QUERY WRITING

Queries are simply questions formed and used to locate data that matches specific characteristics. Query writing is the process of forming a query that locates the information you are looking for. Properly formed queries can help you locate the security needle in the haystack when analyzing log data. We’ve already discussed the vast amount of information that can be collected by SIEM and other types of systems.

Sigma is an open standard for writing rules that allow you to describe searches on log data in generic form. These rules can be converted and applied to many log management or SIEM systems and can even be used with grep on the command line. In the following example, the sigmac converter translates a Sigma rule (sysmon_quarkspw_filedump.yml) into a search for the Splunk backend; the converter’s output, a Splunk query looking for an event with an ID of 11, appears on the second line:

$ python3 sigmac -t splunk ../rules/windows/sysmon/sysmon_quarkspw_filedump.yml
(EventID="11" TargetFileName="*\AppData\Local\Temp\SAM-*.dmp*")
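For illustration, a minimal Sigma rule of the same general shape might look like the following. This is a sketch, not the actual rule from the Sigma repository; the title and level are invented:

```
title: Password Dump File Written to Temp
logsource:
    product: windows
    service: sysmon
detection:
    selection:
        EventID: 11
        TargetFileName: '*\AppData\Local\Temp\SAM-*.dmp*'
    condition: selection
level: high
```

The logsource section tells the converter which backend field mappings to apply, and the detection/condition pair is what becomes the Splunk search shown above.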

String Search

String searches are used to look within a log file or data stream and locate any instances of a given string. A string can be any combination of letters, numbers, and other characters. String searches are used to locate malware and to locate strings that are used in attacks or typically accompany an attack. String searches can be performed by using either search algorithms or regular expressions, but many audit tools such as SIEMs (and many sniffers as well) offer GUI tools that allow you to form the search by choosing from options. Figure 11-21 shows a simple search formed in Splunk to filter out all but items including either of two strings, visudo or usermod.

Script

Scripts can be used to combine and orchestrate functions or to automate responses. A simple example is the following shell fragment, which strips all lowercase letters from a password and, if nothing was removed, reports that the password contains no lowercase letters:

chop=$(echo "$password" | sed -E 's/[[:lower:]]//g')
echo "chopped to $chop"
if [ "$password" == "$chop" ]; then
    echo "Fail: You haven't used any lowercase letters."
fi

Figure 11-21 String Search in Splunk

Piping

Piping is the process of sending the output of one function to another function as its input. Piping is used in scripting to link functions together and orchestrate their operation. The symbol | denotes a pipe. For example, in the following Linux command, piping the output of the cat filename command to the less command alters the display of the output:

cat filename | less

Normally the output would scroll all the way to the end of the file; the less command prevents that from occurring by displaying one screen at a time. Another use of piping is to search for a set of items and then use a second function to search within that set or to perform some process on that output.
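Combining string searches with pipes looks like the following sketch, which narrows a log to lines containing one string, then a second, and finally counts the matches. The sample log content is invented:

```shell
# Invented sample log for illustration
printf 'usermod run by root\nvisudo opened by alice\nusermod run by alice\n' > sample.log

# First search for one string, then search within that result set,
# then hand the narrowed output to a counting function
grep 'usermod' sample.log | grep 'alice' | wc -l
```

Each pipe hands the previous command's output to the next, so the final count reflects only lines matching both strings.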

E-MAIL ANALYSIS

One of the most popular avenues of attack is a tool we all must use every day: e-mail. This section covers several attacks that use e-mail as the vehicle. In most cases the best way to prevent these attacks is user training and awareness, because many of these attacks rely on poor security practices on the part of the user. E-mail analysis is a part of security monitoring.

E-mail Spoofing

E-mail spoofing is the process of sending an e-mail that appears to come from one source when it really comes from another. It is made possible by altering e-mail header fields such as From, Return-Path, and Reply-To. Its purpose is to convince the receiver to trust the message and reply to it with some sensitive information that the receiver would not have shared otherwise. Often this is one step in an attack designed to harvest usernames and passwords for banking or financial sites. This attack can be mitigated in several ways. One is SMTP authentication, which, when enabled, disallows the sending of e-mail by a user who cannot authenticate with the sending server.

Malicious Payload

E-mail is a frequent carrier of malware; in fact, e-mail is the most common vehicle for infecting computers with malware. You should employ malware scanning software on both the client machines and the e-mail server. Despite this measure, malware can still get through, so it is imperative to educate users to follow safe e-mail handling procedures (such as not opening attachments from unknown sources). Training users is critical.

DomainKeys Identified Mail (DKIM)

DomainKeys Identified Mail (DKIM) allows you to verify the source of an e-mail. It provides a method for validating a domain name identity associated with a message through cryptographic authentication. Figure 11-22 shows the process. As you can see, the e-mail server verifies the domain name (actually, what’s called the DKIM signature) with the DNS server before delivering the e-mail.

Figure 11-22 DKIM Process

Sender Policy Framework (SPF)

Another possible mitigation technique is to implement Sender Policy Framework (SPF). SPF is an e-mail validation system that uses DNS to determine whether an e-mail was sent by a host sanctioned by that domain’s administrator. If the message can’t be validated, it is not delivered to the recipient’s box.

Domain-based Message Authentication, Reporting, and Conformance (DMARC)

Domain-based Message Authentication, Reporting, and Conformance (DMARC) is an e-mail authentication and reporting protocol that improves e-mail security; all U.S. federal agencies are required to implement this standard. The underlying protocols (SPF and DKIM) authenticate e-mails to ensure they are coming from a valid source. A DMARC policy allows a sender’s domain to indicate that its e-mails are protected by SPF and/or DKIM, and tells a receiver what to do if neither of those authentication methods passes, such as reject the message or quarantine it. Figure 11-23 illustrates a workflow whereby DMARC implements both SPF and DKIM, along with a virus filter.
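SPF, DKIM, and DMARC policies are all published as DNS TXT records on the sending domain. The following zone-file sketch uses an invented domain, an invented DKIM selector (s1), and a truncated public key purely for illustration:

```
; SPF: mail may come from this domain's MX hosts and a designated include
example.com.                 IN TXT "v=spf1 mx include:_spf.example.com -all"

; DKIM: the public key receivers use to verify signatures (key truncated)
s1._domainkey.example.com.   IN TXT "v=DKIM1; k=rsa; p=MIGfMA0G..."

; DMARC: quarantine failing mail and send aggregate reports
_dmarc.example.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

A receiving server looks up these records to validate the sending host (SPF), verify the message signature (DKIM), and decide what to do on failure (DMARC).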

Figure 11-23 DMARC

Phishing

Phishing is a social engineering attack in which attackers try to learn personal information, including credit card information and financial data. This type of attack is usually carried out by implementing a fake website that very closely resembles a legitimate website. Users enter data, including credentials, on the fake website, allowing the attackers to capture any information entered. As part of assessing your environment, you should send out phishing e-mails to assess the willingness of your users to respond. A high number of successes indicates that users need training to prevent successful phishing attacks.

Spear Phishing

Spear phishing is a phishing attack aimed at a specific person rather than a random set of people. The attack is made more convincing by learning details about the person, through social media, for example, that the e-mail can reference to boost its appearance of legitimacy. Because of the information that must be gathered about the target’s habits and likes, spear phishing attacks take longer to carry out than generic phishing attacks.

Whaling

Just as spear phishing is a subset of phishing, whaling is a subset of spear phishing. It targets a single person of significance or importance, such as a CEO, COO, or CTO. The attack is based on the assumption that these people have more sensitive information to divulge.

Note
Pharming is similar to phishing, but pharming actually pollutes the contents of a computer’s DNS cache so that requests to a legitimate site are routed to an alternate site.

Caution users against using any links embedded in e-mail messages, even if a message appears to have come from a legitimate entity. Users should also review the address bar any time they access a site where their personal information is required, to ensure that the site is correct and that SSL/TLS is being used, which is indicated by an HTTPS designation at the beginning of the URL.

Forwarding

No one enjoys the way our inboxes fill every day with unsolicited e-mails, usually trying to sell us something. In many cases we bring this e-mail on ourselves by not paying close attention to all the details when we buy something or visit a site. E-mail sent out on a mass basis without being requested is called spam. Spam is more than an annoyance: it can clog inboxes and cause e-mail servers to spend resources delivering it. Sending spam is illegal, so many spammers try to hide its source by relaying or forwarding it through other corporations’ e-mail servers. Not only does this practice hide the e-mail’s true source, but it can get the relaying company in trouble. Today’s e-mail servers can deny relaying to any e-mail servers that you do not specify, which can prevent your e-mail system from being used as a spamming mechanism. This type of relaying should be disallowed on your e-mail servers. In addition, spam filters can be implemented on personal e-mail, such as web-based e-mail clients.

Digital Signature

A digital signature added to an e-mail is a hash value encrypted with the sender’s private key. A digital signature provides authentication, non-repudiation, and integrity. A blind signature is a form of digital signature in which the contents of the message are masked before it is signed. The process for creating a digital signature is as follows:

1. The signer obtains a hash value for the data to be signed.
2. The signer encrypts the hash value using her private key.
3. The signer attaches the encrypted hash and a copy of her public key in a certificate to the data and sends the message to the receiver.

The process for verifying the digital signature is as follows:

1. The receiver separates the data, encrypted hash, and certificate.
2. The receiver obtains the hash value of the data.
3. The receiver verifies that the public key is still valid by using the PKI.
4. The receiver decrypts the encrypted hash value using the public key.
5. The receiver compares the two hash values. If the values are the same, the message has not been changed.

Public key cryptography, discussed in Chapter 8, “Security Solutions for Infrastructure Management,” is used to create digital signatures. Users register their public keys with a certificate authority (CA), which distributes a certificate containing the user’s public key and the CA’s digital signature. The CA computes that digital signature over the user’s public key and validity period combined with the certificate issuer and digital signature algorithm identifier. The Digital Signature Standard (DSS) is a U.S. federal government standard that governs the Digital Signature Algorithm (DSA). DSA generates a message digest of 160 bits. The U.S. federal government requires the use of DSA, RSA, or Elliptic Curve DSA (ECDSA) and SHA for digital signatures. DSA is slower than RSA and provides only digital signatures. RSA provides digital signatures, encryption, and secure symmetric key distribution. In a review of cryptography, keep the following facts in mind:

Encryption provides confidentiality.
Hashing provides integrity.
Digital signatures provide authentication, non-repudiation, and integrity.
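The sign-and-verify steps above can be sketched with the OpenSSL command line. This is an illustrative flow using RSA with SHA-256 rather than DSA, and the file names and message are invented:

```shell
# Signer's key pair: a private key and the public key shared with receivers
openssl genpkey -algorithm RSA -out key.pem 2>/dev/null
openssl pkey -in key.pem -pubout -out pub.pem

# The message to be signed
printf 'wire $500 to account 123\n' > msg.txt

# Signer: hash the message with SHA-256 and encrypt the hash with the private key
openssl dgst -sha256 -sign key.pem -out msg.sig msg.txt

# Receiver: recompute the hash and check it against the decrypted signature
openssl dgst -sha256 -verify pub.pem -signature msg.sig msg.txt
```

If the message is altered after signing, the recomputed hash no longer matches and the verify step fails, which is exactly the integrity check described in step 5 above.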

E-mail Signature Block

An e-mail signature block is a set of information, such as name, e-mail address, company title, and credentials, that usually appears at the end of an e-mail. Many organizations choose to standardize the layout of the signature block and its contents to achieve consistency in how the company appears to the outside world. Another reason, however, is to prevent the disclosure of information that could be used at some point for an attack.

It is also important to control what users can put in these signature blocks to ensure that users do not inadvertently create a legal obligation on the part of the organization.

Embedded Links

E-mails often contain embedded links. A link may appear to lead to one site based on the text displayed on the page, but if you hover over the link, the actual destination is revealed, and the two may be completely different. There is not much you can do to prevent users from clicking these links other than training them to review embedded links in e-mails before accessing them.

Impersonation

Impersonation is the process of acting as another entity to gain unauthorized access to resources or networks. It can be done by adopting another’s IP address, MAC address, or user account. It can also be done at the e-mail level: e-mail can be spoofed by altering the SMTP header, which allows the e-mail to appear to come from another source.

EXAM PREPARATION TASKS As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 11-8 lists a reference of these key topics and the page numbers on which each is found.

Table 11-8 Key Topics in Chapter 11

Key Topic Element | Description | Page Number
Figure 11-1 | Trend analysis | 321
Bulleted list | Endpoint protection platforms (EPPs) | 322
Bulleted list | Malware types | 323
Bulleted list | Virus types | 324
Figure 11-2 | Botnet | 326
Bulleted list | Integrity checking methods | 327
Bulleted list | Reverse engineering tools | 328
Figure 11-3 | Secure memory | 331
Figure 11-4 | Runtime data integrity check | 331
Bulleted list | Memory-reading tools | 332
Table 11-2 | Runtime debugging tools | 332
Bulleted list | Indicators of a compromised application | 334
Bulleted list | Social engineering threats | 335
Bulleted list | Server attacks | 337
Bulleted list | Database security terms | 339
Table 11-3 | SFC switches | 341
Section | Description of user and entity behavior analytics (UEBA) | 341
Figure 11-6 | Domain generation algorithm | 344
Bulleted list | Syslog information | 350
Table 11-4 | Parts of a standard Syslog message | 352
Table 11-5 | Check Point Firewall fields | 354
Section | Description of web application firewall (WAF) | 355
Table 11-7 | Advantages and disadvantages of WAF placement options | 356
Bulleted list | SIEM collection types | 362
Bulleted list | Search rule types | 363
Figure 11-22 | DKIM process | 368
Numbered lists | Processes for creating and verifying a digital signature | 371

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary: heuristics, trend analysis, NIST SP 800-128, mobile code, virus, worm, Trojan horse, logic bomb, adware, spyware, botnet, rootkit, ransomware, reverse engineering, isolation, sandboxing, hashing, decomposition, runtime data integrity check, secured memory, memory dumping, runtime debugging, rogue endpoints, rogue access points, denial-of-service (DoS) attack, buffer overflow, emanations, backdoor/trapdoor, inference, aggregation, contamination, user and entity behavior analytics (UEBA), domain generation algorithm (DGA), flow analysis, NetFlow, packet analysis, protocol analysis, Syslog, web application firewall (WAF), proxy, intrusion detection system (IDS), intrusion prevention system (IPS), impact analysis, security information and event management (SIEM), query writing, string searches, piping, DomainKeys Identified Mail (DKIM), Domain-based Message Authentication, Reporting, and Conformance (DMARC), Sender Policy Framework (SPF), phishing, forwarding, digital signature, e-mail signature block, embedded links, impersonation

REVIEW QUESTIONS

1. ActiveX, Java, and JavaScript are examples of _______________.
2. List and define at least two types of viruses.
3. Match the following terms with their definitions.

Terms: Rootkit, Ransomware, Reverse engineering, Sandbox

Definitions:
- Taking something apart to discover how it works and perhaps to replicate it
- Place where it is safe to probe and analyze malware
- A set of tools that a hacker can use on a computer after he has managed to gain access and elevate his privileges to administrator
- Prevents or limits users from accessing their systems until they pay money
4. ____________________ is a partition designated as security-sensitive.
5. List and define at least two forms of social engineering.
6. Match the following terms with their definitions.

Terms: Emanations, Buffer overflow, Mobile code, Backdoor/trapdoor

Definitions:
- A mechanism implemented in many devices or applications that gives the user who uses the backdoor unlimited access to the device
- Software that is transmitted across a network to be executed on a local system
- Electromagnetic signals that are emitted by an electronic device
- Occurs when the amount of data that is submitted to an application is larger than the buffer can handle
7. ______________________________ is a technology developed by Cisco that is supported by all major vendors and can be used to collect and subsequently export IP traffic accounting information.
8. List at least two parts of a Syslog message.
9. Match the following terms with their definitions.

Terms: IPS, WAF, Proxy, IDS

Definitions:
- System that can alert when a security event occurs
- A server, application, or appliance that acts as an intermediary for requests from clients seeking resources from servers
- System that can take an action when a security event occurs
- System that examines all web input before processing and applies rule sets to an HTTP conversation
10. ______________________ enables you to verify the source of an e-mail.

Chapter 12

Implementing Configuration Changes to Existing Controls to Improve Security

This chapter covers the following topics related to Objective 3.2 (Given a scenario, implement configuration changes to existing controls to improve security) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Permissions: Discusses the importance of proper permissions management.
Whitelisting: Covers the process of whitelisting and its indications.
Blacklisting: Describes a blacklisting process used to deny access.
Firewall: Identifies key capabilities of various firewall platforms.
Intrusion prevention system (IPS) rules: Discusses rules used to automate response.
Data loss prevention (DLP): Covers the DLP process used to prevent exfiltration.
Endpoint detection and response (EDR): Describes a technology that addresses the need for continuous monitoring.
Network access control (NAC): Identifies the processes used by NAC technology.
Sinkholing: Discusses the use of this networking tool.
Malware signatures: Describes the importance of malware signatures and development/rule writing.

Sandboxing: Reviews the use of this software virtualization technique to isolate apps from critical system resources.
Port Security: Covers the role of port security in preventing attacks.

In many cases, security monitoring data indicates a need to change or implement new controls to address new threats. These changes might be small configuration adjustments to a security device, or they might include large investments in new technology. Regardless of the scope, these actions should be driven by the threat at hand, and the controls should be subjected to the same cost/benefit analysis as all other organizational activities.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these 12 self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 12-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 12-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section

Question

Permissions

1

Whitelisting

2

Blacklisting

3

Firewall

4

Intrusion Prevention System (IPS) Rules

5

Data Loss Prevention (DLP)

6

Endpoint Detection and Response (EDR)

7

Network Access Control (NAC)

8

Sinkholing

9

Malware Signatures

10

Sandboxing

11

Port Security

12

1. Which of the following is an example of a right and not a permission?
1. Read access to a file
2. Ability to delete a file
3. Ability to reset passwords
4. Ability to change the permissions of a file

2. When you allow a file type at the exclusion of all other file types, you have created what?
1. Whitelist
2. Access list
3. Blacklist
4. Graylist

3. Which of the following requires the most effort to maintain?
1. Whitelist
2. Access list
3. Blacklist
4. Graylist

4. Which of the following is a category of devices that attempt to address traffic inspection and application awareness shortcomings of a traditional stateful firewall?
1. NGFW
2. Bastion host
3. Three-legged firewall
4. Proxy

5. Which of the following is a type of IPS and is an expert system that uses a knowledge base, an inference engine, and programming?
1. Rule-based
2. Signature-based
3. Heuristics-based
4. Error-based

6. Preventing data exfiltration is the role of which of the following?
1. Trend analysis
2. DLP
3. NAC
4. Port security

7. Which of the following shifts security from a reactive threat approach to one that can detect and prevent threats before they reach the organization?
1. NAC
2. DAC
3. EDR
4. DLP

8. Which of the following is a service that goes beyond authentication of the user and includes examination of the state of the computer the user is introducing to the network when making a remote-access or VPN connection to the network?
1. NAC
2. DAC
3. EDR
4. DLP

9. Which of the following can be used to prevent a compromised host from communicating back to the attacker?
1. Sinkholing
2. DNSSec
3. NAC
4. Port security

10. Which of the following could be a filename or could be some series of characters that can be tied uniquely to the malware?
1. Key
2. Signature
3. Fingerprint
4. Scope

11. Which of the following allows you to run a possibly malicious program in a safe environment so that it doesn’t infect the local system?
1. Sandbox
2. Secure memory
3. Secure enclave
4. Container

12. Which of the following is referred to as Layer 2 security?
1. Sandbox
2. Port security
3. Encoding
4. Subnetting

FOUNDATION TOPICS

PERMISSIONS

Permissions are granted or denied at the file, folder, or other object level. Common permission types include Read, Write, and Full Control. Data custodians or administrators grant users permissions on a file or folder based on the file owner’s request to do so.

Rights allow administrators to assign specific privileges and logon rights to groups or users. Rights manage who is allowed to perform certain operations on an entire computer or within a domain, rather than on a particular object within a computer. While user permissions are granted by an object’s owner, user rights are assigned using a computer’s local security policy or a domain security policy. User rights apply to user accounts, while permissions apply to objects. Rights include the ability to log on to a system interactively, which is a logon right, or the ability to back up files, which is considered a privilege.

User rights are divided into two categories: privileges and logon rights. Privileges are the right of an account, such as a user or group account, to perform various system-related operations on the local computer, such as shutting down the system, loading device drivers, or changing the system time. Logon rights control how users are allowed access to the computer, whether logging on locally or through a network connection, or whether as a service or as a batch job.

Conflicts can occur in situations where the rights that are required to administer a system overlap the rights of resource ownership. When rights conflict, a privilege overrides a permission.
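The precedence rule can be sketched in a few lines of Python. This is an illustrative model only, not a real Windows API; the privilege name (borrowed from Windows for flavor) and the ACL structure are assumptions made for the example:

```python
# Illustrative model of "a privilege overrides a permission" when rights
# that administer a system conflict with resource ownership.
def can_read_file(user: str, file_acl: dict, privileges: set) -> bool:
    # A backup-style privilege wins even when the object's ACL
    # does not grant the user access.
    if "SeBackupPrivilege" in privileges:
        return True
    return user in file_acl.get("read", set())
```

For example, a backup operator who holds the privilege but no Read permission can still read the file, which is exactly the conflict described above.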

WHITELISTING AND BLACKLISTING

Whitelisting occurs when a list of acceptable e-mail addresses, Internet addresses, websites, applications, or some other identifier is configured as good senders or as allowed to send. Blacklisting identifies bad senders. Graylisting is somewhere in between the two, listing entities that cannot be identified as whitelist or blacklist items. In the case of graylisting, the new entity must pass through a series of tests to determine whether it will be whitelisted or blacklisted. Whitelisting, blacklisting, and graylisting are commonly used with spam filtering tools, but there are other uses for whitelists and blacklists as well. They are used in routers to enforce ACLs and in switches to enforce port security.

Application Whitelisting and Blacklisting

Application whitelists are lists of allowed applications (with all others excluded), and blacklists are lists of prohibited applications (with all others allowed). It is important to control the types of applications that users can install on their computers. Some application types can create support issues, and others can introduce malware. It is possible to use Windows Group Policy to restrict the installation of software on network computers, as illustrated in Figure 12-1. Using Windows Group Policy is only one option, and each organization should select a technology to control application installation and usage in the network.

Figure 12-1 Software Restrictions

Input Validation

Input validation is the process of checking all input for things such as proper format and proper length. In many cases, these validators use either the blacklisting of characters or patterns or the whitelisting of characters or patterns. Blacklisting looks for characters or patterns to block. It can prevent legitimate requests. Whitelisting looks for allowable characters or patterns and only allows those. The length of the input should also be checked and verified to prevent buffer overflows.
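The two validation styles can be contrasted in a short sketch. The patterns here are illustrative assumptions, not a complete defense:

```python
import re

# Whitelist validation: accept only the characters and length we expect
# (here: a simple username field); everything else is rejected.
USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{1,32}")

def validate_whitelist(value: str) -> bool:
    return USERNAME_PATTERN.fullmatch(value) is not None

# Blacklist validation: reject known-bad characters; everything else passes.
BLACKLISTED_CHARS = set("'\";-")

def validate_blacklist(value: str) -> bool:
    return len(value) <= 32 and not (set(value) & BLACKLISTED_CHARS)
```

Note the trade-off the text describes: the whitelist rejects anything unexpected (safer, but it can block legitimate requests that contain unusual characters), while the blacklist blocks only what it anticipates.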

FIREWALL

Chapter 11, “Analyzing Data as Part of Security Monitoring Activities,” discussed firewall logs, and Chapter 8, “Security Solutions for Infrastructure Management,” discussed the various architectures used in firewalls; at this point we need to look a little more closely at firewall types and their placement for effective operation. Firewalls can be software programs that are installed over server or client operating systems or appliances that have their own operating system. In either case, the job of a firewall is to inspect and control the type of traffic allowed.

NextGen Firewalls

Next-generation firewalls (NGFWs) are a category of devices that attempt to address the traffic inspection and application awareness shortcomings of a traditional stateful firewall, without hampering performance. Although unified threat management (UTM) devices also attempt to address these issues, they tend to use separate internal engines to perform individual security functions. This means a packet may be examined several times by different engines to determine whether it should be allowed into the network. NGFWs are application aware, which means they can distinguish between specific applications instead of allowing all traffic coming in via typical web ports. Moreover, they examine packets only once, during the deep packet inspection phase (which is required to detect malware and anomalies). The following are some of the features provided by NGFWs:

Nondisruptive inline configuration (which has little impact on network performance)
Standard first-generation firewall capabilities, such as network address translation (NAT), stateful protocol inspection (SPI), and virtual private networking
Integrated signature-based IPS engine
Application awareness, full stack visibility, and granular control
Ability to incorporate information from outside the firewall, such as directory-based policy, blacklists, and whitelists
Upgrade path to include future information feeds and security threats
SSL/TLS decryption to enable identifying undesirable encrypted applications

An NGFW can be placed inline or out-of-path. Out-of-path means that a gateway redirects traffic to the NGFW, while inline placement causes all traffic to flow through the device. Figure 12-2 shows the two placement options for NGFWs.

FIGURE 12-2 Placement of an NGFW

Table 12-2 lists the advantages and disadvantages of NGFWs.

Table 12-2 Advantages and Disadvantages of NGFWs

Advantages:
Provides enhanced security
Provides integration between security services
May save costs on appliances

Disadvantages:
Is more involved to manage than a standard firewall
Leads to reliance on a single vendor
Performance can be impacted

Host-Based Firewalls

A host-based firewall resides on a single host and is designed to protect that host only. Many operating systems today come with host-based (or personal) firewalls. Many commercial host-based firewalls are designed to focus attention on a particular type of traffic or to protect a certain application. On Linux-based systems, a common host-based firewall is iptables, which replaces a previous package called ipchains. It has the ability to accept or drop packets. You create firewall rules much as you create an access list on a router. The following is an example of a rule set:

iptables -A INPUT -i eth1 -s 192.168.0.0/24 -j DROP
iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth1 -s 172.16.0.0/12 -j DROP

This rule set blocks all incoming traffic sourced from the 192.168.0.0/24, 10.0.0.0/8, and 172.16.0.0/12 networks. All three are private IP address ranges. It is quite common to block incoming traffic from the Internet that has a private IP address as its source, as this usually indicates that IP spoofing is occurring. In general, the following IP address ranges should be blocked, as traffic sourced from these ranges is highly likely to be spoofed:

10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
224.0.0.0/4
240.0.0.0/5
127.0.0.0/8

The 224.0.0.0/4 range covers multicast traffic, and the 127.0.0.0/8 range covers traffic from a loopback IP address. You may also want to include the APIPA 169.254.0.0/16 range as well, as it is the range in which some computers give themselves IP addresses when the DHCP server cannot be reached. On a Microsoft computer, you can use Windows Defender Firewall to block these ranges. Table 12-3 lists the pros and cons of the various types of firewalls.
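Using the standard library's ipaddress module, the drop list above can be expressed as a quick source-address check. This is a sketch of the filtering logic, not a replacement for the actual firewall rules:

```python
import ipaddress

# Ranges from the list above that should never appear as an Internet source.
BLOCKED_RANGES = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
    "224.0.0.0/4", "240.0.0.0/5", "127.0.0.0/8", "169.254.0.0/16",
)]

def likely_spoofed(src_ip: str) -> bool:
    """Return True if the source address falls in a range that should be dropped."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in BLOCKED_RANGES)
```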

Table 12-3 Pros and Cons of Firewall Types

Packet filtering firewalls
Advantages: Best performance
Disadvantages: Cannot prevent IP spoofing, attacks that are specific to an application, attacks that depend on packet fragmentation, or attacks that take advantage of the TCP handshake

Circuit-level proxies
Advantages: Secure addresses from exposure; support a multiprotocol environment; allow for comprehensive logging
Disadvantages: Slight impact on performance; may require a client on the computer (SOCKS proxy); no application layer security

Application-level proxies
Advantages: Understand the details of the communication process at Layer 7 for the application
Disadvantages: Big impact on performance

Kernel proxy firewalls
Advantages: Inspect the packet at every layer of the OSI model; don't impact performance as application layer proxies do

Note Other firewalls and associated network architecture approaches were covered in Chapter 8.

INTRUSION PREVENTION SYSTEM (IPS) RULES

As you learned earlier, some IPSs can be rule-based. Chapter 3, “Vulnerability Management Activities,” and Chapter 11 covered these IPSs in more detail, and Chapter 11 also covered rule writing.

DATA LOSS PREVENTION (DLP)

Data loss prevention (DLP) software attempts to prevent data leakage. It does this by maintaining awareness of actions that can and cannot be taken with respect to a document. For example, DLP software might allow printing of a document but only at the company office. It might also disallow sending the document through e-mail. DLP software uses ingress and egress filters to identify sensitive data that is leaving the organization and can prevent such leakage. Another scenario might be the release of product plans that should be available only to the Sales group. You could set the following policy for that document:

It cannot be e-mailed to anyone other than Sales group members.
It cannot be printed.
It cannot be copied.

There are two locations where you can implement this policy:

Network DLP: Installed at network egress points near the perimeter, network DLP analyzes network traffic.
Endpoint DLP: Endpoint DLP runs on end-user workstations or servers in the organization.

You can use both precise and imprecise methods to determine what is sensitive:

Precise methods: These methods involve content registration and trigger almost zero false-positive incidents.

Imprecise methods: These methods can include keywords, lexicons, regular expressions, extended regular expressions, metadata tags, Bayesian analysis, and statistical analysis.

The value of a DLP system resides in the level of precision with which it can locate and prevent the leakage of sensitive data.
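A minimal sketch of the imprecise approach uses regular expressions. The patterns below are deliberately simple, invented examples; production DLP engines add checksums, lexicons, and statistical analysis on top of this:

```python
import re

# Illustrative sensitive-data patterns: a US SSN-like format and a
# 16-digit card-like format.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def find_sensitive(text: str) -> list:
    """Return the names of any sensitive-data patterns found in outbound text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

This kind of matching is exactly why imprecise methods generate false positives: any 3-2-4 digit string trips the SSN rule, sensitive or not.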

ENDPOINT DETECTION AND RESPONSE (EDR)

Endpoint detection and response (EDR) is a proactive endpoint security approach designed to supplement existing defenses. This advanced endpoint approach shifts security from a reactive threat approach to one that can detect and prevent threats before they reach the organization. It focuses on three essential elements for effective threat prevention: automation, adaptability, and continuous monitoring. The following are some examples of EDR products:

FireEye Endpoint Security
Carbon Black CB Response
Guidance Software EnCase Endpoint Security
Cybereason Total Enterprise Protection
Symantec Endpoint Protection
RSA NetWitness Endpoint

The advantage of EDR systems is that they provide continuous monitoring. The disadvantage is that the software’s use of resources could impact performance of the device.

NETWORK ACCESS CONTROL (NAC)

Network access control (NAC) is a service that goes beyond authentication of the user and includes examination of the state of the computer the user is introducing to the network when making a remote-access or VPN connection to the network. The Cisco world calls these services Network Admission Control (NAC), and the Microsoft world calls them Network Access Protection (NAP). Regardless of the term used, the goals of the features are the same: to examine all devices requesting network access for malware, missing security updates, and any other security issues the devices could potentially introduce to the network. Figure 12-3 shows the steps that occur in Microsoft NAP. The health state of the device requesting access is collected and sent to the Network Policy Server (NPS), where the state is compared to requirements. If requirements are met, access is granted.

FIGURE 12-3 NAC

The limitations of using NAC and NAP are as follows:

They work well for company-managed computers but less well for guests.
They tend to react only to known threats and not to new threats.
The return on investment is still unproven.
Some implementations involve confusing configuration.

Access decisions can be of the following types:

Time based: A user might be allowed to connect to the network only during specific times of day.
Rule based: A user might have his access controlled by a rule such as “all devices must have the latest antivirus patches installed.”
Role based: A user may derive her network access privileges from a role she has been assigned, typically through addition to a specific security group.
Location based: A user might have one set of access rights when connected from another office and another set when connected from the Internet.
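A time-based decision, for example, reduces to a simple window check. This is a sketch; the business-hours window is an assumed policy, not a NAC product feature:

```python
from datetime import time

# Assumed policy: network access is allowed only during business hours.
ALLOWED_START = time(8, 0)
ALLOWED_END = time(18, 0)

def time_based_decision(now: time) -> str:
    """Allow the connection only inside the configured time window."""
    return "allow" if ALLOWED_START <= now <= ALLOWED_END else "deny"
```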

Quarantine/Remediation

If you examine step 5 in the process shown in Figure 12-3, you see that a device that fails examination is placed in a restricted network until it can be remediated. A remediation server addresses the problems discovered on the device. It may remove the malware, install missing operating system updates, or update virus definitions. When the remediation process is complete, the device is granted full access to the network.

Agent-Based vs. Agentless NAC

NAC can be deployed with or without agents on devices. An agent is software used to control and interact with a device.

Agentless NAC is the easiest to deploy but offers less control and fewer inspection capabilities. Agent-based NAC can perform deep inspection and remediation at the expense of additional software on the endpoint. Both agent-based and agentless NAC can be used to mitigate the following issues:

Malware
Missing OS patches
Missing anti-malware updates
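The posture-check flow can be modeled in a few lines. This is a hypothetical sketch; the health attributes and the policy are invented for illustration:

```python
# Assumed admission policy: every listed health attribute must be satisfied.
POLICY = {
    "antivirus_current": True,
    "os_patched": True,
    "firewall_enabled": True,
}

def admission_decision(health_state: dict) -> str:
    """Grant access if the reported state meets policy; otherwise quarantine."""
    failures = [k for k, required in POLICY.items()
                if health_state.get(k) != required]
    return "grant" if not failures else "quarantine"
```

A "quarantine" result corresponds to placement on the restricted network for remediation, as described earlier.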

802.1X

Another form of network access control is 802.1X Extensible Authentication Protocol (EAP). 802.1X is a standard that defines a framework for centralized port-based authentication. It can be applied to both wireless and wired networks and uses three components:

Supplicant: The user or device requesting access to the network
Authenticator: The device through which the supplicant is attempting to access the network
Authentication server: The centralized device that performs authentication

The role of the authenticator can be performed by a wide variety of network access devices, including remote-access servers (both dial-up and VPN), switches, and wireless access points. The role of the authentication server can be performed by a Remote Authentication Dial-in User Service (RADIUS) or Terminal Access Controller Access Control System Plus (TACACS+) server. The authenticator requests credentials from the supplicant and, upon receipt of those credentials, relays them to the authentication server, where they are validated. Upon successful verification, the authenticator is notified to open the port for the supplicant to allow network access. Figure 12-4 illustrates this process.
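The relay sequence just described can be modeled as a toy exchange. This is illustrative only: the credential store and return strings are invented, and real 802.1X carries EAP messages rather than cleartext passwords:

```python
# Toy model of the 802.1X roles: the authenticator relays credentials to the
# authentication server and opens the port only on successful validation.
VALID_CREDENTIALS = {"supplicant-01": "s3cret"}  # stands in for RADIUS/TACACS+ data

def authentication_server(identity: str, secret: str) -> bool:
    """Validate the relayed credentials (RADIUS/TACACS+ role)."""
    return VALID_CREDENTIALS.get(identity) == secret

def authenticator(identity: str, secret: str) -> str:
    """Switch/AP role: relay, then open or keep the port closed."""
    return "port open" if authentication_server(identity, secret) else "port closed"
```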

Figure 12-4 802.1X Architecture

While RADIUS and TACACS+ perform the same roles, they have different characteristics. These differences must be taken into consideration when choosing a method. Keep in mind also that while RADIUS is a standard, TACACS+ is Cisco proprietary. Table 12-4 compares them.

Table 12-4 RADIUS vs. TACACS+

Transport Protocol
RADIUS: Uses UDP, which may result in faster response
TACACS+: Uses TCP, which offers more information for troubleshooting

Confidentiality
RADIUS: Encrypts only the password in the access request packet
TACACS+: Encrypts the entire body of the packet but leaves a standard TACACS+ header for troubleshooting

Authentication and Authorization
RADIUS: Combines authentication and authorization
TACACS+: Separates authentication, authorization, and accounting processes

Supported Layer 3 Protocols
RADIUS: Does not support Apple Remote Access protocol, NetBIOS Frame Protocol Control protocol, or X.25 PAD connections
TACACS+: Supports all protocols

Devices
RADIUS: Does not support securing the available commands on routers and switches
TACACS+: Supports securing the available commands on routers and switches

Traffic
RADIUS: Creates less traffic
TACACS+: Creates more traffic

Among the issues 802.1X port-based authentication can help mitigate are the following:

Network DoS attacks
Device spoofing (because it authenticates the user, not the device)

SINKHOLING

A sinkhole is a router designed to accept and analyze attack traffic. Sinkholes can be used to do the following:

Draw traffic away from a target
Monitor worm traffic
Monitor other malicious traffic

During an attack, a sinkhole router can be quickly configured to announce a route to the target’s IP address that leads to a network or an alternate device where the attack can be safely studied. Moreover, sinkholes can also be used to prevent a compromised host from communicating back to the attacker. Finally, they can be used to prevent a worm-infected system from infecting other systems. Sinkholes can be used to mitigate the following issues:

Worms
Compromised devices communicating with command and control (C&C) servers
External attacks targeted at a single device inside the network
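The C&C-blocking use can be illustrated with a DNS-style lookup sketch. The domain names and C&C list are made up, and 192.0.2.1 is from the TEST-NET-1 documentation range:

```python
SINKHOLE_IP = "192.0.2.1"
BLOCKED_DOMAINS = {"evil-c2.example", "botnet.example"}  # assumed C&C list

def resolve(domain: str, real_records: dict) -> str:
    """Answer blocked domains with the sinkhole so infected hosts can't phone home."""
    if domain in BLOCKED_DOMAINS:
        return SINKHOLE_IP
    return real_records.get(domain, "NXDOMAIN")
```

A compromised host that looks up its C&C domain now reaches the sinkhole device, where its traffic can be safely studied.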

MALWARE SIGNATURES

While placing malware in a sandbox or isolation area for study is a safe way of reverse engineering and eventually disarming the malware, the best defense is to identify and remove malware when it enters the network, before it infects the devices. To do this, network security devices such as SIEM, IPS, IDS, and firewall systems must be able to recognize the malware while it is still contained in network packets, before it reaches devices. This requires identifying a malware signature. This could be a filename, or it could be some series of characters that can be tied uniquely to the malware. You learned about signature-based IPS/IDS systems earlier. You may remember that these systems and rule-based systems both rely on rules that instruct the security device to be on the lookout for certain character strings in a packet.

Development/Rule Writing

One of the keys to successful signature matching, and therefore successful malware prevention, is proper rule writing, which is in the development realm. Just as automation is driving network technicians to learn basic development theory and rule writing, so is malware signature identification. Rule creation does not always rely on the name of the malicious file. It also can be based on behavior that is dangerous in and of itself. Examples of rules or behavior that can indicate that a system is infected by malware are as follows:

A system process that drops various malware executables (e.g., Dropper, a kind of Trojan that has been designed to “install” some sort of malware)
A system process that reaches out to random, and often foreign, IP addresses/domains
Repeated attempts to monitor or modify key system settings such as registry keys
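A byte-pattern signature match can be sketched as follows. The signature strings and names are invented examples, far simpler than real IDS/IPS rule languages such as Snort's:

```python
# Invented signatures: byte strings assumed to be unique to each sample.
SIGNATURES = {
    "Dropper.A": b"\x4d\x5aDROPPER_STAGE1",
    "Worm.X": b"INFECT_HOST",
}

def match_signatures(payload: bytes) -> list:
    """Return the names of any known-malware signatures found in a payload."""
    return [name for name, sig in SIGNATURES.items() if sig in payload]
```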

SANDBOXING

Chapter 11 briefly introduced sandboxing. You can use a sandbox to run a possibly malicious program in a safe environment so that it doesn’t infect the local system. By using sandboxing tools, you can execute malware executable files without allowing the files to interact with the local system. Some sandboxing tools also allow you to analyze the characteristics of an executable. This does not work with some malware, because it is specifically written to behave differently if it detects that it’s being executed in a sandbox. In many cases, sandboxing tools operate by sending a file to a special server that analyzes the file and sends you a report on it. Sometimes this is a free service, but in many instances it is not. Some examples of these services include the following:

Sandboxie
Akana
Binary Guard True Bare Metal
BitBlaze Malware Analysis Service
Comodo Automated Analysis System and Valkyrie
Deepviz Malware Analyzer
Detux Sandbox (Linux binaries)

Another option for studying malware is to set up a “sheep dip” computer. This is a system that has been isolated from the other systems and is used for analyzing suspect files and messages for malware. You can take measures such as the following on a sheep dip system:

Install port monitors to discover ports used by the malware.
Install file monitors to discover what changes may be made to files.
Install network monitors to identify what communications the malware may attempt.
Install one or more antivirus programs to perform malware analysis.

Often these sheep dip systems are combined with antivirus sensor systems to which malicious traffic is reflected for analysis. The safest way to perform reverse engineering and malware analysis is to prepare a test bed. Doing so involves the following steps:

Step 1. Install virtualization software on the host.
Step 2. Create a VM and install a guest operating system on the VM.
Step 3. Isolate the system from the network by ensuring that the NIC is set to “host-only” mode.
Step 4. Disable shared folders and enable guest isolation on the VM.
Step 5. Copy the malware to the guest operating system.

Also, you need isolated network services for the VM, such as DNS. It may also be beneficial to install multiple operating systems in both patched and unpatched configurations. Finally, you can make use of virtualization snapshots and reimaging tools to wipe and rebuild machines quickly. Once the test bed is set up, you also need to install a number of other tools to use on the isolated VM, including the following:

Imaging tools: You need these tools to take images for forensics and prosecution procedures. Examples include SafeBack Version 2.0 and Linux dd.
File/data analysis tools: You need these tools to perform static analysis of potential malware files. Examples include PeStudio and PEframe.
Registry/configuration tools: You need these tools to help identify infected settings in the registry and to identify the last-saved settings. Examples include Microsoft’s Sysinternals Autoruns and Silent Runners.vbs.
Sandbox tools: You need these tools for manual malware analysis in a safe environment.
Log analyzers: You need these tools to extract log files. Examples include AWStats and Apache Log Viewer.
Network capture tools: You need these tools to understand how the malware uses the network. Examples include Wireshark and Omnipeek.

While the use of virtual machines to investigate the effects of malware is quite common, you should know that some well-written malware can break out of a VM relatively easily, making this approach problematic.

PORT SECURITY

Port security applies to ports on a switch or wireless home router, and because it relies on monitoring the MAC addresses of the devices attached to the switch ports, it is considered to be Layer 2 security. While disabling any ports that are not in use is always a good idea, port security goes a step further and allows you to keep a port enabled for legitimate devices while preventing its use by illegitimate devices. You can apply two types of restrictions to a switch port:

Restrict the specific MAC addresses allowed to send on the port.
Restrict the total number of different MAC addresses allowed to send on the port.

By specifying which specific MAC addresses are allowed to send on a port, you can prevent unknown devices from connecting to the switch port. Port security is applied at the interface level. The interface must be configured as an access port, so first you ensure that it is by executing the following commands:

Switch(config)# int fa0/1

Switch(config-if)# switchport mode access

In order for port security to function, you must enable the feature. To enable it on a switch port, use the following command at the interface configuration prompt:

Switch(config-if)# switchport port-security

Limiting MAC Addresses

Now you need to define the maximum number of MAC addresses allowed on the port. In many cases today, IP phones and computers share a switch port (the computer plugs into the phone, and the phone plugs into the switch), so here you want to allow a maximum of two:

Switch(config-if)# switchport port-security maximum 2

Next, you define the two allowed MAC addresses, in this case, aaaa.aaaa.aaaa and bbbb.bbbb.bbbb:

Switch(config-if)# switchport port-security mac-address aaaa.aaaa.aaaa
Switch(config-if)# switchport port-security mac-address bbbb.bbbb.bbbb

Finally, you set an action for the switch to take if there is a violation. By default, the action is to shut down the port. You can also set it to restrict, which doesn’t shut down the port but prevents the violating device from sending any data. In this case, set it to restrict:

Switch(config-if)# switchport port-security violation restrict

Now you have secured the port to allow only the two MAC addresses required by the legitimate user: one for his phone and the other for his computer. Now you just need to gather all the MAC addresses for all the phones and computers, and you can lock down all the ports. Boy, that’s a lot of work! In the next section, you’ll see that there is an easier way.

Implementing Sticky MAC

Sticky MAC is a feature that allows a switch to learn the MAC addresses of the devices currently connected to the port and convert them to secure MAC addresses (the only MAC addresses allowed to send on the port). All you need to do is specify the keyword sticky in the command where you designate the MAC addresses, and you’re done. You still define the maximum number, and Sticky MAC converts up to that number of addresses to secure MAC addresses. Therefore, you can secure all ports by specifying only the number allowed on each port and the sticky keyword in the switchport port-security mac-address command. To secure a single port, execute the following commands:

Switch(config-if)# switchport port-security
Switch(config-if)# switchport port-security maximum 2
Switch(config-if)# switchport port-security mac-address sticky
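The behavior just configured (up to two sticky-learned secure addresses, with additional senders restricted) can be simulated in software. This is a simplified model, not actual switch code:

```python
class SecurePort:
    """Simplified model of port security with sticky learning and 'restrict'."""

    def __init__(self, maximum: int = 2):
        self.maximum = maximum
        self.secure_macs = set()

    def admit(self, mac: str) -> bool:
        if mac in self.secure_macs:
            return True
        if len(self.secure_macs) < self.maximum:
            self.secure_macs.add(mac)  # sticky: learn and secure the MAC
            return True
        return False                   # violation: frame is restricted
```

The first two MAC addresses seen become the secure addresses; any third address is refused while the original devices keep working.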

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 12-5 lists a reference of these key topics and the page numbers on which each is found.

Table 12-5 Key Topics in Chapter 12

Key Topic Element / Description / Page Number

Figure 12-1: Software restrictions (382)
Figure 12-2: Placement of an NGFW (384)
Table 12-2: Advantages and disadvantages of NGFWs (384)
Table 12-3: Pros and cons of firewall types (385)
Figure 12-3: NAC (388)
Bulleted list: Access decision types (388)
Figure 12-4: 802.1X architecture (390)
Table 12-4: RADIUS vs. TACACS+ (390)
Section: Sinkholing (391)
Step list: Preparing a test bed (393)
Bulleted list: Tools for sandboxing and reverse engineering (393)
Section: Port security (394)

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

permissions
rights
whitelisting
blacklisting
firewalls
next-generation firewalls (NGFWs)
host-based firewall
data loss prevention (DLP)
endpoint detection and response (EDR)
network access control (NAC)
802.1X
supplicant
authenticator
authentication server
sinkhole
port security
sticky MAC

REVIEW QUESTIONS

1. Granting someone the ability to reset passwords is the assignment of a(n) ________.
2. List at least one disadvantage of packet filtering firewalls.

3. Match the following terms with their definitions.

Terms:
Screened subnet
NGFW
Host-based firewall
iptables

Definitions:
Resides on a single host and is designed to protect that host only
Linux host-based firewall
A category of devices that attempt to address traffic inspection and application awareness shortcomings of a traditional stateful firewall, without hampering the performance
Architecture where two firewalls are used, and traffic must be inspected at both firewalls before it can enter the internal network

4. List at least two advantages of circuit-level proxies.
5. ___________________ is installed at network egress points near the perimeter, to prevent data exfiltration.
6. Match the following terms with their definitions.

Terms

Definitions

802.1X

Microsoft’s name for NAC services

Network Access Protection (NAP)

NAC that can perform deep inspection and remediation at the expense of additional software on the endpoint

Agent-based

Type of rule where a user might have one set of access rights when connected from another office and another set when connected from the Internet

Location-based

Defines a framework for centralized port-based authentication

7. List at least two disadvantages of RADIUS. 8. _______________ is a system that has been isolated from the other systems and is used for analyzing suspect files and messages for malware. 9. Match the following terms with their definitions.

Terms

Definitions

Imaging tools

Used to perform static analysis of potential malware files

Registry/configuration tools

Used to take images for forensics and prosecution procedures

File/data analysis tools

Used to understand how the malware uses the network

Packet capture tools

Used to help identify infected settings in the registry and to identify the last-saved settings

10. List at least two measures that should be taken with sheep dip systems.

Chapter 13

The Importance of Proactive Threat Hunting This chapter covers the following topics related to Objective 3.3 (Explain the importance of proactive threat hunting) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam: Establishing a hypothesis: Discusses the importance of this first step in threat hunting. Profiling threat actors and activities: Covers the process and its application. Threat hunting tactics: Describes hunting techniques, including executable process analysis. Reducing the attack surface area: Identifies what constitutes the attack surface. Bundling critical assets: Discusses the reasoning behind this technique. Attack vectors: Defines various attack vectors. Integrated intelligence: Describes a technology that addresses the need for shared intelligence. Improving detection capabilities: Identifies methods for improving detection.

Threat hunting is a security approach that places emphasis on actively searching for threats rather than sitting back and waiting to react. It is sometimes referred to as offensive in nature rather than defensive. This chapter explores threat hunting and details what it involves.

“DO I KNOW THIS ALREADY?” QUIZ The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these eight self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 13-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A. Table 13-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section

Question

Establishing a Hypothesis

1

Profiling Threat Actors and Activities

2

Threat Hunting Tactics

3

Reducing the Attack Surface Area

4

Bundling Critical Assets

5

Attack Vectors

6

Integrated Intelligence

7

Improving Detection Capabilities

8

1. Which of the following is the first step in the scientific method? 1. Ask a question. 2. Conduct an experiment. 3. Make a conclusion. 4. Establish a hypothesis.

2. The U.S. Federal Bureau of Investigation (FBI) has identified all but which of the following categories of threat actors? 1. Hacktivists 2. Organized crime 3. State sponsors 4. Terrorist groups

3. Which of the following might identify a device that has been compromised with malware? 1. Executable process analysis 2. Regression analysis 3. Risk management 4. Polyinstantiation

4. Which of the following allows you to prevent any changes to the device configuration, even by users who formerly had the right to configure the device? 1. Configuration lockdown 2. System hardening 3. NAC 4. DNSSEC

5. Which of the following is a measure of how freely data can be handled? 1. Transparency 2. Sensitivity 3. Value 4. Quality

6. Which metric included in the CVSS Attack Vector metric group means that the attacker can cause the vulnerability from any network? 1. B 2. N 3. L 4. A

7. Which of the following focuses on merging cybersecurity and physical security to aid governments in dealing with emerging threats? 1. OWASP 2. NIST 3. IIC 4. PDA

8. In which step of Deming’s Plan–Do–Check–Act cycle are the results of the implementation analyzed to determine whether it made a difference? 1. Plan 2. Do 3. Check 4. Act

FOUNDATION TOPICS ESTABLISHING A HYPOTHESIS The first phase of proactive threat hunting is to establish a hypothesis about the aims and nature of a potential attack, similar to establishing a hypothesis when following the scientific method, shown in Figure 13-1. Whether or not security incidents are occurring at the current time, security professionals must anticipate attacks and establish a hypothesis regarding the attack's aims and method as soon as possible. As in the scientific method, making an educated guess about the aims and nature of an attack is the first step. You then conduct experiments (or gather more network data) to prove or disprove the hypothesis. If the hypothesis is disproved, the process starts again with a new one.

Figure 13-1 Scientific Method

For example, if an attacker is probing your network for unknown reasons, you might follow the method in this way: 1. Ask a question: Why is the attacker doing this? What is his aim? 2. State a hypothesis: He is trying to perform a port scan. 3. Conduct an experiment: Monitor and capture the traffic he sends to the network. 4. Analyze the results: Look for packets that have been crafted by the hacker, as opposed to packets that result from the normal TCP three-way handshake. 5. Make a conclusion: These packet types are not present; therefore, his intent is not to port scan.

At this point another hypothesis will be suggested and the process begins again.
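The experiment step can be sketched in code. The following is a minimal illustration, not a real capture tool: it assumes packets have already been parsed into dictionaries with hypothetical 'src', 'dst_port', and 'flags' fields, and it flags any source that sends bare SYNs to many ports without completing the three-way handshake.

```python
# Sketch: test the "port scan" hypothesis against captured traffic.
# Packet dicts and field names are illustrative, not from any specific
# capture library.

def syn_scan_suspected(packets, min_ports=20):
    """Return True if any source sent bare SYNs to many distinct ports
    without completing the three-way handshake (SYN -> SYN/ACK -> ACK)."""
    syn_ports = {}   # source -> set of ports probed with a bare SYN
    acked = set()    # (source, port) pairs that completed the handshake
    for p in packets:
        key = (p['src'], p['dst_port'])
        if p['flags'] == 'S':            # bare SYN, no ACK bit set
            syn_ports.setdefault(p['src'], set()).add(p['dst_port'])
        elif p['flags'] == 'A':          # final ACK of the handshake
            acked.add(key)
    for src, ports in syn_ports.items():
        unanswered = {pt for pt in ports if (src, pt) not in acked}
        if len(unanswered) >= min_ports:
            return True
    return False
```

If the function returns False, the port-scan hypothesis is disproved and a new one is needed.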

PROFILING THREAT ACTORS AND ACTIVITIES A threat is carried out by a threat actor. For example, an attacker who takes advantage of an inappropriate or absent ACL is a threat actor. Keep in mind, though, that threat actors can discover and/or exploit vulnerabilities. Not all threat actors will actually exploit an identified vulnerability. While you learned about basic threat actors in Chapter 1, “The Importance of Threat Data and Intelligence,” the U.S. Federal Bureau of Investigation (FBI) has identified three categories of threat actors: Organized crime groups primarily threatening the financial services sector and expanding the scope of their attacks State sponsors or advanced persistent threats (APTs), usually foreign governments, interested in pilfering data, including intellectual property and research and development data from major manufacturers, government agencies, and defense contractors Terrorist groups that want to impact countries by using the Internet and other networks to disrupt or harm the viability of a society by damaging its critical infrastructure

While there are other, less organized groups out there, law enforcement considers these three groups to be the primary threat actors. However, organizations should not totally disregard threat actors that fall outside these three categories. Lone actors or smaller groups that use hacking to discover and exploit vulnerabilities can cause damage just like the larger, more organized groups. Hacker and cracker are two terms that are often used interchangeably in the media but do not actually have the same meaning. Hackers are individuals who attempt to break into secure systems to obtain knowledge about the systems, and possibly to use that knowledge to carry out pranks or commit crimes. Crackers, on the other hand, are individuals who break into secure systems specifically to use the knowledge gained for nefarious purposes. Hacktivists are a newer group: activists for a cause, such as animal rights, who use hacking to get their message out and to affect the businesses they feel are detrimental to their cause. In the security world, the terms white hat, gray hat, and black hat are more easily understood and less often confused than hacker and cracker. A white hat does not have any malicious intent. A black hat has malicious intent. A gray hat is somewhere between the other two; a gray hat may, for example, break into a system, notify the administrator of the security hole, and offer to fix the security issues for a fee. Threat actors use a variety of techniques to gather the information required to gain a foothold.

THREAT HUNTING TACTICS

Security analysts use various techniques in the process of anticipating and identifying threats. Some of these methods revolve around network surveillance and others involve examining the behaviors of individual systems. Hunt Teaming Hunt teaming is a new approach to security that is offensive in nature rather than defensive, which has been the common approach of security teams in the past. Hunt teams work together to detect, identify, and understand advanced and determined threat actors. Hunt teaming is covered in Chapter 8, “Security Solutions for Infrastructure Management.” Threat Model A threat model is a conceptual design that attempts to provide a framework on which to implement security efforts. Many models have been created. Let’s say, for example, that you have an online banking application and need to assess the points at which the application faces threats. Figure 13-2 shows how a threat model in the form of a data flow diagram might be created using the Open Web Application Security Project (OWASP) approach to identify where the trust boundaries are located. Threat modeling tools go beyond these simple data flow diagrams. The following are some recent tools:

Threat Modeling Tool (formerly SDL Threat Modeling Tool) identifies threats based on the STRIDE threat classification scheme. ThreatModeler identifies threats based on a customizable comprehensive threat library and is intended for collaborative use across all organizational stakeholders. IriusRisk offers both community and commercial versions of a tool that focuses on the creation and maintenance of a live threat

model through the entire software development life cycle (SDLC). It connects with several different tools to empower automation. securiCAD focuses on threat modeling of IT infrastructures using a computer-aided design (CAD) approach where assets are automatically or manually placed on a drawing pane. SD Elements is a software security requirements management platform that includes automated threat modeling capabilities.

Figure 13-2 OWASP Threat Model Executable Process Analysis When the processor is very busy with very little or nothing running to generate the activity, it could be a sign that the processor is working on behalf of malicious software. This is one of the key reasons any compromise is typically accompanied by a drop in performance. Executable process analysis allows you to determine this. While Task Manager in Windows is designed to help with this, it has some limitations. For one, when you are attempting to use it, you are typically already in a

resource crunch, and it takes a bit to open. Then when it does open, the CPU has settled back down, and you have no way of knowing what caused it. By using Task Manager, you can determine what process is causing a bottleneck at the CPU. For example, Figure 13-3 shows that in Task Manager, you can click the Processes tab and then click the CPU column to sort the processes with the top CPU users at the top. In Figure 13-3, the top user is Task Manager, which makes sense since it was just opened.

Figure 13-3 Task Manager A better tool is Process Explorer, part of the free Sysinternals suite available at https://docs.microsoft.com/sysinternals/. Process Explorer enables you to see the top CPU offender in the notification area without requiring you to open Task Manager. Moreover, Process Explorer enables you to look at the graph that appears in Task Manager and identify what caused spikes in the past, which is not possible with Task Manager alone. In Figure 13-4, you can see that Process Explorer breaks down each process into its subprocesses.

An example of using Task Manager for threat hunting is to proactively look at times and dates when processor usage is high during times when system usage is typically low, indicating a malicious process at work.
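That proactive tactic can be sketched as a simple check over collected usage samples. The (hour, CPU percent) sample format and the thresholds below are illustrative assumptions, not the output of any particular monitoring tool:

```python
# Sketch: flag CPU samples that are high during hours when usage is
# normally low (e.g., overnight), a possible sign of a malicious
# process at work.

def off_hours_spikes(samples, quiet_hours=range(0, 6), threshold=80):
    """Return (hour, cpu_percent) samples where CPU met or exceeded
    the threshold during quiet hours."""
    return [(h, cpu) for h, cpu in samples
            if h in quiet_hours and cpu >= threshold]
```

Any samples returned are candidates for deeper executable process analysis.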

Figure 13-4 Process Explorer Memory Consumption Another key indicator of a compromised host is increased memory consumption. Many times it is an indication that additional programs have been loaded into RAM so they can be processed. Then once they are loaded, they use RAM in the process of executing their tasks, whatever they may be. You can monitor memory consumption by using the same approach you use for CPU consumption. If memory usage cannot be accounted for, you should investigate it. (Review what you learned about buffer overflows, which are attacks that may display symptoms of increased memory consumption.)
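A hedged sketch of that investigation step: compare per-process memory use against a recorded baseline and surface growth that cannot be accounted for. Process names and the growth threshold are illustrative:

```python
# Sketch: report processes whose memory use is well beyond baseline,
# plus any process that was not present when the baseline was taken.

def unexplained_memory_growth(baseline_mb, current_mb, growth_factor=1.5):
    """Return sorted names of suspect processes."""
    suspects = []
    for name, used in current_mb.items():
        base = baseline_mb.get(name)
        if base is None or used > base * growth_factor:
            suspects.append(name)
    return sorted(suspects)
```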

REDUCING THE ATTACK SURFACE AREA Reducing the attack surface area means limiting the features and functions that are available to an attacker. For example, if I lock all doors to the facility with the exception of one, I have reduced the attack surface. Another term for reducing the attack surface area is system hardening because it involves ensuring that all systems have been hardened to the extent that is possible and still provide functionality. System Hardening Another of the ongoing goals of operations security is to ensure that all systems have been hardened to the extent that is possible and still provide functionality. System hardening can be accomplished both on physical and on logical bases. From a logical perspective:

Remove unnecessary applications. Disable unnecessary services. Block unrequired ports. Tightly control the connecting of external storage devices and media, if allowed at all.
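The first few logical hardening steps can be supported by a simple audit. This is a minimal sketch; the service names and the allowlist are hypothetical:

```python
# Sketch: compare running services against an approved allowlist;
# anything not approved is a candidate to disable during hardening.

def services_to_disable(running, allowlist):
    """Return running services that are not on the approved list."""
    return sorted(set(running) - set(allowlist))
```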

System hardening is also done at the physical layer. Physical security was covered in Chapter 7, but some examples include Fences around the facility Locks on the doors Disabled USB ports Display filters Clean desk policy

Configuration Lockdown

Configuration lockdown (sometimes also called system lockdown) is a setting that can be implemented on devices including servers, routers, switches, firewalls, and virtual hosts. You set it on a device after that device is correctly configured, and it prevents any changes to the configuration, even by users who formerly had the right to configure the device. This setting helps support change control. Full tests for functionality of all services and applications should be performed prior to implementing this setting. Many products that provide this functionality offer a test mode, in which you can log any problems the current configuration causes without allowing the problems to completely manifest on the network. This allows you to identify and correct any problems prior to implementing full lockdown.
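A minimal sketch of how lockdown with a test mode might behave, using a hypothetical DeviceConfig class (real products implement this in firmware or management software):

```python
# Sketch: once locked, every change is refused regardless of who
# requests it; test mode logs the attempted change instead of applying it.

class DeviceConfig:
    def __init__(self):
        self.settings = {}
        self.locked = False
        self.test_log = []

    def set(self, key, value, test_mode=False):
        if self.locked:
            if test_mode:
                self.test_log.append((key, value))  # record, don't apply
            return False
        self.settings[key] = value
        return True
```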

BUNDLING CRITICAL ASSETS While organizations should strive to protect all assets, in the cybersecurity world we tend to focus on what is at risk in the cyber world, which is our data. Bundling these critical digital assets helps to organize them so that security controls can be applied more cleanly with fewer possible human errors. Before bundling can be done, data must be classified. Data classification is covered in Chapter 6. Let’s talk about classification levels. Commercial Business Classifications Commercial businesses usually classify data using four main classification levels, listed here from highest sensitivity level to lowest:

1. Confidential 2. Private

3. Sensitive 4. Public

Data that is confidential includes trade secrets, intellectual property, application programming code, and other data that could seriously affect the organization if unauthorized disclosure occurred. Data at this level would only be available to personnel in the organization whose work relates to the data’s subject. Access to confidential data usually requires authorization for each access. In the United States, confidential data is exempt from disclosure under the Freedom of Information Act. In most cases, the only way for external entities to have authorized access to confidential data is as follows: After signing a confidentiality agreement When complying with a court order As part of a government project or contract procurement agreement

Data that is private includes any information related to personnel, including human resources records, medical records, and salary information, that is used only within the organization. Data that is sensitive includes organizational financial information and requires extra measures to ensure its CIA and accuracy. Public data is data whose disclosure would not cause a negative impact on the organization. Military and Government Classifications Military and government entities usually classify data using five main classification levels, listed here from highest sensitivity level to lowest:

1. Top secret: Data that is top secret includes weapon blueprints, technology specifications, spy satellite information, and other

military information that could gravely damage national security if disclosed. 2. Secret: Data that is secret includes deployment plans, missile placement, and other information that could seriously damage national security if disclosed. 3. Confidential: Data that is confidential includes patents, trade secrets, and other information that could seriously affect the government if unauthorized disclosure occurred. 4. Sensitive but unclassified: Data that is sensitive but unclassified includes medical or other personal data that might not cause serious damage to national security but could cause citizens to question the reputation of the government. 5. Unclassified: Military and government information that does not fall into any of the other four categories is considered unclassified and usually has to be granted to the public based on the Freedom of Information Act.
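Bundling critical assets by classification can be sketched as ranking labels and letting a bundle inherit its most restrictive member. The orderings follow the commercial and military/government levels above; the function itself is illustrative:

```python
# Sketch: rank classification labels so a bundle of assets inherits the
# most restrictive classification of anything it contains.

COMMERCIAL = ['public', 'sensitive', 'private', 'confidential']   # low -> high
MILITARY = ['unclassified', 'sensitive but unclassified',
            'confidential', 'secret', 'top secret']               # low -> high

def bundle_classification(labels, scheme=COMMERCIAL):
    """Return the highest (most restrictive) classification in the bundle."""
    return max(labels, key=scheme.index)
```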

Distribution of Critical Assets One strategy that can help support resiliency is to ensure that critical assets are not all located in the same physical location. Collocating critical assets leaves your organization open to the kind of nightmare that occurred in 2017 at the Atlanta airport. When a fire took out the main and backup power systems (which were located together), the busiest airport in the world went dark for over 12 hours. Distribution of critical assets certainly enhances resilience.

ATTACK VECTORS An attack vector is a segment of the communication path that an attack uses to access a vulnerability. Each attack vector can be thought of as comprising a source of malicious content, a potentially vulnerable processor of that malicious content, and the nature of the malicious content itself. Recall from Chapter 2, “Utilizing Threat Intelligence to Support Organizational Security,” that the Common Vulnerability

Scoring System (CVSS) has as part of its Base metric group a metric called Attack Vector (AV). AV describes how the attacker would exploit the vulnerability and has four possible values: L: Stands for Local and means that the attacker must have physical or logical access to the affected system. A: Stands for Adjacent network and means that the attacker must be on the local network. N: Stands for Network and means that the attacker can cause the vulnerability from any network. P: Stands for Physical and requires the attacker to physically touch or manipulate the vulnerable component.

Analysts can use the accumulated CVSS information regarding attacks to match current characteristics of indicators of compromise to common attacks.
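As a sketch of that matching, the AV metric can be pulled from a CVSS v3 vector string. The vector-string layout follows the CVSS v3 specification; the helper and its wording are illustrative:

```python
# Sketch: extract the Attack Vector (AV) metric from a CVSS v3 vector
# string such as "CVSS:3.1/AV:N/AC:L/...".

AV_MEANINGS = {
    'N': 'Network: exploitable from any network',
    'A': 'Adjacent: attacker must be on the local network',
    'L': 'Local: attacker needs local or physical access to the system',
    'P': 'Physical: attacker must physically touch the component',
}

def attack_vector(vector_string):
    """Return (code, description) for the AV metric."""
    for metric in vector_string.split('/'):
        if metric.startswith('AV:'):
            code = metric.split(':')[1]
            return code, AV_MEANINGS[code]
    raise ValueError('no AV metric found')
```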

INTEGRATED INTELLIGENCE Integrated intelligence refers to the consideration and analysis of intelligence data from a perspective that combines multiple data sources and attempts to make inferences based on this data integration. Many vendors of security software and appliances often tout the intelligence integration capabilities of their products. SIEM systems are a good example, as described in Chapter 11, “Analyzing Data as Part of Security Monitoring Activities.” The Integrated Intelligence Center (IIC) is a unit at the Center for Internet Security (CIS) that focuses on merging cybersecurity and physical security to aid governments in dealing with emerging threats. IIC attempts to create predictive models using the multiple data sources at its disposal.

IMPROVING DETECTION CAPABILITIES

Detection of events and incidents as they occur is critical. Organizations should be constantly trying to improve their detection capabilities. Continuous Improvement Security professionals can never just sit back, relax, and enjoy the ride. Security needs are always changing because the “bad guys” never take a day off. It is therefore vital that security professionals continuously work to improve their organization’s security. Tied into this is the need to improve the quality of the security controls currently implemented. Quality improvement commonly uses a four-step quality model, known as Deming’s Plan–Do–Check–Act cycle, the steps for which are as follows:

1. Plan: Identify an area for improvement and make a formal plan to implement it. 2. Do: Implement the plan on a small scale. 3. Check: Analyze the results of the implementation to determine whether it made a difference. 4. Act: If the implementation made a positive change, implement it on a wider scale. Continuously analyze the results.
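The Check step depends on metrics. A minimal sketch, using illustrative monthly incident counts as the metric being compared before and after a change:

```python
# Sketch of the Check step: did the implementation make a difference?
# Returns the next PDCA step: 'act' (roll out wider) or 'plan' (retry).

def check_step(before, after):
    """Compare average incident counts before and after the change."""
    improved = sum(after) / len(after) < sum(before) / len(before)
    return 'act' if improved else 'plan'
```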

This can’t be done without establishing some metrics to determine how successful you are now. Continuous Monitoring Any logging and monitoring activities should be part of an organizational continuous monitoring program. The continuous monitoring program must be designed to meet the needs of the organization and implemented correctly to ensure that the organization’s critical infrastructure is guarded. Organizations may want to look into Continuous Monitoring as a Service (CMaaS) solutions deployed by cloud service providers.

EXAM PREPARATION TASKS As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 13-2 lists a reference of these key topics and the page numbers on which each is found.

Table 13-2 Key Topics in Chapter 13

Key Topic Element

Description

Page Number

Figure 13-1

Scientific method

404

Figure 13-2

OWASP threat model

406

Bulleted list

Threat modeling tools

407

Bulleted list

System hardening

410

Numbered list

Commercial business classifications

411

Numbered list

Military and government classifications

412

Numbered list

Deming’s Plan–Do–Check–Act

413

cycle

DEFINE KEY TERMS Define the following key terms from this chapter and check your answers in the glossary: threat actors, hacker, cracker, hunt teaming, threat model, executable process analysis, Process Explorer, system hardening, configuration lockdown, sensitivity, criticality, attack vector, integrated intelligence

REVIEW QUESTIONS 1. Place the following steps of the scientific method in order.

Step

Analyze the results

Conduct an experiment

Make a conclusion

Ask a question

State a hypothesis

2. List and describe at least one threat modeling tool. 3. ____________________ allows you to determine when a CPU is struggling with malware. 4. Match the following terms with their definitions.

Terms

Definitions

System hardening

Prevents any changes to the configuration, even by users who formerly had the right to configure the device

Configuration lockdown

Critical to all systems to protect the confidentiality, integrity, and availability (CIA) of data

Data classification policy

A measure of how freely data can be handled

Sensitivity

Ensures that all systems have been secured to the fullest extent possible and still provide functionality

Criticality

A measure of the importance of the data

5. List the military/government data classification levels in order. 6. A(n) _____________________ is a segment of the communication path that an attack uses to access a vulnerability. 7. Match the following terms with their definitions.

Terms

Definitions

Intelligence integration

Solution deployed by cloud service providers for improvement

CMaaS

Foreign government interested in pilfering data, including intellectual property

Hunt teaming

The consideration and analysis of intelligence data from a perspective that combines multiple data sources and attempts to make inferences based on this data integration

State sponsor

New approach to security that is offensive in nature rather than defensive, which has been common for security teams in the past

8. List at least two hardening techniques. 9. Data should be classified based on its _____________ to the organization and its ____________ to disclosure. 10. Match the following terms with their definitions.

Terms

Definitions

Process Explorer

A proposed explanation of something

Hypothesis

Actor with malicious intent

Black hat

Enables you to look at the graph that appears in Task Manager and identify what caused spikes in the past, which is not possible with Task Manager alone

Threat model

A conceptual design that attempts to provide a framework on which to implement security efforts

Chapter 14

Automation Concepts and Technologies This chapter covers the following topics related to Objective 3.4 (Compare and contrast automation concepts and technologies) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam: Workflow orchestration: Describes the process of Security Orchestration, Automation, and Response (SOAR) and its role in security. Scripting: Reviews the scripting process and its role in automation. Application programming interface (API) integration: Describes how this process reduces access to an application’s internal functions through an API. Automated malware signature creation: Identifies an automated process of malware identification. Data enrichment: Discusses processes used to enhance, refine, or otherwise improve raw data. Threat feed combination: Defines a process for making use of data from multiple intelligence feeds. Machine learning: Describes the role machine learning plays in automated security. Use of automation protocols and standards: Identifies various protocols and standards, including Security Content Automation Protocol (SCAP), and their application. Continuous integration: Covers the process of ongoing integration of software components during development.

Continuous deployment/delivery: Covers the process of ongoing review and upgrade of software.

Traditionally, network operations and threat intelligence activities were performed manually by technicians. Increasingly in today’s environments, these processes are being automated through the use of scripting and other automation tools. This chapter explores how workflows can be automated.

“DO I KNOW THIS ALREADY?” QUIZ The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these ten self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 14-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A. Table 14-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section

Questio n

Workflow Orchestration

1

Scripting

2

Application Programming Interface (API) Integration

3

Automated Malware Signature Creation

4

Data Enrichment

5

Threat Feed Combination

6

Machine Learning

7

Use of Automation Protocols and Standards

8

Continuous Integration

9

Continuous Deployment/Delivery

10

1. Which of the following enables you to automate the response to a security issue? (Choose the best answer.) 1. Orchestration 2. Piping 3. Scripting 4. Virtualization

2. Which scripting language is used to work in the Linux interface? 1. Python 2. Bash 3. Ruby 4. Perl

3. Which of the following is used to provide integration between your website and a payment gateway? 1. Perl 2. Orchestration 3. API

4. Scripting

4. Which of the following is an additional method of identifying malware? 1. DHCP snooping 2. DAI 3. Automated malware signature creation 4. Piping

5. When you receive bulk e-mail from a vendor and it refers to you by first name, what technique is in use? 1. Scripting 2. Orchestration 3. Heuristics 4. Data enrichment

6. Threat feeds inform the recipient about all but which of the following? 1. Presence of malware on the recipient 2. Suspicious domains 3. Lists of known malware hashes 4. IP addresses associated with malicious activity

7. Which of the following is an example of machine learning? 1. NAC 2. AEG 3. EDR 4. DLP

8. Which of the following is a standard that the security automation community uses to enumerate software flaws

and configuration issues? 1. NAC 2. DAC 3. SCAP 4. DLP

9. Which of the following is a software development practice whereby the work of multiple individuals is combined a number of times a day? 1. Sinkholing 2. Continuous integration 3. Aggregation 4. Inference

10. Which of the following is considered the next generation of DevOps and attempts to make sure that software developers can release new product changes to customers quickly in a sustainable way? 1. Agile 2. DevSecOps 3. Continuous deployment/delivery 4. Scrum

FOUNDATION TOPICS WORKFLOW ORCHESTRATION Workflow orchestration is the sequencing of events based on certain parameters by using scripting and scripting tools. Over time orchestration has been increasingly used to automate processes that were formerly carried out manually by humans.

In virtualization, it is quite common to use orchestration. For example, in the VMware world, technicians can create what are called vApps, groups of virtual machines that are managed and orchestrated as a unit to provide a service to users. Using orchestration tools, you can set one device to always boot before another device. For example, in a Windows Active Directory environment, you may need the domain controller (DC) to boot up before the database server so that the database server can properly authenticate to the DC and function correctly. Figure 14-1 shows another, more complex automated workflow orchestration using VMware vCloud Automation Center (vCAC).

Figure 14-1 Workflow Orchestration The workflow is sequenced to occur in the following fashion: 1. A request comes in to write to the disk. 2. The disk space is checked. 3. Insufficient space is found. 4. A change request is generated for more space. 5. A disk is added. 6. The configuration database is updated. 7. The user is notified.
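The sequence above can be sketched as a single orchestrated function. The function name, the 100 MB increment, and the in-memory audit log are illustrative; a real orchestrator such as vCAC drives actual infrastructure APIs:

```python
# Sketch of the disk-expansion workflow: check space, and if it is
# insufficient, expand the disk, update the configuration database,
# and notify the user.

def handle_write_request(disk, needed_mb, audit_log):
    if disk['free_mb'] >= needed_mb:
        audit_log.append('write completed')
        return disk
    audit_log.append('insufficient space: change request generated')
    disk['free_mb'] += 100                    # a disk is added
    audit_log.append('disk added')
    audit_log.append('configuration database updated')
    audit_log.append('user notified')
    return disk
```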

While this is one use of workflow orchestration, it can also be used in the security world. Examples include Dynamic incident response plans that adapt in real time Automated workflows to empower analysts and enable faster response

SCRIPTING Scripting languages and scripting tools are used to automate a process. Common scripting languages include

bash: Used to work in the Linux command-line interface Node.js: Framework for writing network applications in JavaScript Ruby: Great for web development Python: Supports procedure-oriented programming and object-oriented programming Perl: Found on most Linux servers; helps with text manipulation tasks Windows PowerShell: Found on all Windows servers

Scripting tools that require less knowledge of the actual syntax of the language can also be used, such as

Puppet
Chef
Ansible

For example, Figure 14-2 shows Puppet being used to automate the update of Apache servers.

FIGURE 14-2 Puppet Orchestration

APPLICATION PROGRAMMING INTERFACE (API) INTEGRATION

As a review, an API is a set of clearly defined methods of communication between various software components. As such, you should think of an API as a connection point that requires security consideration. For example, an API between your e-commerce site and a payment gateway must be secure. So, what is API integration and why is it important? API integration means that the applications on either end of the API are synchronized and protect the integrity of the information that passes across the API. It also enables the proper updating and versioning required in many environments. The term also describes the relationship between a website and an API when the API is integrated into the website.
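One common way to protect the integrity of information passing across an API is to sign each message with a shared secret. The sketch below is a hedged illustration of that idea, not a real payment-gateway API: the header name and secret are invented, and real integrations would also use TLS and proper key management.

```python
# Hypothetical sketch: HMAC-SHA256 signing of an API payload so the
# receiving side can detect tampering in transit.
import hashlib
import hmac
import json

SHARED_SECRET = b"example-secret"  # in practice, from a secrets manager

def sign_payload(payload: dict) -> dict:
    """Serialize the payload and attach an HMAC-SHA256 integrity tag."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body, "headers": {"X-Signature": tag}}

def verify_payload(message: dict) -> bool:
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(SHARED_SECRET, message["body"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["headers"]["X-Signature"])

msg = sign_payload({"order": 1001, "amount": "49.99"})
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking signature bytes through timing differences.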

AUTOMATED MALWARE SIGNATURE CREATION

Automated malware signature creation is an additional method of identifying malware. The antivirus software monitors incoming unknown files for the presence of malware and analyzes each file based on both classifiers of file behavior and classifiers of file content. The file is then classified as having a particular malware classification. Subsequently, a malware signature is generated for the incoming unknown file based on the particular malware classification. This malware signature can be used by an antivirus program as a part of the antivirus program’s virus identification processes.
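The classify-then-sign pipeline described above can be sketched in miniature. This is a toy stand-in, not a real antivirus engine: the content "classifier" and family names are invented, and real systems combine far richer behavioral and content analysis before emitting a signature.

```python
# Toy sketch of automated signature creation: classify an unknown file
# by crude content features, then emit a hash-based signature record.
import hashlib

def classify(content: bytes) -> str:
    """Stand-in classifier keyed on invented byte patterns."""
    if b"CreateRemoteThread" in content:
        return "injector"
    if b"XOR-PACKED" in content:
        return "packed-dropper"
    return "benign"

def make_signature(content: bytes):
    """Generate a signature record for files classified as malicious."""
    family = classify(content)
    if family == "benign":
        return None
    return {"family": family, "sha256": hashlib.sha256(content).hexdigest()}

sig = make_signature(b"...CreateRemoteThread...")
```

The resulting record (family plus content hash) is the shape of artifact an antivirus product could feed back into its identification process.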

DATA ENRICHMENT

Data enrichment is a technique that allows one process to gather information from another process or source and then customize a response to a third source using the data from the second process or source. When you receive bulk e-mail from a vendor and it refers to you by first name, that is an example of data enrichment in use. In that case, a file of e-mail addresses is consulted (the second process) and its data is added to the response sent to you. Another common data enrichment process would, for example, correct likely misspellings or typographical errors in a database by using precision algorithms designed for that purpose. Data enrichment can also work by extrapolating data. This can create a privacy issue, one addressed by the EU General Data Protection Regulation (GDPR), which limits data enrichment for this very reason. Users typically have a reasonable idea about which information they have provided to a specific organization, but if the organization adds information from other databases, this picture is skewed: the organization holds information about users of which they are not aware.

Figure 14-3 shows another security-related example of the data enrichment process. This is an example of an automated process used by a security analytics platform called Blue Coat. The data enrichment part of the process occurs at Steps 4 and 5 when information from an external source is analyzed and used to enrich the alert message that is generated from the file detected.

Figure 14-3 Data Enrichment Process Example
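In code, enrichment is simply augmenting one record with context from a second source before it is passed along. The sketch below is illustrative only — the reputation table is a hard-coded stand-in for the external source an analytics platform would query.

```python
# Hedged sketch of alert enrichment: a raw alert (first source) gains
# context from a reputation lookup (second source) before notification.

REPUTATION = {
    "203.0.113.5": {"reputation": "malicious", "first_seen": "2020-01-15"},
}

def enrich_alert(alert: dict) -> dict:
    """Return a copy of the alert augmented with reputation context."""
    context = REPUTATION.get(alert["src_ip"], {"reputation": "unknown"})
    enriched = dict(alert)
    enriched.update(context)
    return enriched

alert = enrich_alert({"src_ip": "203.0.113.5", "event": "port scan"})
```

The enriched alert carries both the original event and the added context, so the analyst receiving it does not have to perform the lookup manually.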

THREAT FEED COMBINATION

A threat feed is a constantly updating stream of intelligence about threat indicators and artifacts that is delivered by a third-party security organization. Threat feeds are used to inform the organization as quickly as possible about new threats that have been identified. Threat feeds contain information including

Suspicious domains
Lists of known malware hashes
IP addresses associated with malicious activity

Chapter 11, “Analyzing Data as Part of Security Monitoring Activities,” described how a SIEM aggregates the logs from various security devices into a single log for analysis. By analyzing the single aggregated log, inferences can be made

about potential issues or attacks that would not be possible if the logs were analyzed separately. Using SIEM (or other aggregation tools) to aggregate threat feeds can also be beneficial, and tools and services such as the following offer this type of threat feed combination:

Combine: Gathers threat intelligence feeds from publicly available sources
Palo Alto Networks AutoFocus: Provides intelligence, correlation, added context, and automated prevention workflows
Anomali ThreatStream: Helps deduplicate data, removes false positives, and feeds intelligence to security tools
ThreatQuotient: Helps accelerate security operations with an integrated threat library and shared contextual intelligence
ThreatConnect: Combines external threat data from trusted sources with in-house data
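At its core, feed combination is aggregation plus deduplication. The following hedged sketch merges indicator lists from several feeds and tags each unique indicator with the feeds that reported it; the feed names and indicators are invented.

```python
# Hedged sketch of threat feed combination: merge, deduplicate, and
# record which feeds corroborate each indicator of compromise (IOC).
from collections import defaultdict

def combine_feeds(feeds: dict) -> dict:
    """Map each unique indicator to the set of feeds reporting it."""
    combined = defaultdict(set)
    for feed_name, indicators in feeds.items():
        for ioc in indicators:
            combined[ioc].add(feed_name)
    return dict(combined)

merged = combine_feeds({
    "feed_a": ["evil.example.com", "203.0.113.5"],
    "feed_b": ["203.0.113.5", "badhash00"],
})
```

An indicator corroborated by multiple feeds (such as the IP here) can be weighted more heavily, which is one way commercial tools reduce false positives.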

MACHINE LEARNING

Artificial intelligence (AI) and machine learning have fascinated humans for decades. AI is the capability of a computer system to make decisions using human-like intelligence. Machine learning is a way to make that possible by creating algorithms that enable the system to learn from what it sees and apply what it learns. Since we first conceived of the idea of talking to a computer and getting an answer, as characters did in comic books years ago, we have waited for the day when smart robots would not just do the dirty work but also learn just as humans do. Today, robots are taking on increasingly detailed work. One of the exciting areas where AI and machine learning are yielding dividends is in intelligent network security

—or the intelligent network. These networks seek out their own vulnerabilities before attackers do, learn from past errors, and work on a predictive model to prevent attacks. For example, automatic exploit generation (AEG) is the “first end-to-end system for fully automatic exploit generation,” according to Carnegie Mellon University’s own description of its AI named Mayhem. Developed for off-the-shelf software as well as the enterprise software increasingly used in smart devices and appliances, AEG can find a bug and determine whether it is exploitable.
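To give a flavor of "learning a baseline and flagging deviations" — the simplest version of what intelligent defenses do — here is a deliberately trivial sketch. This is a mean/standard-deviation threshold, far simpler than the models real products use; it is shown only for intuition about learned baselines.

```python
# Trivial anomaly-detection sketch: fit a baseline from historical
# request rates, then flag values far above the learned mean.
import statistics

def fit_baseline(samples):
    """Learn a simple baseline (mean, population stdev) from history."""
    return statistics.mean(samples), statistics.pstdev(samples)

def is_anomaly(value, baseline, k=3.0):
    """Flag values more than k standard deviations above the mean."""
    mean, stdev = baseline
    return value > mean + k * stdev

baseline = fit_baseline([100, 110, 95, 105, 90])  # requests/min history
flagged = is_anomaly(500, baseline)
```

Real machine-learning defenses replace this fixed formula with models that retrain as traffic patterns change — the "learning from past errors" described above.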

USE OF AUTOMATION PROTOCOLS AND STANDARDS

As in almost every other area of IT, standards and protocols for automation have emerged to help support the development and sharing of threat information. As with all standards, the goal is to arrive at common methods of sharing threat data.

Security Content Automation Protocol (SCAP)

Chapter 2, “Utilizing Threat Intelligence to Support Organizational Security,” introduced the Common Vulnerability Scoring System (CVSS), a common system for describing the characteristics of a threat in a standard format. The ranking of discovered vulnerabilities is based on predefined metrics that are also used by the Security Content Automation Protocol (SCAP). This is a standard that the security automation community uses to enumerate software flaws and configuration issues. It standardizes the nomenclature and formats used. A vendor of security automation products can obtain a validation against SCAP, demonstrating that its product will interoperate with other scanners and express scan results in a standardized way.

Understanding the operation of SCAP requires an understanding of its identification schemes, one of which, CVE, you have already learned about. Let’s review them.

Common Configuration Enumeration (CCE): These are configuration best practice statements maintained by the National Institute of Standards and Technology (NIST).
Common Platform Enumeration (CPE): These are methods for describing and classifying operating systems, applications, and hardware devices.
Common Weakness Enumeration (CWE): These are design flaws in the development of software that can lead to vulnerabilities.
Common Vulnerabilities and Exposures (CVE): These are vulnerabilities in published operating systems and applications software.
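These identifier schemes use predictable formats that tools can parse automatically; CVE IDs, for instance, follow the pattern CVE-<year>-<sequence>. The sketch below validates and splits CVE IDs; it is a simple illustration, not a full SCAP parser.

```python
# Validate and split CVE identifiers of the form CVE-YYYY-NNNN,
# where the sequence number is four or more digits.
import re

CVE_RE = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(cve_id: str):
    """Return (year, sequence) for a well-formed CVE ID, else None."""
    m = CVE_RE.match(cve_id)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))

parsed = parse_cve("CVE-2017-0144")
```

Machine-readable identifiers like this are what let SCAP-validated scanners exchange findings without ambiguity.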

A good example of the implementation of this is the Microsoft System Center Configuration Manager Extensions for SCAP. It allows for the conversion of SCAP data files to Desired Configuration Management (DCM) Configuration Packs and converts DCM reports into SCAP format.

CONTINUOUS INTEGRATION

Continuous integration is a software development practice whereby the work of multiple individuals is combined a number of times a day. The idea behind this is to identify bugs as early as possible in the development process. As it relates to security, the goal of continuous integration is to locate security issues as soon as possible. Continuous integration security testing improves code integrity, leads to more secure software systems, and reduces the time it takes to release new updates. Usually, merging all development versions of the code base occurs

multiple times throughout a day. Figure 14-4 illustrates this process.

Figure 14-4 Continuous Integration
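The merge-and-check loop of continuous integration can be sketched as a gate that runs every check against a proposed change and blocks the merge on any failure. The checks below are invented stand-ins (a real pipeline would invoke an actual test suite and scanner), shown only to illustrate the control flow.

```python
# Hedged sketch of a CI gate: run all checks against a change and
# allow the merge only if every check passes.

def run_checks(change: dict, checks) -> dict:
    """Run every check against the change; collect pass/fail results."""
    results = {check.__name__: check(change) for check in checks}
    results["merge_allowed"] = all(results.values())
    return results

def unit_tests(change):      # stand-in for the real test suite
    return change.get("tests_pass", False)

def security_lint(change):   # stand-in for a static security scanner
    return "eval(" not in change.get("diff", "")

report = run_checks({"tests_pass": True, "diff": "x = 1"},
                    [unit_tests, security_lint])
```

Putting a security check (the lint stand-in here) on the same footing as unit tests is what "continuous integration security testing" means in practice.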

CONTINUOUS DEPLOYMENT/DELIVERY

Taking continuous integration one step further is the concept of continuous deployment/delivery. Considered the next generation of DevOps, continuous delivery attempts to make sure that software developers can release new changes to customers quickly in a sustainable way. Continuous deployment goes one step further still: every change that passes all stages of the production pipeline is released to customers automatically. This helps to improve the feedback loop. Figure 14-5 illustrates the relationship among the three concepts.

Figure 14-5 Continuous Integration, Continuous Delivery, and Continuous Deployment

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 14-2 lists these key topics and the page numbers on which each is found.

Table 14-2 Key Topics in Chapter 14

Key Topic Element | Description | Page Number
Figure 14-1 | Workflow orchestration | 422
Bulleted list | Common scripting languages | 423
Bulleted list | Scripting tools | 423
Figure 14-3 | Data enrichment example | 425
Bulleted list | Threat feed aggregation tools | 426
Bulleted list | SCAP components | 427
Figure 14-4 | Continuous integration | 428
Figure 14-5 | Continuous integration, continuous delivery, and continuous deployment | 429

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

workflow orchestration
scripting
application programming interface (API) integration
bash
Node.js
Ruby
Python
Perl
automated malware signature creation
data enrichment
threat feed
machine learning
Security Content Automation Protocol (SCAP)
Common Configuration Enumeration (CCE)
Common Platform Enumeration (CPE)
Common Weakness Enumeration (CWE)
Common Vulnerabilities and Exposures (CVE)
continuous integration
continuous deployment/delivery

REVIEW QUESTIONS

1. _______________ is the sequencing of events based on certain parameters by using scripting and scripting tools.
2. List at least one use of workflow orchestration in the security world.
3. Match the following terms with their definitions.

Terms | Definitions
Ruby | Used to work in the Linux interface
Perl | Framework to write network applications using JavaScript
Python | Supports procedure-oriented programming and object-oriented programming
bash | Great for web development

4. __________________ is a scripting tool found in Windows servers.
5. List at least two of the components of SCAP.
6. Puppet is a ________________________ tool.
7. List at least two types of information available from threat feeds.
8. Match the following SCAP terms with their definitions.

Terms | Definitions
CCE | Methods for describing and classifying operating systems, applications, and hardware devices
CVE | Vulnerabilities in published operating systems and applications software
CWE | Design flaws in the development of software that can lead to vulnerabilities
CPE | Configuration best practice statements maintained by NIST

9. _________________________ is a software development practice whereby the work of multiple individuals is combined a number of times a day.
10. List at least two threat feed aggregation tools.
11. Match the following terms with their definitions.

Terms | Definitions
Threat feed | Technique that allows one process to gather information from another process or source and then customize a response using the data from the second process or source
Data enrichment | Set of clearly defined methods of communication between various software components
Automated malware signature creation | Constantly updating streams of indicators or artifacts derived from a source outside the organization
API | Additional method of identifying malware

12. ______________________ are groups of VMware virtual machines that are managed and orchestrated as a unit to provide a service to users.

Chapter 15

The Incident Response Process

This chapter covers the following topics related to Objective 4.1 (Explain the importance of the incident response process) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Communication plan: Describes the proper incident response processes for communication during an incident, which includes limiting communications to trusted parties, disclosing based on regulatory/legislative requirements, preventing inadvertent release of information, using a secure method of communication, and reporting requirements.

Response coordination with relevant entities: Describes the entities with which coordination is required during an incident, including legal, human resources, public relations, internal and external, law enforcement, senior leadership, and regulatory bodies.

Factors contributing to data criticality: Identifies factors that determine the criticality of an information resource, which include personally identifiable information (PII), personal health information (PHI), sensitive personal information (SPI), high value asset, financial information, intellectual property, and corporate information.

The incident response process is a formal approach to responding to security issues. It attempts to avoid the haphazard approach that can waste time and resources. This chapter and the next chapter examine this process.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these six self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks.” Table 15-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 15-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Questions
Communication Plan | 1, 2
Response Coordination with Relevant Entities | 3, 4
Factors Contributing to Data Criticality | 5, 6

1. Which of the following is false with respect to the incident response communication plan?
1. Organizations in certain industries may be required to comply with regulatory or legislative requirements with regard to communicating data breaches.
2. Content of these communications should include as much information as possible.
3. All responders should act to prevent the disclosure of any information to parties that are not specified in the communication plan.
4. All communications that take place between the stakeholders should use a secure communication process.

2. Which of the following HIPAA rules requires covered entities and their business associates to provide notification following a breach of unsecured PHI?
1. Breach Notification Rule
2. Privacy Rule
3. Security Rule
4. Enforcement Rule

3. Which of the following is responsible for reviewing NDAs to ensure support for incident response efforts?
1. Human resources
2. Legal
3. Management
4. Public relations

4. Which of the following is responsible for developing all written responses to the outside world concerning an incident and its response?
1. Human resources
2. Legal
3. Management
4. Public relations

5. Which of the following is any piece of data that can be used alone or with other information to identify a single person?
1. Intellectual property
2. Trade secret
3. PII
4. PPP

6. Which of the following is not intellectual property?
1. Patent
2. Trade secret
3. Trademark
4. Contract

FOUNDATION TOPICS

COMMUNICATION PLAN

Over time, best practices have evolved for handling the communication process between stakeholders. By following these best practices, you have a greater chance of maintaining control of the process and achieving the goals of incident response. Failure to follow these guidelines can lead to lawsuits, the premature alerting of the suspected party, potential disclosure of sensitive information, and, ultimately, an incident response process that is less effective than it could be.

Limiting Communication to Trusted Parties

During an incident, communications should take place only with those who have been designated beforehand to receive such communications. Moreover, the content of these communications should be limited to what is necessary for each stakeholder to perform his or her role.

Disclosing Based on Regulatory/Legislative Requirements

Organizations in certain industries may be required to comply with regulatory or legislative requirements with regard to communicating data breaches to affected parties and to those agencies and legislative bodies promulgating these regulations. The organization should include these communication types in the communication plan.

Preventing Inadvertent Release of Information

All responders should act to prevent the disclosure of any information to parties that are not specified in the communication plan. Moreover, all information released to the public and the press should be handled by public relations or persons trained for this type of communication. The timing of all communications should also be specified in the plan.

Using a Secure Method of Communication

All communications that take place between the stakeholders should use a secure communication process to ensure that information is not leaked or sniffed. Secure communication channels and strong cryptographic mechanisms should be used for these communications. The best approach is to create an out-of-band method of communication, which does not use the regular methods of corporate e-mail or VoIP. While personal cell phones can be a method for voice communication, file and data exchange should be through a method that provides end-to-end encryption, such as Off-the-Record (OTR) Messaging.

Reporting Requirements

Beyond the communication requirements within the organization, there may be legal obligations to report to agencies or governmental bodies during and following a security incident. Especially when sensitive customer, vendor, or employee records are exposed, organizations are required to report this in a reasonable time frame. For example, in the healthcare field, the HIPAA Breach Notification Rule, 45 CFR §§ 164.400-414, requires HIPAA covered entities and their business associates to provide notification following a breach of unsecured protected health information (PHI). As another example, all 50 states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands have enacted legislation requiring private or governmental entities to notify individuals of security breaches of information

involving personally identifiable information (PII). PHI and PII are described in more detail later in this chapter.

RESPONSE COORDINATION WITH RELEVANT ENTITIES

During an incident, proper communication among the various stakeholders in the process is critical to the success of the response. One key step that helps ensure proper communication is to select the right people for the incident response (IR) team. Because these individuals will be responsible for communicating with stakeholders, communication skills should be a key selection criterion for the IR team. Moreover, this team should take the following steps when selecting individuals to represent each stakeholder community:

Select representatives based on communication skills.
Hold regular meetings.
Use proper escalation procedures.

The following sections identify these stakeholders, discuss why the communication process is important, describe best practices for the communication process, and list the responsibilities of various key roles involved in the response.

Legal

The role of the legal department is to do the following:

Review nondisclosure agreements (NDAs) to ensure support for incident response efforts.
Develop wording of documents used to contact possibly affected sites and organizations.
Assess site liability for illegal computer activity.

Human Resources

The role of the HR department involves the following responsibilities in response:

Develop job descriptions for those persons who will be hired for positions involved in incident response.
Create policies and procedures that support the removal of employees found to be engaging in improper or illegal activity.

For example, HR should ensure that these activities are spelled out in policies and new hire documents as activities that are punishable by firing. This can help avoid employment disputes when the firing occurs.

Public Relations

The role of public relations is managing the dialog between the organization and the outside world. One person should be designated to do all talking to the media so as to maintain a consistent message. Responsibilities of the PR department include the following:

Handling all press conferences that may be held
Developing all written responses to the outside world concerning the incident and its response

Internal and External

Most of the stakeholders will be internal to the organization, but not all. External stakeholders (law enforcement, industry organizations, and media) should be managed separately from the internal stakeholders. Communications to external stakeholders may require a different and more secure medium.

Law Enforcement

Law enforcement may become involved in many incidents. Sometimes they are required to become involved, but in many instances, the organization is likely to invite law enforcement to get involved. When making a decision about whether to involve law enforcement, consider the following factors:

Law enforcement will view the incident differently than the company security team views it. While your team may be more motivated to stop attacks and their damage, law enforcement may be inclined to let an attack proceed in order to gather more evidence.
The expertise of law enforcement varies. While contacting local law enforcement may be indicated for physical theft of computers and similar incidents, involving law enforcement at the federal level, where greater skill sets are available, may be indicated for more abstract crimes and events.
The USA PATRIOT Act enhanced the investigatory tools available to law enforcement and expanded their ability to look at e-mail communications, telephone records, Internet communications, medical records, and financial records, which can be helpful.
Before involving law enforcement, try to rule out other potential causes of an event, such as accidents and hardware or software failure.
In cases where laws have obviously been broken (child pornography, for example), immediately get law enforcement involved. This includes any felonies, regardless of how small the loss to the company may have been.

Senior Leadership

The most important factor in the success of an incident response plan is the support, both verbal and financial (through the budget process), of senior leadership. Moreover, all other levels of management should fall in line with support of all efforts. Specifically, senior leadership’s role involves the following:

Communicate the importance of the incident response plan to all parts of the organization.
Create agreements that detail the authority of the incident response team to take over business systems if necessary.
Create decision systems for determining when key systems must be removed from the network.

Regulatory Bodies

Earlier in this chapter you learned that there are reporting requirements to certain governmental bodies when a data breach occurs. This makes these agencies external stakeholders. Be aware of the reporting requirements for the industry in which the organization operates, and coordinate the incident response with any regulatory bodies that govern that industry.

FACTORS CONTRIBUTING TO DATA CRITICALITY

Once the sensitivity and criticality of data are understood and documented, the organization should work to create a data classification system. Most organizations use either a commercial business classification system or a military and government classification system. To properly categorize data types, a security analyst should be familiar with some of the most sensitive types of data that the organization may possess.

When responding to an incident, the criticality of the data at risk should be a prime consideration when assigning resources to the incident. The more critical the data at risk, the more resources should be assigned to the issue, because time is of the essence in identifying and correcting any settings or policies that are implicated in the incident.

Personally Identifiable Information (PII)

When considering technology and its use today, privacy is a major concern of users. This privacy concern usually involves three areas: which personal information can be shared with whom, whether messages can be exchanged confidentially, and whether and how one can send messages anonymously. Privacy is an integral part of any security measures that an organization takes.

As part of the security measures that organizations must take to protect privacy, personally identifiable information (PII) must be understood, identified, and protected. PII is any piece of data that can be used alone or with other information to identify a single person. Any PII that an organization collects must be protected in the strongest manner possible. PII includes full name, identification numbers (including driver’s license number and Social Security number), date of birth, place of birth, biometric data, financial account numbers (both bank account and credit card numbers), and digital identities (including social media names and tags). Keep in mind that different countries and levels of government can have different qualifiers for identifying PII. Security professionals must ensure that they understand international, national, state, and local regulations and laws regarding PII. As the theft of this data becomes even more prevalent, you can

expect more laws to be enacted that will affect your job. Figure 15-1 shows examples of PII.

FIGURE 15-1 Personally Identifiable Information

The most obvious reaction to the issue of privacy is the set of measures in the far-reaching EU General Data Protection Regulation (GDPR). The GDPR aims primarily to give control to individuals over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU.

Personal Health Information (PHI)

One particular type of PII that an organization might possess is personal health information (PHI). PHI includes the medical records of individuals and must be protected in specific ways, as prescribed by the regulations contained in the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA, also known as the Kennedy-Kassebaum Act, affects all healthcare facilities, health insurance companies, and healthcare clearinghouses. It is enforced by the Office for Civil Rights (OCR) of the Department of Health and Human Services (HHS). It provides standards and procedures for storing, using,

and transmitting medical information and healthcare data. HIPAA overrides state laws unless the state laws are stricter. Additions to this law now extend its requirements to third parties that do work for covered organizations in which those parties handle this information.

Note
Objective 4.1 of the CySA+ exam refers to PHI as personal health information, whereas HIPAA refers to it as protected health information.

Sensitive Personal Information (SPI)

Some types of information should receive special treatment, and certain standards have been designed to protect this information. This type of data is called sensitive personal information (SPI). The best example of this is credit card information. Almost all companies possess and process credit card data. Holders of this data must protect it. Many of the highest-profile security breaches that have occurred have involved the theft of this data. The Payment Card Industry Data Security Standard (PCI DSS) affects any organizations that handle cardholder information for the major credit card companies. The latest version at the time of writing is 3.2.1. To prove compliance with the standard, an organization must be reviewed annually. Although PCI DSS is not a law, this standard has affected the adoption of several state laws.

High Value Assets

Some assets are not actually information but systems that provide access to information. When these systems or groups of systems provide access to data required to continue to do business, they are called critical systems. While it is somewhat simpler to arrive at a value for physical assets such as servers, routers, switches, and other devices, in cases where these systems provide access to critical data or are required to continue a business-critical process, their value is more than the

replacement cost of the hardware. The assigned value should be increased to reflect the system’s importance in providing access to data or its role in continuing a critical process.

Financial Information

Financial and accounting data in today’s networks is typically contained in accounting information systems (AISs). While these systems offer valuable integration with other systems, such as HR and customer relationship management systems, this integration comes at the cost of having to secure the connections between these systems. Many organizations are also abandoning legacy accounting software for cloud-based vendors to maximize profit. Cloud arrangements bring their own security issues, such as the danger of data comingling in the multitenancy environment that is common in public clouds. Moreover, considering that a virtual infrastructure underlies these cloud systems, all the dangers of the virtual environment come into play. Considering the criticality of this data and the need of the organization to keep the bulk of it confidential, incidents that target this type of information or the systems that provide access to this data should be given high priority. The following steps should be taken to protect this information:

Always ensure physical security of the building.
Ensure that a firewall is deployed at the perimeter and make use of all its features, such as URL and application filtering, intrusion prevention, antivirus scanning, and remote access via virtual private networks and TLS/SSL encryption.
Diligently audit file and folder permissions on all server resources.
Encrypt all accounting data.
Back up all accounting data and store it on servers that use redundant technologies such as RAID.

Intellectual Property

Intellectual property is a tangible or intangible asset to which the owner has exclusive rights. Intellectual property law is a group of laws that recognize exclusive rights for creations of the mind. The intellectual property covered by this type of law includes the following:

- Patents
- Trade secrets
- Trademarks
- Copyrights

The following sections explain these types of intellectual property and their internal protection.

Patent

A patent is granted to an individual or a company to protect an invention that is described in the patent’s application. When the patent is granted, only the patent owner can make, use, or sell the invention for a period of time, usually 20 years. Although a patent is considered one of the strongest intellectual property protections available, the invention becomes public domain after the patent expires, thereby allowing any entity to manufacture and sell the product. Patent litigation is common in today’s world. You commonly see technology companies, such as Apple, HP, and Google, filing lawsuits regarding infringement on patents (often against each other). For this reason, many companies involve a legal team in patent research before developing new technologies. Being the first to be issued a patent is crucial in today’s highly competitive market.

Any product that is produced and is currently undergoing the patent application process is usually identified with the Patent Pending seal, shown in Figure 15-2.

Figure 15-2 Patent Pending Seal

Trade Secret

A trade secret ensures that proprietary technical or business information remains confidential. A trade secret gives an organization a competitive edge. Trade secrets include recipes, formulas, ingredient listings, and so on that must be protected against disclosure. After a trade secret is obtained by or disclosed to a competitor or the general public, it is no longer considered a trade secret. Most organizations that have trade secrets attempt to protect them by using nondisclosure agreements (NDAs). An NDA must be signed by any entity that has access to information that is part of a trade secret. Anyone who signs an NDA will suffer legal consequences if the organization is able to prove that the signer violated it.

Trademark

A trademark ensures that a symbol, a sound, or an expression that identifies a product or an organization is protected from being used by another organization. A trademark allows a product or an organization to be recognized by the general public. Most trademarks are marked with one of the designations shown in Figure 15-3. If a trademark is not registered, an organization should use a capital TM. If the trademark is registered, an organization should use a capital R that is encircled.

Figure 15-3 Trademark Designations

Copyright

A copyright ensures that a work that is authored is protected from any form of reproduction or use without the consent of the copyright holder, usually the author or artist who created the original work. A copyright lasts longer than a patent. Although the U.S. Copyright Office has several guidelines to determine the amount of time a copyright lasts, the general rule for works created after January 1, 1978, is the life of the author plus 70 years. In 1996, the World Intellectual Property Organization (WIPO) standardized the treatment of digital copyrights. Copyright management information (CMI) is licensing and ownership information that is added to any digital work. In this standardization, WIPO stipulated that CMI included in copyrighted material cannot be altered. The © symbol denotes a work that is copyrighted.

Securing Intellectual Property

Intellectual property of an organization, including patents, copyrights, trademarks, and trade secrets, must be protected, or the business loses any competitive advantage created by such properties. To ensure that an organization retains the advantages given by its IP, it should do the following:

- Invest in well-written NDAs to be included in employment agreements, licenses, sales contracts, and technology transfer agreements.
- Ensure that tight security protocols are in place for all computer systems.
- Protect trade secrets residing in computer systems with encryption technologies or by limiting storage to computer systems that do not have external Internet connections.
- Deploy effective insider threat countermeasures, particularly focused on disgruntlement detection and mitigation techniques.

Corporate Information

Corporate confidential data is anything that needs to be kept confidential within the organization. This can include the following:

- Plan announcements
- Processes and procedures that may be unique to the organization
- Profit data and estimates
- Salaries
- Market share figures
- Customer lists
- Performance appraisals

EXAM PREPARATION TASKS As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 15-2 lists a reference of these key topics and the page numbers on which each is found.

Table 15-2 Key Topics in Chapter 15

Key Topic Element | Description | Page Number
Bulleted list | Considerations when selecting individuals to represent each stakeholder community | 436
Bulleted list | Role of the legal department in incident response | 436
Bulleted list | Role of the HR department in incident response | 437
Bulleted list | Role of the public relations department in incident response | 437
Bulleted list | Considerations when making a decision about whether to involve law enforcement | 438
Bulleted list | Role of senior leadership in incident response | 438
Paragraph | Description of personally identifiable information (PII) | 439
Bulleted list | Steps that should be taken to protect financial information | 442
Bulleted list | Examples of intellectual property | 442
Bulleted list | Securing intellectual property | 444
Bulleted list | Examples of corporate confidential data | 444

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

HIPAA Breach Notification Rule
USA PATRIOT Act
sensitivity
criticality
personally identifiable information (PII)
personal health information (PHI)
sensitive personal information (SPI)
Payment Card Industry Data Security Standard (PCI DSS)
intellectual property
patent
trade secret
trademark
copyright

REVIEW QUESTIONS

1. After a breach, all information released to the public and the press should be handled by _________________.

2. List at least one job of the human resources department with regard to incident response.

3. Match the following terms with their definitions.

Terms | Definitions
HIPAA Breach Notification Rule | Enhanced the investigatory tools available to law enforcement
USA PATRIOT Act | Affects any organizations that handle cardholder information for the major credit card companies
Payment Card Industry Data Security Standard (PCI DSS) | Requires covered entities and their business associates to provide notification following a loss of unsecured protected health information (PHI)
Kennedy-Kassebaum Act | Also known as HIPAA

4. It is the role of ____________________ to develop job descriptions for those persons who will be hired for positions involved in incident response.

5. List at least one of the roles of senior leadership in incident response.

6. Match the following terms with their definitions.

Terms | Definitions
Personally identifiable information | Measure of the importance of the data
Criticality | Any piece of data that can be used alone or with other information to identify a single person
Sensitivity | Medical records of individuals
Personal health information | Measure of how freely data can be handled

7. The most important factor in the success of an incident response plan is the support, both verbal and financial (through the budget process), of ________________.

8. List at least one consideration when assigning a level of criticality.

9. Match the following terms with their definitions.

Terms | Definitions
Patent | Gives an organization a competitive edge; includes recipes, formulas, ingredient listings, and so on
Trade secret | Identifies a product protected from being used by another organization
Trademark | Ensures that a work that is authored is protected from any form of reproduction or use without the consent of the holder
Copyright | Granted to an individual or a company to protect an invention

10. Salaries of employees are considered _________________________________________.

Chapter 16

Applying the Appropriate Incident Response Procedure

This chapter covers the following topics related to Objective 4.2 (Given a scenario, apply the appropriate incident response procedure) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

- Preparation: Describes steps required to be ready for an incident, including training, testing, and documentation of procedures.
- Detection and analysis: Covers detection methods and analysis, exploring topics such as characteristics contributing to severity level classification, downtime, recovery time, data integrity, economic impact, system process criticality, reverse engineering, and data correlation.
- Containment: Identifies methods used to separate and confine the damage, including segmentation and isolation.
- Eradication and recovery: Defines activities that return the network to normal, including vulnerability mitigation, sanitization, reconstruction/reimaging, secure disposal, patching, restoration of permissions, reconstitution of resources, restoration of capabilities and services, and verification of logging/communication to security monitoring.
- Post-incident activities: Identifies operations that should follow incident recovery, including evidence retention, lessons learned report, change control process, incident response plan update, incident summary report, IoC generation, and monitoring.

When a security incident occurs, there are usually several possible responses. Choosing the correct response and appropriately applying that response is a critical part of the process. This second chapter devoted to the incident response process presents the many considerations that go into making the correct decisions regarding response.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these ten self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 16-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 16-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Questions
Preparation | 1, 2
Detection and Analysis | 3, 4
Containment | 5, 6
Eradication and Recovery | 7, 8
Post-Incident Activities | 9, 10

1. Which of the following is the first step in the incident response process?
   1. Containment
   2. Eradication and recovery
   3. Preparation
   4. Detection

2. Which of the following groups should receive technical training on configuring and maintaining security controls?
   1. High-level management
   2. Middle management
   3. Technical staff
   4. Employees

3. Which of the following characteristics of an incident is a function of how widespread the incident is?
   1. Scope
   2. Downtime
   3. Data integrity
   4. Indicator of compromise

4. Which of the following is the average time required to repair a single resource or function?
   1. RPO
   2. MTD
   3. MTTR
   4. RTO

5. Which of the following processes involves limiting the scope of an incident by leveraging existing segments of the network as barriers to prevent the spread to other segments?
   1. Isolation
   2. Segmentation
   3. Containerization
   4. Partitioning

6. How do you isolate a device at Layer 2 without removing it from the network?
   1. Port security
   2. Isolation
   3. Secured memory
   4. Processor encryption

7. Which of the following includes removing data from the media so that it cannot be reconstructed using normal file recovery techniques and tools?
   1. Destruction
   2. Clearing
   3. Purging
   4. Buffering

8. Which of the following refers to removing all traces of a threat by overwriting the drive multiple times to ensure that the threat is removed?
   1. Destruction
   2. Clearing
   3. Purging
   4. Sanitization

9. Which of the following refers to behaviors and activities that precede or accompany a security incident?
   1. IoCs
   2. NOCs
   3. IONs
   4. SOCs

10. Which of the following is the first document that should be drafted after recovery from an incident?
   1. Incident summary report
   2. Incident response plan
   3. Lessons learned report
   4. IoC document

FOUNDATION TOPICS

PREPARATION

When security incidents occur, the quality of the response is directly related to the amount and the quality of the preparation. Responders should be well prepared and equipped with all the tools they need to provide a robust response. Several key activities must be carried out to ensure this is the case.

Training

The terms security awareness training, security training, and security education are often used interchangeably, but they are actually three different things. Basically, security awareness training is the what, security training is the how, and security education is the why. Security awareness training reinforces the fact that valuable resources must be protected by implementing security measures. Security training teaches personnel the skills they need to perform their jobs in a secure manner. Organizations often combine security awareness training and security training and label it as “security awareness training” for simplicity; the combined training improves user awareness of security and ensures that users can be held accountable for their actions. Security education is more independent, targeted at security professionals who require security expertise to act as in-house experts for managing the security programs.

Security awareness training should be developed based on the audience. In addition, trainers must understand the corporate culture and how it will affect security. The audiences you need to consider when designing training include high-level management, middle management, technical personnel, and other staff.

For high-level management, the security awareness training must provide a clear understanding of potential risks and threats, effects of security issues on organizational reputation and financial standing, and any applicable laws and regulations that pertain to the organization’s security program. Middle management training should discuss policies, standards, baselines, guidelines, and procedures, particularly how these components map to individual departments. Also, middle management must understand their responsibilities regarding security. Technical staff should receive technical training on configuring and maintaining security controls, including how to recognize an attack when it occurs. In addition, technical staff should be encouraged to pursue industry certifications and higher education degrees. Other staff need to understand their responsibilities regarding security so that they perform their day-to-day tasks in a secure manner. With these staff, providing real-world examples to emphasize proper security procedures is effective.

Personnel should sign a document that indicates they have completed the training and understand all the topics. Although the initial training should occur when personnel are hired, security awareness training should be considered a continuous process, with future training sessions occurring at least annually.

Testing

After incident response processes have been developed as described in Chapter 15, “The Incident Response Process,” responders should test the process to ensure it is effective. In Chapter 20, “Applying Security Concepts in Support of Organizational Risk Mitigation,” you’ll learn about exercises that help test your response to a live attack (red team/blue team/white team exercises and tabletop exercises). The results of tests, along with feedback from live events, can help inform the lessons learned report, described later in this chapter.

Documentation of Procedures

Incident response procedures should be clearly documented. While many incident response plan templates can be found online (and even the outline of this chapter is organized by one set of procedures), a generally accepted incident response plan is shown in Figure 16-1 and described in the list that follows.

Figure 16-1 Incident Response Process

Step 1. Detect: The first step is to detect the incident. The worst sort of incident is one that goes unnoticed.

Step 2. Respond: The response to the incident should be appropriate for the type of incident. A denial of service (DoS) attack against a web server would require a quicker and different response than a missing mouse in the server room. Establish standard responses and response times ahead of time.

Step 3. Report: All incidents should be reported within a time frame that reflects the seriousness of the incident. In many cases, establishing a list of incident types and the person to contact when each type of incident occurs is helpful. Attention to detail at this early stage, while time-sensitive information is still available, is critical.

Step 4. Recover: Recovery involves a reaction designed to make the network or system affected functional again. Exactly what that means depends on the circumstances and the recovery measures available. For example, if fault-tolerance measures are in place, the recovery might consist of simply allowing one server in a cluster to fail over to another. In other cases, it could mean restoring the server from a recent backup. The main goal of this step is to make all resources available again.

Step 5. Remediate: This step involves eliminating any residual danger or damage to the network that still might exist. For example, in the case of a virus outbreak, it could mean scanning all systems to root out any additional affected machines. These measures are designed to make a more detailed mitigation when time allows.

Step 6. Review: Finally, you need to review each incident to discover what can be learned from it. Changes to procedures might be called for. Share lessons learned with all personnel who might encounter the same type of incident again. Complete documentation and analysis are the goals of this step.

The actual investigation of an incident occurs during the respond, report, and recover steps. Following appropriate forensic and digital investigation processes during an investigation can ensure that evidence is preserved.

Your responses will benefit from using standard forms that prompt for the collection of all relevant information, which leads to a better and more consistent response process over time. Some examples of commonly used forms are as follows:

- Incident form: This form is used to describe the incident in detail. It should include sections to record complementary metal oxide semiconductor (CMOS), hard drive information, image archive details, analysis platform information, and other details. The best approach is to obtain a template and customize it to your needs.
- Call list/escalation list: First responders to an incident should have contact information for all individuals who might need to be alerted during the investigation. This list should also indicate under what circumstances these individuals should be contacted, to avoid unnecessary alerts and to keep the process moving in an organized manner.
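The two forms above can also be kept as structured records so that escalation rules are applied consistently. A minimal sketch follows; the field names are illustrative, not from any published template:

```python
from dataclasses import dataclass, field

@dataclass
class Contact:
    name: str
    role: str
    phone: str
    escalate_when: str  # the circumstance under which this person is alerted

@dataclass
class IncidentForm:
    incident_id: str
    description: str
    hard_drive_info: str = ""
    image_archive: str = ""
    analysis_platform: str = ""
    call_list: list = field(default_factory=list)

    def contacts_for(self, condition):
        """Return only the contacts whose escalation rule matches the
        current circumstance, avoiding unnecessary alerts."""
        return [c for c in self.call_list if c.escalate_when == condition]
```

Keeping the escalation condition on each contact encodes the "under what circumstances" guidance directly in the call list.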

DETECTION AND ANALYSIS

Once evidence from an incident has been collected, it must be analyzed and classified as to its severity so that more critical incidents can be dealt with first and less critical incidents later.

Characteristics Contributing to Severity Level Classification

To properly prioritize incidents, each must be classified with respect to the scope of the incident and the types of data that have been put at risk. Scope is more than just how widespread the incident is, and the types of data classifications may be more varied than you expect. The following sections discuss the factors that contribute to incident severity and prioritization.

The scope determines the impact and is a function of how widespread the incident is and the potential economic and intangible impacts it could have on the business. Five common factors are used to measure scope. They are covered in the following sections.

Downtime and Recovery Time

One of the issues that must be considered is the potential amount of downtime the incident could inflict and the time it will take to recover from the incident. If a proper business continuity plan (BCP) has been created, you will have collected information about each asset that will help classify incidents that affect each asset. As part of determining how critical an asset is, you need to understand the following terms:

- Maximum tolerable downtime (MTD): This is the maximum amount of time that an organization can tolerate a single resource or function being down. This is also referred to as maximum period time of disruption (MPTD).
- Mean time to repair (MTTR): This is the average time required to repair a single resource or function when a disaster or disruption occurs.
- Mean time between failures (MTBF): This is the estimated amount of time a device will operate before a failure occurs. This amount is calculated by the device vendor. System reliability is increased by a higher MTBF and a lower MTTR.
- Recovery time objective (RTO): This is the shortest time period after a disaster or disruptive event within which a resource or function must be restored in order to avoid unacceptable consequences. RTO assumes that an acceptable period of downtime exists. RTO should be smaller than MTD.
- Work recovery time (WRT): This is the difference between RTO and MTD; that is, the time remaining after the RTO before the MTD is reached.
- Recovery point objective (RPO): This is the point in time to which the disrupted resource or function must be returned.
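Since WRT is simply the gap between the MTD and the RTO, and MTTR is an average over observed repairs, these relationships reduce to a few lines of arithmetic. The function names below are illustrative:

```python
def work_recovery_time(mtd_hours, rto_hours):
    """WRT = MTD - RTO: the time remaining after the RTO is met
    but before the maximum tolerable downtime is exceeded."""
    if rto_hours > mtd_hours:
        raise ValueError("RTO should be smaller than MTD")
    return mtd_hours - rto_hours

def mean_time_to_repair(repair_hours):
    """MTTR: the average time required to repair a single resource
    or function, computed over past repair durations."""
    return sum(repair_hours) / len(repair_hours)
```

For example, a resource with a 24-hour MTD and an 8-hour RTO leaves a 16-hour WRT for verifying data and resuming work.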

Each organization must develop its own documented criticality levels. The following is a good example of organizational resource and function criticality levels:

- Critical: Critical resources are those resources that are most vital to the organization’s operation and should be restored within minutes or hours of the disaster or disruptive event.
- Urgent: Urgent resources should be restored within 24 hours but are not considered as important as critical resources.
- Important: Important resources should be restored within 72 hours but are not considered as important as critical or urgent resources.
- Normal: Normal resources should be restored within 7 days but are not considered as important as critical, urgent, or important resources.
- Nonessential: Nonessential resources should be restored within 30 days.
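The example levels above map naturally to restore-by deadlines that a response team can check against elapsed time. In the sketch below, the 4-hour ceiling for the critical level is an assumption (the text says only "minutes or hours"); the other windows follow the list:

```python
# Restore windows, in hours, for the example criticality levels.
RESTORE_WINDOW_HOURS = {
    "critical": 4,         # assumed ceiling; the text says minutes or hours
    "urgent": 24,
    "important": 72,
    "normal": 7 * 24,
    "nonessential": 30 * 24,
}

def overdue(level, hours_since_disruption):
    """True if a resource at this criticality level has exceeded
    its documented restore window."""
    return hours_since_disruption > RESTORE_WINDOW_HOURS[level]
```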

Data Integrity

Data integrity refers to the correctness, completeness, and soundness of the data. One of the goals of integrity services is to protect the integrity of data or at least to provide a means of discovering when data has been corrupted or has undergone an unauthorized change. One of the challenges with data integrity attacks is that the effects might not be detected for years, until there is a reason to question the data. Identifying a compromise of data integrity can be made easier by using file-hashing algorithms and tools to check seldom-used but sensitive files for unauthorized changes after an incident. These tools can be run to quickly identify files that have been altered and can help you get a better assessment of the scope of the data corruption.
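The hash-based change detection described above can be sketched with Python's standard hashlib module: record a baseline of digests for sensitive files, then compare current digests after an incident. The function names are illustrative:

```python
import hashlib

def hash_file(path, algorithm="sha256"):
    """Return the hex digest of a file, read in chunks so large
    files do not need to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(baseline, paths):
    """Compare current hashes against a baseline {path: digest} map
    and return the files whose contents no longer match. A path
    missing from the baseline is reported as changed."""
    return [p for p in paths if hash_file(p) != baseline.get(p)]
```

The resulting list of altered files gives a first estimate of how far the data corruption reaches.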

Economic

The economic impact of an incident is driven mainly by the value of the assets involved. Determining those values can be difficult, especially for intangible assets such as plans, designs, and recipes. Tangible assets include computers, facilities, supplies, and personnel. Intangible assets include intellectual property, data, and organizational reputation. The value of an asset should be considered with respect to the asset owner’s view. The following considerations can be used to determine an asset’s value:

- Value to owner
- Work required to develop or obtain the asset
- Costs to maintain the asset
- Damage that would result if the asset were lost
- Cost that competitors would pay for the asset
- Penalties that would result if the asset were lost

After determining the value of assets, you should determine the vulnerabilities and threats to each asset.

System Process Criticality

Some assets are not actually information but systems that provide access to information. When these systems or groups of systems provide access to data required to continue to do business, they are called critical systems. While it is somewhat simpler to arrive at a value for physical assets such as servers, routers, switches, and other devices, in cases where these systems provide access to critical data or are required to continue a business-critical process, their value is more than the replacement cost of the hardware. The assigned value should be increased to reflect each system’s importance in providing access to data or its role in continuing a critical process.

Reverse Engineering

Reverse engineering can refer to retracing the steps in an incident, as seen from the logs in the affected devices or in logs of infrastructure devices that may have been involved in transferring information to and from the devices. This can help you understand the sequence of events. When unknown malware is involved, the term reverse engineering may refer to an analysis of the malware’s actions to determine a removal technique. This is the approach to zero-day attacks, in which no known fix is yet available from anti-malware vendors. With respect to reverse engineering malware, this process refers to extracting the code from the binary executable to identify how it was programmed and what it does. There are three ways the binary malware file can be made readable:

- Disassembly: This refers to reading the machine code into memory and then outputting each instruction as a text string. Analyzing this output requires a very high level of skill and special software tools.
- Decompiling: This process attempts to reconstruct the high-level language source code.
- Debugging: This process steps through the code interactively. There are two kinds of debuggers:
  - Kernel debugger: This type of debugger operates at ring 0 (essentially the driver level) and has direct access to the kernel.
  - Usermode debugger: This type of debugger has access to only the usermode space of the operating system. Most of the time, this is enough, but not always. In the case of rootkits or even super-advanced protection schemes, it is preferable to step into a kernel-mode debugger instead because usermode in such situations is untrustworthy.
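Real malware disassembly targets machine code and requires dedicated tooling, but the core transformation, turning compiled instructions back into text strings, can be demonstrated in a self-contained way with Python's built-in dis module, which disassembles Python bytecode rather than machine code:

```python
import dis

def sample(x):
    return x * 2 + 1

# dis.get_instructions reads the compiled bytecode of `sample` and
# yields one decoded instruction at a time -- the same "binary in,
# text strings out" step a machine-code disassembler performs.
instructions = [ins.opname for ins in dis.get_instructions(sample)]
print(instructions)
```

The printed opcode names (loads, arithmetic, return) are the bytecode-level analogue of the instruction listing an analyst reads when disassembling a malware binary.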

Data Correlation

Data correlation is the process of locating variables in the information that seem to be related. For example, say that every time there is a spike in SYN packets, you seem to have a DoS attack. Applying this process to the data in device security logs helps you identify correlations that reveal issues and attacks. A good example of such a system is a security information and event management (SIEM) system. These systems collect the logs, analyze the logs, and, through the use of aggregation and correlation, help you identify attacks and trends. SIEM systems are covered in more detail in Chapter 11, “Analyzing Data as Part of Security Monitoring Activities.”
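The SYN-spike example can be expressed as the kind of simple statistical rule a SIEM automates over aggregated log data. The sketch below flags intervals whose SYN count sits far above the mean; the three-sigma threshold is an arbitrary choice for the illustration:

```python
from statistics import mean, pstdev

def syn_spikes(counts, threshold=3.0):
    """Given SYN-packet counts per time interval, return the indexes
    of intervals more than `threshold` standard deviations above the
    mean -- candidate DoS windows worth correlating with other logs."""
    mu = mean(counts)
    sigma = pstdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]
```

A flagged interval is not proof of an attack on its own; correlation means checking it against other sources, such as firewall denies or server error logs, for the same window.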

CONTAINMENT

Just as the first step when an injury occurs is to stop the bleeding, after a security incident occurs, the first priority is to contain the threat to minimize the damage. There are a number of containment techniques, but not all of them are available or advisable in every situation. One of the benefits of proper containment is that it gives you time to develop a good remediation strategy.

Segmentation

The segmentation process involves limiting the scope of an incident by leveraging existing segments of the network as barriers to prevent the spread to other segments. These segments could be defined at either Layer 3 or Layer 2 of the OSI reference model. When you segment at Layer 3, you are creating barriers based on IP subnets, either physical LANs or VLANs. Creating barriers at this level involves deploying access control lists (ACLs) on the routers to prevent traffic from moving from one subnet to another. While it is possible to simply shut down a router interface, in some scenarios that is not advisable because the interface is used to reach more subnets than the one where the threat exists.

Segmenting at Layer 2 can be done in several ways:

- You can create VLANs, which create segmentation at both Layer 2 and Layer 3.
- You can create private VLANs (PVLANs), which segment an existing VLAN at Layer 2.
- You can use port security to isolate a device at Layer 2 without removing it from the network.
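The Layer 3 variant described earlier, an ACL that confines an infected subnet, amounts to a predicate over source and destination addresses. A rough illustration using Python's ipaddress module (the quarantined subnet is hypothetical):

```python
import ipaddress

# Hypothetical rule: once the infected host's subnet is known, deny
# any flow that would cross the boundary of that segment.
QUARANTINED = ipaddress.ip_network("10.10.20.0/24")

def allow_flow(src, dst):
    """Permit a flow unless it would carry traffic into or out of the
    quarantined segment -- the effect of ACLs on the router interfaces
    bounding that subnet. Intra-segment traffic still flows."""
    src_in = ipaddress.ip_address(src) in QUARANTINED
    dst_in = ipaddress.ip_address(dst) in QUARANTINED
    return src_in == dst_in
```

In practice this logic lives in router ACL entries, not a script; the sketch only shows why segmentation contains spread without taking the rest of the network down.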

In some cases, it might be advisable to use segmentation at the perimeter of the network (for example, stopping the outbound communication from an infected machine or blocking inbound traffic).

Isolation

Isolation typically is implemented either by blocking all traffic to and from a device or devices or by shutting down device interfaces. This approach works well for a single compromised system but becomes cumbersome when multiple devices are involved; in that case, segmentation may be a more advisable approach. If a new device can be set up to perform the role of the compromised device, the team may leave the compromised device running to analyze the end result of the threat on the isolated host.

Another form of isolation, process isolation, is a technique whereby all processes (work being performed by the processor) are executed using memory dedicated to each process. This prevents processes from accessing the memory of other processes, which can help to mitigate attacks that do so.

ERADICATION AND RECOVERY

After the threat has been contained, the next step is to remove, or eradicate, the threat. In some cases the compromised device can be cleaned without a format of the hard drive, while in many other cases a format is required to completely remove the threat. This section looks at some removal approaches.

Vulnerability Mitigation

Once the specific vulnerability has been identified, it must be mitigated. The mitigation will in large part be driven by the type of issue with which you are presented. In some cases the proper response will be to format the hard drive of the affected system and reimage it. In other cases, when a weakness is revealed that results from the way the organization operates, the mitigation may be a change in policies. Let’s look at some common mitigations.

Sanitization

Sanitization refers to removing all traces of a threat by overwriting the drive multiple times to ensure that the threat is removed. This works well for mechanical hard disk drives, but solid-state drives present a challenge in that they cannot be reliably overwritten. Most solid-state drive vendors provide sanitization commands that can be used to erase the data on the drive. Security professionals should research these commands to ensure that they are effective.

Note: NIST Special Publication 800-88 Rev. 1 is an example of a government guideline for proper media sanitization, as are the IRS guidelines for proper media sanitization:

https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf
https://www.irs.gov/privacy-disclosure/media-sanitization-guidelines
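The multi-pass overwrite idea can be sketched for a single file as follows. This is an illustration only: real sanitization operates on whole drives with vetted tools, and, as noted above, overwriting is not adequate for solid-state drives, whose wear leveling can leave old blocks untouched.

```python
import os

def overwrite_file(path, passes=3):
    """Overwrite a file's contents with random bytes several times,
    then delete it. Illustrates the multi-pass idea for a file on a
    mechanical disk; NOT sufficient for SSDs -- use the vendor's
    sanitize command for those."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push each pass to the disk
    os.remove(path)
```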

Reconstruction/Reimaging

Once a device has been sanitized, the system must be rebuilt. This can be done by reinstalling the operating system, applying all system updates, reinstalling the anti-malware software, and implementing any organizational security settings. Then any needed applications must be installed and configured. If the device is a server that is running some service on behalf of the network (for example, DNS or DHCP), that service must be reconfigured as well. All this is not only a lot of work but also time-consuming. A better approach is to maintain standard images of the various device types in the network so that you can use these images to stand up a device quickly. To make this approach even more seamless, having a backup image of the same device eliminates the need to reconfigure everything you might have to reconfigure when using standard images.

Secure Disposal

In some instances, you may decide to dispose of a compromised device (or its storage drive) rather than attempt to sanitize and reuse it. In that case, you want to dispose of it in a secure manner. When planning secure disposal, an organization must consider certain issues, including the following:

Does removal or replacement introduce any security holes in the network? How can the system be terminated in an orderly fashion to avoid disrupting business continuity? How should any residual data left on any systems be removed? Are there any legal or regulatory issues that would guide the destruction of data?

Whenever data is erased or removed from storage media, residual data can be left behind. This residue may allow data to be reconstructed after the organization disposes of the media, giving unauthorized individuals or groups access to it. When considering data remanence, security professionals must understand three countermeasures:

Clearing: Removing data from the media so that it cannot be reconstructed using normal file recovery techniques and tools. With this method, the data is recoverable only using special forensic techniques.

Purging: Also referred to as sanitization, purging makes the data unreadable even with advanced forensic techniques. With this technique, data should be unrecoverable.

Destruction: Destroying the media on which the data resides. Degaussing, one destruction technique, exposes the media to a powerful, alternating magnetic field, removing any previously written data and leaving the media in a magnetically randomized (blank) state. Physical destruction involves physically breaking the media apart or chemically altering it.
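The choice among these three countermeasures can be expressed as a small decision helper. The function below is an illustrative sketch loosely following the NIST SP 800-88 decision flow; its inputs and return values are simplifying assumptions for illustration, not part of the standard:

```python
def choose_sanitization(media_leaving_org: bool, purge_supported: bool) -> str:
    """Pick a data-remanence countermeasure.

    Loosely follows the NIST SP 800-88 decision flow; a real decision
    also weighs data sensitivity and organizational policy, and the
    inputs here are simplifying assumptions.
    """
    if not media_leaving_org:
        return "clear"    # defeats normal recovery tools; forensic recovery possible
    if purge_supported:
        return "purge"    # e.g., vendor secure-erase; defeats forensic recovery
    return "destroy"      # shred, incinerate, degauss, or chemically alter
```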

Patching

In many cases, a threat or an attack is made possible by missing security patches. You should update, or at least check for updates for, a variety of components: all patches for the operating system, updates for any applications that are running, and updates to all installed anti-malware software. While you are at it, check for any firmware updates the device may require. This is especially true of hardware security devices such as firewalls, IDSs, and IPSs. If any routers or switches were compromised, check for software and firmware updates for them as well.

Restoration of Permissions

Many times an attacker compromises a device by altering permissions, either in the local database or in entries related to the device on the directory service server. All permissions should

undergo a review to ensure that all are in the appropriate state. The appropriate state may not be the state they were in before the event; sometimes you may discover that although permissions were not set in a dangerous way prior to an event, they are still not correct. Make sure to check the configuration database to ensure that settings match prescribed settings. You may also make changes to the permissions based on lessons learned during an event. In that case, ensure that the new settings undergo a change control review and that any approved changes are reflected in the configuration database.

Reconstitution of Resources

In many incidents, resources may be deleted or stolen. In other cases, the process of sanitizing the device causes the loss of information resources. These resources should be recovered from backup. One key process that can minimize data loss is to shorten the time between backups for critical resources. This results in a recovery point objective (RPO) that includes more recent data. RPO is discussed in more detail earlier in this chapter.

Restoration of Capabilities and Services

During the incident response, it might be necessary to disrupt some normal business processes to help contain the issue or to assist in remediation. It is also possible that the attack has rendered some services and capabilities unavailable. Once an effective response has been mounted, these systems and services must be restored to full functionality. Just as shortening the backup interval can reduce the effects of data loss, fault-tolerant measures can be effective in preventing the loss of critical services.

Verification of Logging/Communication to Security Monitoring

To ensure that you will have good security data going forward, you need to verify that all security-related logs are collecting data. Pay special attention to the manner in which a log reacts when full: with some settings, the log begins to overwrite the oldest entries with new ones; with others, the service stops collecting events entirely when the log is full. Security log entries need to be preserved, which may require manually archiving the logs and then clearing them. Some logging systems can do this automatically, whereas others require a script; if all else fails, check the log often to assess its state. Many organizations send all security logs to a central location. This could be a syslog server, or it could be a SIEM system. These systems not only collect all the logs, they use the information to make inferences about possible attacks; having access to all logs allows the system to correlate data from all responding devices. Regardless of whether you are logging to a syslog server or a SIEM system, you should verify that all communications between the devices and the central server are occurring without a hitch. This is especially true if you had to rebuild a system manually rather than restore it from an image, as manual rebuilding leaves more opportunity for human error.
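The archive-then-clear routine described above can be scripted. This Python sketch assumes a plain text log file (Windows event logs would need wevtutil or the Event Log API instead), and the path arguments are hypothetical:

```python
import shutil
import time
from pathlib import Path

def archive_and_clear(log_path: str, archive_dir: str) -> Path:
    """Copy a security log to a timestamped archive, then truncate the
    live log so it never fills up and stops collecting events.

    Assumes a plain text log file; the paths are illustrative.
    """
    src = Path(log_path)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(archive_dir) / f"{src.stem}-{stamp}{src.suffix}"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)  # preserve the entries before clearing
    src.write_text("")       # truncate the live log
    return dest
```

A script like this would typically run on a schedule sized so the log can never fill between runs.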

POST-INCIDENT ACTIVITIES

Once the incident has been contained and removed and the recovery process is complete, there is still work to be done. Much of it, as you might expect, is paperwork, but this paperwork is critical to improving the response to the next incident. Let's look at some of the post-incident activities that should take place.

Evidence Retention

If the incident involved a security breach and the incident response process gathered evidence to prove an illegal act or a violation of policy, the evidence must be stored securely until it is presented in court or used to confront the violating employee. Computer investigations require different procedures than regular investigations because the time frame for the computer investigator is compressed, and an expert might be required to assist in the investigation. Also, computer information is intangible and often requires extra care to ensure that the data is retained in its original format. Finally, the evidence in a computer crime can be difficult to gather. After a decision has been made to investigate a computer crime, you should follow standardized procedures, including the following:

Identify what type of system is to be seized.

Identify the search and seizure team members.

Determine the risk of the suspect destroying evidence.

After law enforcement has been informed of a computer crime, the constraints on the organization's investigator increase, and it might be necessary to turn the investigation over to law enforcement to ensure that evidence is preserved properly. When investigating a computer crime, evidentiary rules must be addressed: computer evidence should prove a fact that is material to the case and must be reliable, and the chain of custody must be maintained. Computer evidence is less likely to be admitted in court if the process for producing it is not documented.

Lessons Learned Report

The first document that should be drafted is a lessons learned report, which briefly lists and discusses what was

learned about how and why the incident occurred and how to prevent it from occurring again. This report should be compiled during a formal meeting held shortly after recovery from the incident. It provides valuable information that can be used to drive improvement in the organization's security posture and might answer questions such as the following:

What went right, and what went wrong?

How can we improve?

What needs to be changed?

What was the cost of the incident?

Change Control Process

The lessons learned report may generate a number of changes that should be made to the network infrastructure. All these changes, regardless of how necessary they are, should go through the standard change control process: submitted to the change control board, examined for unforeseen consequences, and studied for proper integration into the current environment. Only after gaining approval should they be implemented. You may find it helpful to create a "fast track" for assessment in your change management system for changes such as these when time is of the essence. For more details regarding change control processes, refer to Chapter 8, "Security Solutions for Infrastructure Management."

Incident Response Plan Update

The lessons learned exercise may also uncover flaws in your IR plan. If so, you should update the plan to reflect the needed procedure changes. When this is complete, ensure that all software and hard copy versions

of the plan have been updated so everyone is working from the same document when the next event occurs.

Incident Summary Report

All stakeholders should receive a document that summarizes the incident. It should avoid highly technical language and be written so that nontechnical readers can understand the major points of the incident. The following are some of the highlights that should be included in an incident summary report:

When the problem was first detected and by whom

The scope of the incident

How it was contained and eradicated

Work performed during recovery

Areas where the response was effective

Areas that need improvement
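A summary report covering these highlights can be generated from structured incident data. The field names in this Python sketch are illustrative assumptions, not a standard format:

```python
def summary_report(incident: dict) -> str:
    """Render a plain-language incident summary from structured data.

    The field names are illustrative, not a standard.
    """
    sections = [
        ("Detected", f"{incident['detected']} by {incident['detected_by']}"),
        ("Scope", incident["scope"]),
        ("Containment/eradication", incident["containment"]),
        ("Recovery work", incident["recovery"]),
        ("What worked", incident["effective"]),
        ("Needs improvement", incident["improve"]),
    ]
    return "\n".join(f"{title}: {text}" for title, text in sections)
```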

Indicator of Compromise (IoC) Generation

Indicators of compromise (IoCs) are behaviors and activities that precede or accompany a security incident. In Chapter 17, "Analyzing Potential Indicators of Compromise," you will learn what some of these indicators are and what they may tell you. You should always record or generate the IoCs that you find related to an incident. This information can be used to detect the same sort of incident later, before it advances to the point of a breach.

Monitoring

As previously discussed, it is important to ensure that all security surveillance tools (IDS, IPS, SIEM, firewalls) are back online and recording activities and reporting as they should be,

as discussed in Chapter 11. Moreover, even after you have taken all the steps described thus far, consider using a vulnerability scanner to scan the affected devices or network segments. Before you do so, make sure you have updated the scanner so that it can recognize the latest vulnerabilities and threats. This will help catch any lingering vulnerabilities that may still be present.

EXAM PREPARATION TASKS As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 16-2 lists a reference of these key topics and the page numbers on which each is found.

Table 16-2 Key Topics for Chapter 16

Key Topic Element | Description | Page Number

Figure 16-1 | Incident response process | 453

Bulleted list | Key incident forms | 454

Bulleted list | Recovery terminology | 455

Bulleted list | Criticality levels | 456

Bulleted list | Asset value considerations | 456

Bulleted list | Reverse engineering techniques | 457

Bulleted list | Segmenting at Layer 2 | 459

Bulleted list | Disposal considerations | 460

Bulleted list | Data removal methods | 461

Bulleted list | Lessons learned considerations | 464

Bulleted list | Incident summary report considerations | 464

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

incident form
call list/escalation list
scope
maximum tolerable downtime (MTD)
mean time to repair (MTTR)
mean time between failures (MTBF)
recovery time objective (RTO)
work recovery time (WRT)
recovery point objective (RPO)
reverse engineering
disassembly
decompiling
debugging
kernel debugger
usermode debugger
data correlation
segmentation
isolation
sanitization
clearing
purging
destruction
lessons learned report
incident summary report
indicator of compromise (IoC)

REVIEW QUESTIONS 1. When security incidents occur, the quality of the response is directly related to the amount of and quality of the ____________. 2. List the steps, in order, of the incident response process. 3. Match the following terms with their definitions.

Terms | Definitions

Maximum tolerable downtime (MTD) | The estimated amount of time a device will operate before a failure occurs

Mean time to repair (MTTR) | The shortest time period after a disaster or disruptive event within which a resource or function must be restored in order to avoid unacceptable consequences

Mean time between failures (MTBF) | The maximum amount of time that an organization can tolerate a single resource or function being down

Recovery time objective (RTO) | The average time required to repair a single resource or function

4. ____________________ involves eliminating any residual danger or damage to the network that still might exist.

5. List at least two considerations that can be used to determine an asset's value.

6. Match the following terms with their definitions.

Terms | Definitions

Segmentation | Making the data unreadable even with advanced forensic techniques

Sanitization | Removing data from the media so that it cannot be reconstructed using normal file recovery techniques and tools

Clearing | Removing all traces of a threat by overwriting the drive multiple times

Purging | Limiting the scope of an incident by leveraging existing segments of the network as barriers to prevent the spread to other segments

7. The _______________________ should indicate under what circumstances individuals should be contacted to avoid unnecessary alerts and to keep the process moving in an organized manner.

8. List at least one way the binary malware file can be made readable.

9. Match the following terms with their definitions.

Terms | Definitions

Disassembly | Retracing the steps in an incident, as seen from the log

Reverse engineering | Process that attempts to reconstruct the high-level language source code

Debugging | Stepping through the code interactively

Decompiling | Reading the machine code into memory and then outputting each instruction as a text string

10. ______________________ are behaviors and activities that precede or accompany a security incident.

Chapter 17

Analyzing Potential Indicators of Compromise

This chapter covers the following topics related to Objective 4.3 (Given an incident, analyze potential indicators of compromise) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Network-related indicators of compromise: Includes bandwidth consumption, beaconing, irregular peer-to-peer communication, rogue device on the network, scan/sweep, unusual traffic spike, and common protocol over non-standard port.

Host-related indicators of compromise: Covers processor consumption, memory consumption, drive capacity consumption, unauthorized software, malicious process, unauthorized change, unauthorized privilege, data exfiltration, abnormal OS process behavior, file system change or anomaly, registry change or anomaly, and unauthorized scheduled task.

Application-related indicators of compromise: Includes anomalous activity, introduction of new accounts, unexpected output, unexpected outbound communication, service interruption, and application log.

Indicators of compromise (IoCs) are somewhat like clues left at the scene of a crime, except that they also include clues that preceded the crime. IoCs help us anticipate security issues and reconstruct the process that caused the security issue or breach. This chapter examines some common IoCs and what they might indicate.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these nine self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 17-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A. Table 17-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Questions

Network-Related Indicators of Compromise | 1, 2, 3

Host-Related Indicators of Compromise | 4, 5, 6

Application-Related Indicators of Compromise | 7, 8, 9

1. Which of the following IoCs is most likely from a DoS attack?

1. Beaconing
2. Irregular peer-to-peer communication
3. Bandwidth consumption
4. Rogue device on the network

2. Which of the following IoCs is most likely an indication of a botnet?

1. Beaconing
2. Irregular peer-to-peer communication
3. Bandwidth consumption
4. Rogue device on the network

3. Which of the following is used to locate live devices?

1. Ping sweep
2. Port scan
3. Pen test
4. Vulnerability test

4. Which of the following metrics cannot be found in Windows Task Manager?

1. Memory consumption
2. Drive capacity consumption
3. Processor consumption
4. Unauthorized software

5. Which of the following utilities is a freeware task manager that offers more functionality than Windows Task Manager?

1. System Information
2. Process Explorer
3. Control Panel
4. Performance

6. Which of the following is a utility built into the Windows 10 operating system that checks for system file corruption?

1. TripWire
2. System File Checker
3. sigver
4. SIEM

7. Which of the following might be an indication of a backdoor?

1. Introduction of new accounts
2. Unexpected output
3. Unexpected outbound communication
4. Anomalous activity

8. Within which of the following tools is the Application log found?

1. Event Viewer
2. Performance
3. System Information
4. App Locker

9. Which of the following is not an application-related IoC?

1. Introduction of new accounts
2. Unexpected output
3. Unexpected outbound communication
4. Beaconing

FOUNDATION TOPICS

NETWORK-RELATED INDICATORS OF COMPROMISE

Security analysts, whether operating as first responders or in a supporting role analyzing issues, should be aware of common indicators of compromise. Moreover, they should be aware of the types of incidents implied by each IoC, as this can lead to a quicker and more accurate choice of action when time is of the essence. It is helpful to

examine these IoCs in relation to the component that is displaying them. Certain types of network activity are potential indicators of security issues; the following sections describe the most common network-related symptoms.

Bandwidth Consumption

Whenever bandwidth usage is above normal and there is no known legitimate activity generating the traffic, you should suspect security issues that generate unusual amounts of traffic, such as denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks. For this reason, benchmarks should be created for normal bandwidth usage at various times of day; alerts can then be set to fire when activity rises by a specified percentage at those times. Many free network bandwidth monitoring tools are available, among them BitMeter OS, FreeMeter Bandwidth Monitor, BandwidthD, and PRTG Network Monitor. Anomaly-based intrusion detection systems can also "learn" normal traffic patterns and raise alerts when unusual traffic is detected. Figure 17-1 shows an example of setting an alert in BitMeter.

Figure 17-1 Setting an Alert in BitMeter
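The baseline-and-threshold alerting described above can be sketched as a simple comparison. In this hypothetical Python example, both mappings are assumed to come from your own monitoring history, and the 50 percent threshold is an arbitrary choice:

```python
def bandwidth_alerts(samples, baseline, threshold_pct=50):
    """Flag hours whose measured usage exceeds the baseline for that
    hour by more than threshold_pct percent.

    Both arguments map hour-of-day to Mbps and are assumed to come
    from your own monitoring history; the threshold is arbitrary.
    """
    alerts = []
    for hour, mbps in sorted(samples.items()):
        expected = baseline.get(hour)
        if expected is not None and mbps > expected * (1 + threshold_pct / 100):
            alerts.append((hour, mbps, expected))
    return alerts
```

Commercial monitors implement the same idea continuously, often with separate baselines for weekdays and weekends.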

Beaconing

Beaconing refers to traffic that leaves a network at regular intervals. This type of traffic could be generated by compromised hosts that are attempting to communicate with (or "call home" to) the malicious party that compromised the host. While there are security products that can identify beacons, including firewalls, intrusion detection systems, web proxies, and SIEM systems, creating and maintaining baselines of activity will help you identify beacons that occur during periods of otherwise no activity (for example, at night). When this type of traffic is detected, you should search the local source device for scripts that may be generating these calls home.

Irregular Peer-to-Peer Communication

While some traffic between peers within a network is normal, irregular peer-to-peer communications may indicate a security issue. At the very least, illegal file sharing could be occurring; at worst, this peer-to-peer (P2P) communication could be the result of a botnet. Peer-to-peer botnets differ from normal botnets in their structure and operation. Figure 17-2 shows the structure of a traditional botnet. In this scenario, all the zombies communicate directly with the command and control server, which is located outside the network. The limitation of this arrangement, and the issue that gives rise to peer-to-peer botnets, is that devices behind a NAT server or proxy server cannot participate; only devices that can be reached externally can do so.

Figure 17-2 Traditional Botnet

In a peer-to-peer botnet, devices that can be reached externally are compromised and run server software that turns them into command and control servers for devices recruited internally that cannot reach the external command and control server. Figure 17-3 shows this arrangement.

Figure 17-3 Peer-to-Peer Botnet

Regardless of whether peer-to-peer traffic is used as part of a botnet or simply as a method of file sharing, it presents the following security issues:

The spread of malicious code that may be shared along with the file

Inadvertent exposure of sensitive material located in unsecured directories

Actions taken by the P2P application that make a device more prone to attack, such as opening ports

Network DoS attacks created by large downloads

Potential liability from pirated intellectual property

Because of the dangers, many organizations choose to prohibit the use of P2P applications and block common port numbers used by these applications at the firewall. Another helpful

remediation is to keep all anti-malware software up to date in case malware is transmitted through the use of P2P applications.

Rogue Device on the Network

Any time new devices appear on a network, there should be cause for suspicion. While users may introduce these devices innocently, there are also a number of malicious reasons for them to be on the network. The following types of illegitimate devices may be found on a network:

Wireless key loggers: These collect information and transmit it to the criminal via Bluetooth or Wi-Fi.

Wi-Fi and Bluetooth hacking gear: This gear is designed to capture both Bluetooth and Wi-Fi transmissions.

Rogue access points: Rogue APs are designed to lure your hosts into a connection for a peer-to-peer attack.

Rogue switches: These switches can attempt to create a trunk link with a legitimate switch, thus providing access to all VLANs.

Mobile hacking gear: This gear allows a malicious individual to use software along with software-defined radios to trick cell phone users into routing connections through a fake cell tower.

The actions required to detect or prevent rogue devices depend on the type of device. With respect to rogue switches, ensure that all ports that are required to be trunks are “hard coded” as trunks and that Dynamic Trunking Protocol (DTP) is disabled on all switch ports. With respect to rogue wireless access points, the best solution is a wireless intrusion prevention system (WIPS). These systems can not only alert you when any unknown device is in the area (APs and stations) but can take a number of actions to prevent security issues, including the following:

Locate a rogue AP by using triangulation when three or more sensors are present

Deauthenticate any stations that have connected to an "evil twin"

Detect denial-of-service attacks

Detect man-in-the-middle and client-impersonation attacks
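At its simplest, rogue AP detection is a comparison of observed BSSIDs against an inventory of sanctioned ones. This Python sketch only flags unknowns; a real WIPS also locates and contains them, as described above:

```python
def find_rogue_aps(observed, authorized):
    """Return BSSIDs seen in a wireless survey that are not in the
    inventory of sanctioned access points."""
    return sorted(set(observed) - set(authorized))
```

For example, a nightly job could feed the survey results from the WIPS sensors into a check like this and open a ticket for each unknown BSSID.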

Some examples of these tools include Mojo Networks AirTight WIPS, HP RFProtect, Cisco Adaptive Wireless IPS, Fluke Networks AirMagnet Enterprise, HP Mobility Security IDS/IPS, and Zebra Technologies AirDefense.

Scan/Sweep

One of the early steps in a penetration test is to scan or sweep the network. If no known penetration test is underway but a scan or sweep is occurring, it may indicate that a malicious individual is scanning in preparation for an attack. The following are the most common of these scans:

Ping sweeps: Also known as ICMP sweeps, ping sweeps use ICMP to identify all live hosts by pinging all IP addresses in the known network. All devices that answer are up and running.

Port scans: Once all live hosts are identified, a port scan attempts to connect to every port on each device and report which ports are open, or "listening."

Vulnerability scans: Vulnerability scans are more comprehensive than the other types of scans in that they identify both open ports and security weaknesses. The good news is that an uncredentialed scan, in which the scanner lacks administrative privileges on the device it is scanning, exposes less information than a credentialed scan, so an outside attacker typically learns less than an authorized internal scan would.
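A ping sweep leaves a recognizable trace in flow or IDS data: one source touching many distinct destinations in a short window. This Python sketch illustrates the idea; the event format and the 20-target threshold are assumptions for illustration:

```python
from collections import defaultdict

def detect_sweeps(icmp_events, min_targets=20):
    """Report sources that ping an unusually large number of distinct
    addresses in one observation window.

    icmp_events is an iterable of (src_ip, dst_ip) pairs, e.g. parsed
    from flow records; the 20-target threshold is a tunable assumption.
    """
    targets = defaultdict(set)
    for src, dst in icmp_events:
        targets[src].add(dst)
    return {src for src, dsts in targets.items() if len(dsts) >= min_targets}
```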

Unusual Traffic Spike

Any unusual spikes in traffic that are not expected should be cause for alarm. Just as an increase in bandwidth usage may indicate DoS or DDoS activity, unusual spikes in traffic may also indicate this type of activity. Again, know what your traffic patterns are and create a baseline of this traffic rhythm. With traffic spikes, there are usually accompanying symptoms such as network slowness and, potentially, alarms from any IPSs or IDSs you have deployed. Keep in mind that there are other legitimate reasons for traffic spikes. The following are some of the normal activities that can cause these spikes:

Backup traffic in the LAN

Virus scanner updates

Operating system updates

Mail server issues

Common Protocol over Non-standard Port

Common protocols such as FTP, SMTP, and SNMP use standardized default port numbers, but it is possible to run these protocols over different ports. Whenever you discover this being done, treat the transmission with suspicion: there is often no reason to use a non-standard port unless you are trying to obscure what you are doing, and doing so can also evade ACLs that block traffic on the default ports. Be aware, though, that running a common protocol over a non-standard port is also used legitimately, for example to deflect DoS attacks aimed at well-known ports by shifting the service to a non-standard port number. It is a technique used by both sides.
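Flagging a common protocol on a non-standard port is straightforward when your monitoring identifies protocols independently of the port (for example, via deep packet inspection). This Python sketch assumes a hypothetical export of (protocol, server_port) pairs; the expected-port table is illustrative:

```python
# Illustrative defaults; adjust to your organization's policy.
EXPECTED_PORTS = {"ftp": 21, "smtp": 25, "snmp": 161, "http": 80, "https": 443}

def port_mismatches(flows):
    """Flag flows where the identified protocol is not on its standard
    server port. `flows` is an iterable of (protocol, server_port)
    pairs from a hypothetical flow/DPI export.
    """
    return [(proto, port) for proto, port in flows
            if proto in EXPECTED_PORTS and port != EXPECTED_PORTS[proto]]
```

Since legitimate services are sometimes moved deliberately, the hits from a check like this are leads to investigate rather than confirmed incidents.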

HOST-RELATED INDICATORS OF COMPROMISE

While many indicators of compromise are network related, some indicate that something is wrong at the system or host level. These are behaviors of a single system or host rather than network symptoms.

Processor Consumption

When the processor is very busy with little or nothing running to generate the activity, it could be a sign that the processor is working on behalf of malicious software. Processor consumption was covered in Chapter 13, "The Importance of Proactive Threat Hunting."

Memory Consumption

Another key indicator of a compromised host is increased memory consumption. Memory consumption was also covered in Chapter 13.

Drive Capacity Consumption

Available disk space on the host decreasing for no apparent reason is cause for concern. It could be that the host is storing information to be transmitted at a later time. Some malware instead causes an increase in available drive space by deleting files. Finally, in some cases the purpose is to fill the drive as part of a DoS or DDoS attack. One difficulty is that the drive is typically filled with files that cannot be seen or that are hidden. When users report a sudden filling of their hard drive, or even a slow buildup over time that cannot be accounted for, you should scan the device for malware in Safe Mode. Scanning with multiple products is advised as well.

Unauthorized Software

The presence of any unauthorized software should be another red flag. If you have invested in a vulnerability scanner, you can use it to create a list of installed software to compare against a list of authorized software. Unfortunately, many types of malware do a great job of escaping detection. One way to prevent unauthorized software is through Windows AppLocker. With this tool you can create a whitelist, which specifies the only applications that are allowed, or a blacklist, which specifies applications that cannot be run. Figure 17-4 shows a Windows AppLocker rule being created. This particular rule is based on the path to the application, but a rule could also be based on the publisher of the application or on a hash of the application file. This rule is set to allow the application in the path, but it could also be set to deny it. Once the policy is created, it can be applied as widely as desired in the Active Directory infrastructure.

Figure 17-4 Create Executable Rules

The following are additional general guidelines for preventing unwanted software:

Keep the granting of administrative privileges to a minimum.

Audit the presence and use of applications. (AppLocker can do this.)
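Comparing an installed-software inventory against the approved list, as described above, can be done with a simple set difference. This Python sketch matches on bare names, which is a simplification; AppLocker rules match on path, publisher, or file hash:

```python
def unauthorized_software(installed, approved):
    """Diff an installed-software inventory (for example, a
    vulnerability scanner export) against the approved list.

    Matching on bare, case-insensitive names is a simplification;
    AppLocker rules match on path, publisher, or file hash.
    """
    approved_lower = {name.lower() for name in approved}
    return sorted({name for name in installed
                   if name.lower() not in approved_lower})
```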

Malicious Process

Malicious programs use processes to access the CPU, just as normal programs do, and these processes are considered malicious processes. You can sometimes locate processes that are consuming CPU or memory by using Task Manager, but many malware programs don't show up in Task Manager. Process Explorer or a similar tool may give better results. If you locate an offending process and end it, remember that the program is still there; you need to locate it and delete all of its associated files and registry entries.

Unauthorized Change

If an organization has a robust change control process, there should be no unauthorized changes made to devices. Whenever a user reports an unauthorized change to a device, it should be investigated. Many malicious programs make changes that may be apparent to the user: missing files, modified files, new menu options, strange error messages, and odd system behavior are all indications of unauthorized changes.

Unauthorized Privilege

Unauthorized changes can be the result of privilege escalation. Check all system accounts for changes to the permissions and rights that should be assigned, paying special attention to new accounts with administrative privileges. When assigning permissions, always exercise the concept of least privilege. Also ensure that account reviews take place on a regular basis to

identify privileges that have been escalated and accounts that are no longer needed.

Data Exfiltration

Data exfiltration is the theft of data from a device. Any reports of missing or deleted data should be investigated. In some cases the data is still present but has been copied and transmitted to the attacker. Software tools are available to help track the movement of data in transmissions.

Abnormal OS Process Behavior

When an operating system is behaving strangely, it could be that the operating system needs to be reinstalled or that it has been compromised by malware in some way. While all operating systems occasionally have issues, persistent issues, or issues that are rarely or never otherwise seen, could indicate a compromised operating system.

File System Change or Anomaly

Changes to the file system, especially to system files (those that are part of the operating system), are not a good sign. System files should not change from the day the operating system was installed; if they do, it is an indication of malicious activity. Many systems offer the ability to verify the integrity of system files. For example, System File Checker (SFC) is a utility built into the Windows 10 operating system that checks for and repairs operating system file corruption.

Registry Change or Anomaly

Most registry changes are made through tools such as Control Panel; changes are rarely made directly with the Registry Editor. Changes to registry settings are common when a compromise has occurred. Such changes are not obvious and can remain hidden for long periods

of time. You need tools to help identify infected settings in the registry and to identify the last saved settings. Examples include Microsoft’s Sysinternals Autoruns and Silent Runners.vbs. Unauthorized Scheduled Task In some cases, malware can generate a task that is scheduled to occur on a regular basis, like communicating back to the hacker at certain intervals or copying file locations at certain intervals. Any scheduled task that was not configured by the local team is a sign of compromise. Access to Scheduled Tasks can be controlled through the use of Group Policy.
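Several of the host-related indicators above (file system changes, registry changes) come down to comparing current state against a known-good baseline. The following is a minimal sketch of that idea for files, using only Python's standard library; a real deployment would rely on purpose-built tools such as SFC or a commercial file integrity monitor, so treat this as an illustration of the technique, not a replacement.

```python
import hashlib
from pathlib import Path

def hash_file(path):
    """Return the SHA-256 digest of a file, read in 64 KB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root):
    """Record a digest for every file under the given directory."""
    return {str(p): hash_file(p) for p in Path(root).rglob("*") if p.is_file()}

def compare(baseline, current):
    """Report files added, removed, or modified since the baseline was taken."""
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }
```

Any entry in the "modified" list for a system file that should never change is exactly the kind of anomaly described above.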

APPLICATION-RELATED INDICATORS OF COMPROMISE

In some cases, symptoms are present not on the network or in the activities of the host operating system but in the behavior of a compromised application. Some of these indicators are covered in the following sections.

Anomalous Activity

When an application is behaving strangely and not operating normally, it could be that the application needs to be reinstalled or that it has been compromised by malware in some way. While all applications occasionally have issues, persistent issues, or issues that are rarely or never otherwise seen, could indicate a compromised application.

Introduction of New Accounts

Some applications have their own account database. In that case, you may find accounts that didn't previously exist in the database, and this should be a cause for alarm and investigation. Many application compromises create accounts with administrative access for the use of a malicious individual or for processes operating on that individual's behalf.

Unexpected Output

When the output from a program is not what is normally expected, when dialog boxes are altered, or when the order in which the boxes are displayed is not correct, it is an indication that the application has been altered. Reports of strange output should be investigated.

Unexpected Outbound Communication

Any unexpected outbound traffic should be investigated, regardless of whether it was discovered through network monitoring or through monitoring the host or application. At the application level, it can mean that data is being transmitted back to the malicious individual.

Service Interruption

When an application stops functioning with no apparent problem, or when a distributed application cannot seem to communicate, it can be a sign of a compromised application. Any such interruption that cannot be traced to an application, host, or network failure should be investigated.

Application Log

Chapter 11, "Analyzing Data as Part of Security Monitoring Activities," covered the event logs in Windows. One of those logs, the Application log, is dedicated to errors and issues related to applications and focuses on the operation of Windows applications. Events in this log are classified as error, warning, or information, depending on the severity of the event. The Application log in Windows 10 is shown in Figure 17-5.

Figure 17-5 Application Log

EXAM PREPARATION TASKS

As mentioned in the section "How to Use This Book" in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, "Final Preparation," and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 17-2 lists a reference of these key topics and the page numbers on which each is found.

Table 17-2 Key Topics for Chapter 17

Key Topic Element | Description | Page Number
Bulleted list | Dangers of irregular peer-to-peer communication | 474
Bulleted list | Types of illegitimate devices | 475
Bulleted list | Preventing rogue devices | 475
Bulleted list | Scan and sweep types | 476
Bulleted list | Legitimate reasons for traffic spikes | 476

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

indicators of compromise (IoCs)
beaconing
traditional botnet
peer-to-peer botnet
wireless key loggers
rogue device
wireless intrusion prevention system (WIPS)
ping sweeps
port scans
vulnerability scans
uncredentialed scan
data exfiltration
Application log

REVIEW QUESTIONS

1. __________________ refers to traffic that leaves a network at regular intervals.

2. List at least two network-related IoCs.

3. Match the following terms with their definitions.

Terms | Definitions
Beaconing | Behavior that indicates a possible compromise
Data exfiltration | Device you do not control
Rogue device | Data loss through the network
IoC | Traffic that leaves a network at regular intervals

4. The ____________________ focuses on the operation of Windows applications.

5. List at least two host-related IoCs.

6. Match the following terms with their definitions.

Terms | Definitions
Peer-to-peer botnet | Collects information and transmits it to the criminal via Bluetooth or Wi-Fi
Traditional botnet | Botnet in which devices that can be reached externally are compromised and run server software that turns them into command and control servers for the devices recruited internally that cannot communicate with the external command and control server
Wireless key logger | Not only can alert you when any unknown device (APs and stations) is in the area but can take a number of actions
Wireless intrusion prevention system (WIPS) | Botnet in which all the zombies communicate directly with the command and control server, which is located outside the network

7. _________________ enables you to look at graphs similar to those in Task Manager and identify what caused spikes in the past, which is not possible with Task Manager alone.

8. List at least two application-related IoCs.

9. Match the following terms with their definitions.

Terms | Definitions
Ping sweep | Locates vulnerabilities in systems
Port scan | Scanner lacks administrative privileges on the device it is scanning
Vulnerability scan | Attempts to connect to every port on each device and report which ports are open, or "listening"
Uncredentialed scan | Uses ICMP to identify all live hosts by pinging all IP addresses in the known network

10. A(n) ______________________ is a scan in which the scanner lacks administrative privileges on the device it is scanning.

Chapter 18

Utilizing Basic Digital Forensics Techniques

This chapter covers the following topics related to Objective 4.4 (Given a scenario, utilize basic digital forensics techniques) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Network: Covers network protocol analyzing tools, including Wireshark and tcpdump.
Endpoint: Discusses disk and memory digital forensics.
Mobile: Covers mobile forensics techniques.
Cloud: Includes forensic techniques in the cloud.
Virtualization: Covers issues and forensics unique to virtualization.
Legal hold: Describes the legal concept of retaining information for legal purposes.
Procedures: Covers forensic procedures.
Hashing: Describes forensic verification, including changes to binaries.
Carving: Describes the process of carving, which allows the recovery of files.
Data acquisition: Covers data acquisition processes.

Over time, techniques have been developed to perform a forensic examination of a compromised system or network. Security professionals should use these time-tested processes to guide the approach to gathering digital evidence. This chapter explores many of these basic techniques.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these ten self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 18-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 18-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Question
Network | 1
Endpoint | 2
Mobile | 3
Cloud | 4
Virtualization | 5
Legal Hold | 6
Procedures | 7
Hashing | 8
Carving | 9
Data Acquisition | 10

1. Which of the following is a packet analyzer?
a. Wireshark
b. FTK
c. Helix
d. Cain and Abel

2. Which of the following is a password cracking tool?
a. Wireshark
b. FTK
c. Helix
d. Cain and Abel

3. Which of the following is a data acquisition tool?
a. MD5
b. EnCase
c. Cellebrite
d. dd

4. Which of the following is not true of the cloud-based approach to vulnerability scanning?
a. Installation costs are lower than with a premises-based solution.
b. Maintenance costs are higher than with a premises-based solution.
c. Upgrades are included in a subscription.
d. It does not require the client to provide onsite equipment.

5. Which of the following is false with respect to using forensic tools for the virtual environment?
a. The same tools can be used as in a physical environment.
b. Knowledge of the files that make up a VM is critical.
c. It requires deep knowledge of the log files created by the various components.
d. It requires access to the hypervisor code.

6. Which of the following often requires that organizations maintain archived data for longer periods?
a. Chain of custody
b. Lawful intercept
c. Legal hold
d. Discovery

7. Which of the following items in a digital forensic investigation suite is used to make copies of a hard drive?
a. Imaging utilities
b. Analysis utilities
c. Hashing utilities
d. Password crackers

8. Which of the following is the strongest hashing utility?
a. MD5
b. MD6
c. SHA-1
d. SHA-3

9. Which of the following types of file carving is not supported by Forensic Explorer?
a. Cluster-based file carving
b. Sector-based file carving
c. Byte-based file carving
d. Partition-based file carving

10. Which of the following is a data acquisition tool for smartphones?
a. MD5
b. EnCase
c. Cellebrite
d. dd

FOUNDATION TOPICS

NETWORK

During both environmental reconnaissance testing and forensic investigations, security analysts have a number of tools at their disposal, and it's no coincidence that many of them are the same tools that hackers use. The following sections cover the most common network tools and describe the types of information you can determine about the security of the environment by using each tool.

Wireshark

A packet (or protocol) analyzer can be a standalone device or software running on a laptop computer. One of the most widely used software-based protocol analyzers is Wireshark. It captures raw packets off the interface on which it is configured and allows you to examine each packet. If the data is unencrypted, you can read the data. Figure 18-1 shows an example of Wireshark in use. You can use a protocol analyzer to capture traffic flowing through a network switch by using the port mirroring feature of a switch. You can then examine the captured packets to discern the details of communication flows.

Figure 18-1 Wireshark

In the output shown in Figure 18-1, each line represents a packet captured on the network. You can see the source IP address, the destination IP address, the protocol in use, and the information in the packet. For example, line 511 shows a packet from 10.68.26.15 to 10.68.26.127, which is a NetBIOS name resolution query. Line 521 shows an HTTP packet from 10.68.26.46 to a server at 108.160.163.97. Just after that, you can see the server sending an acknowledgment back. To read a packet, you click the single packet. If the data is cleartext, you can read and analyze it, so you can see how an attacker could use Wireshark to acquire credentials and other sensitive information. Protocol analyzers can be of help whenever you need to see what is really happening on your network. For example, say you have a security policy that mandates certain types of traffic be encrypted, but you are not sure that everyone is complying with this policy. By capturing and viewing the raw packets on the network, you can determine whether users are compliant. Figure 18-2 shows additional output from Wireshark. The top panel shows packets that have been captured. The line numbered 384 has been chosen, and the parts of the packet are shown in the middle pane. In this case, the packet is a response from a DNS server to a device that queried for a resolution. The bottom pane shows the actual data in the packet and, because this packet is not encrypted, you can see that the user was requesting the IP address for www.cnn.com. Any packet not encrypted can be read in this pane.

Figure 18-2 Analyzing Wireshark Output

During environmental reconnaissance testing, you can use packet analyzers to identify traffic that is unencrypted but should be encrypted (as previously mentioned), protocols that should not be in use on the network, and other abnormalities. You can also use these tools to recognize certain types of attacks. Figure 18-3 shows Wireshark output indicating that a SYN flood attack is underway. Notice the lines highlighted in gray. These are all SYN packets sent to 10.1.0.2, and they are part of a SYN flood. Notice that the target device is answering with RST/ACK packets, which indicates that the port is closed (lines highlighted in red). One of the SYN packets (highlighted in blue) is selected, so you can view its details in the bottom pane. You can expand this pane and read the information from all four layers of the TCP/IP model. Currently the transport layer is expanded.

Figure 18-3 SYN Flood Displayed in Wireshark

tcpdump

tcpdump is a command-line tool that can capture packets on Linux and Unix platforms. A version for Windows, windump, is available as well. Using it is a matter of selecting the correct parameters to go with the tcpdump command. For example, the following command enables a capture (-i) on the Ethernet 0 interface:

tcpdump -i eth0

To explore the many other switches that are available for tcpdump, see www.tcpdump.org/tcpdump_man.html.
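Whether a capture comes from Wireshark or tcpdump, the SYN flood pattern shown in Figure 18-3 can also be flagged programmatically: a flood leaves many half-open handshakes per source. The sketch below works on already-decoded packet records rather than raw captures; the record layout and the threshold are assumptions for illustration, not the output format of either tool.

```python
from collections import Counter

def syn_flood_suspects(packets, threshold=100):
    """Flag sources with too many outstanding SYNs.

    packets: iterable of (src_ip, flags) tuples, where flags is a set of
    TCP flag names. A bare SYN opens a half-open handshake; a later ACK
    from the same source closes one.
    """
    outstanding = Counter()
    for src, flags in packets:
        if "SYN" in flags and "ACK" not in flags:
            outstanding[src] += 1
        elif "ACK" in flags:
            outstanding[src] = max(0, outstanding[src] - 1)
    return [src for src, n in outstanding.items() if n > threshold]
```

A legitimate client that completes its three-way handshake nets out to zero, while a flooding source accumulates unanswered SYNs and crosses the threshold.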

ENDPOINT

Forensic tools are used in the process of collecting evidence during a cyber investigation. Many of these tools are used to obtain evidence from endpoints. Included in this category are forensic investigation suites, hashing utilities, password cracking tools, and imaging tools.

Disk

Many tools are dedicated to retrieving evidence from a hard drive. Others are used to work with the data found on the hard drive. The following tools are all related in some fashion to obtaining evidence from a hard drive.

FTK

Forensic Toolkit (FTK) is a commercial toolkit that can scan a hard drive for all sorts of information. This kit also includes an imaging tool and an MD5 hashing utility. It can locate relevant evidence, such as deleted e-mails. It also includes a password cracker and the ability to work with rainbow tables. For more information on FTK, see https://accessdata.com/products-services/forensic-toolkit-ftk.

Helix3

Helix3 comes as a live CD that can be mounted on a host without affecting the data on the host. From the live CD you can acquire evidence and make drive images. This product is sold on a subscription basis by e-fense. For more information on Helix3, see www.efense.com/products.php.

Password Cracking

In the process of executing a forensic investigation, it may be necessary to crack passwords. Often files have been encrypted or password protected by malicious individuals, and you need to attempt to recover the password. There are many password cracking utilities out there; the following are two of the most popular:

John the Ripper: John the Ripper is a password cracker that runs on Unix/Linux as well as macOS systems. It detects weak Unix passwords, though it supports hashes for many other platforms as well. John the Ripper is available in three versions: an official free version, a community-enhanced version (with many contributed patches but not as much quality assurance), and an inexpensive pro version. Hash Suite, a Windows program developed by a contributor to John the Ripper, offers similar functionality on that platform.

Cain and Abel: One of the most well-known password cracking programs, Cain and Abel can recover passwords by sniffing the network; crack encrypted passwords using dictionary, brute-force, and cryptanalysis attacks; record VoIP conversations; decode scrambled passwords; reveal password boxes; uncover cached passwords; and analyze routing protocols. Figure 18-4 shows sample output from this tool. As you can see, an array of attacks can be performed on each located account. This example shows a scan of the local machine for user accounts, in which the program has located three accounts: Admin, Sharpy, and JSmith. By right-clicking the Admin account, you can use the program to perform a brute-force attack, or a number of other attacks, on that account.
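The dictionary attacks these tools perform all share the same core loop: hash each candidate word and compare it with the captured hash. The sketch below uses MD5 purely for illustration (the wordlist and password in the usage are contrived); real crackers add mangling rules, salt handling, and far faster hashing backends.

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Return the first candidate word whose MD5 digest matches the
    target hash, or None if the wordlist is exhausted."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None
```

This is also why long, random passwords resist such attacks: they simply never appear in any wordlist.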

Figure 18-4 Cain and Abel

Imaging

Before you perform any analysis on a target disk in an investigation, you should make a bit-level image of the disk so that you can conduct the analysis on that copy. Therefore, a forensic imaging utility should be part of your toolkit. There are many forensic imaging utilities, and many of the forensic investigation suites contain them. Moreover, many commercial forensic workstations have these utilities already loaded. The dd command is a Linux command that is used to convert and copy files. The U.S. Department of Defense created a fork (a variation) of this command called dcfldd that adds forensic functionality. By simply using dd with the proper parameters and the correct syntax, you can make an image of a disk, but dcfldd enables you to also generate a hash of the source disk at the same time. For example, the following command reads 5 GB from the source drive and writes it to a file called myimage.dd.aa:

dcfldd if=/dev/sourcedrive hash=md5,sha256 hashwindow=5G md5log=hashmd5.txt sha256log=hashsha.txt \
hashconv=after bs=512 conv=noerror,sync split=5G splitformat=aa of=myimage.dd

This example also calculates the MD5 hash and the SHA-256 hash of each 5-GB chunk. It then reads the next 5 GB and names that myimage.dd.ab. The MD5 hashes are stored in a file called hashmd5.txt, and the SHA-256 hashes are stored in a file called hashsha.txt. The block size for transferring has been set to 512 bytes, and in the event of read errors, dcfldd writes zeros.

Memory

Many penetration testing tools perform an operation called a core dump or memory dump. Applications store information in memory, and this information can include sensitive data, passwords, usernames, and encryption keys. Hackers can use memory-reading tools to analyze the entire memory content used by an application. Any vulnerability testing should take this into consideration and utilize the same tools to identify any issues in the memory of an application. The following are some examples of memory-reading tools:

Memdump: This free tool runs on Windows, Linux, and Solaris. It simply creates a bit-by-bit copy of the volatile memory on a system.

KnTTools: This memory acquisition and analysis tool, used with Windows systems, captures physical memory and stores it to a removable drive or sends it over the network to be archived on a separate machine.

FATKit: This popular memory forensic tool automates the process of extracting interesting data from volatile memory. FATKit helps an analyst visualize the objects it finds to aid in understanding the data that the tool was able to find.

Runtime debugging, on the other hand, is the process of using a programming tool to not only identify syntactic problems in code but also discover weaknesses that can lead to memory leaks and buffer overflows. Runtime debugging tools operate by examining and monitoring the use of memory. These tools are specific to the language in which the code was written. Table 18-2 shows examples of runtime debugging tools and the operating systems and languages for which they can be used.

Table 18-2 Runtime Debugging Tools

Tool | Operating Systems | Languages
AddressSanitizer | Linux, macOS | C, C++
Deleaker | Windows (Visual Studio) | C, C++
OutputDebugString Checker by Software Verify | Windows | .NET, C, C++, Java, JavaScript, Lua, Python, Ruby

Memory dumping can help determine what a hacker might be able to learn if she were able to cause a memory dump. Runtime debugging would be the correct approach for discovering syntactic problems in an application’s code or to identify other issues, such as memory leaks or potential buffer overflows.

MOBILE

As the use of mobile devices has increased, so has the involvement of these devices in security incidents. The following tools, among others, have been created to help obtain evidence from mobile devices:

Cellebrite: Cellebrite has found a niche by focusing on collecting evidence from smartphones. It makes extraction devices that can be used in the field and software that does the same things. These extraction devices collect metadata from memory and attempt to access the file system by bypassing the lock mechanism. They don't modify any of the data on the devices, which makes this a forensically "clean" solution. The device looks like a tablet, and you simply connect a phone to it via USB. For more information, see https://www.cellebrite.com.

Susteen Secure View 4: This mobile forensic tool is used by many police departments. It enables users to fully export and report on all information found on the mobile device. It can create evidence reports based only on the information that you find relevant to your case. This includes deleted data, all files (pictures, videos, documents, etc.), messages, and more. See https://www.secureview.us/ for details.

MSAB XRY: This digital forensics and mobile device forensics product by the Swedish company MSAB is used to analyze and recover information from mobile devices such as mobile phones, smartphones, GPS navigation tools, and tablet computers. Check out XRY at https://www.msab.com/products/xry/.

CLOUD

In Chapter 4, "Analyzing Assessment Output," you learned about some cloud tools for vulnerability assessments, and in Chapter 8, "Security Solutions for Infrastructure Management," you learned about cloud anti-malware systems. Let's look a bit more at cloud vulnerability scanning. Cloud-based vulnerability scanning is a service performed from the vendor's cloud and is a good example of Software as a Service (SaaS). The benefits here are the same as the benefits derived from any SaaS offering, that is, no equipment on the part of the subscriber and no footprint in the local network. Figure 18-5 shows a premises-based approach to vulnerability scanning, and Figure 18-6 shows a cloud-based solution. In the premises-based approach, the hardware and/or software vulnerability scanners and associated components are entirely installed on the client premises, while in the cloud-based approach, the vulnerability management platform is in the cloud. Vulnerability scanners for external vulnerability assessments are located at the solution provider's site, with additional scanners on the premises.

Figure 18-5 Premises-Based Scanning

Figure 18-6 Cloud-Based Scanning

The following are the advantages of the cloud-based approach:

Installation costs are low because there is no installation and configuration for the client to complete.

Maintenance costs are low because there is only one centralized component to maintain, and it is maintained by the vendor (not the end client).

Upgrades are included in a subscription.

Costs are distributed among all customers.

It does not require the client to provide onsite equipment.

However, there is a considerable disadvantage to the cloud-based approach: Whereas premises-based deployments store data findings at the organization's site, in a cloud-based deployment, the data resides with the provider. This means the customer is dependent on the provider to ensure the security of the vulnerability data. Qualys is an example of a cloud-based vulnerability scanner. Sensors are placed throughout the network, and they upload data to the cloud for analysis. Sensors can be implemented as dedicated appliances or as software instances on a host. A third option is to deploy sensors as images on virtual machines.

VIRTUALIZATION

In Chapter 8, you learned the basics of virtualization and how this technology is used in the cloud environment. With respect to forensic tools for the virtual environment, the same tools can be used as in a physical environment. However, the key is knowledge of the files that make up a VM and how to locate these files. Each virtualization system has its own filenames and architecture. Each VM is made up of several files.

Another key aspect of successful forensics in the virtual environment is deep knowledge of the log files created by the various components such as the hypervisor and the guest machine. You need to know not only where these files are located but also the purpose of each and how to read and interpret its entries.

LEGAL HOLD

Legal holds are requirements placed on organizations by legal authorities to maintain archived data for longer periods. Data on a legal hold must be properly identified, and the appropriate security controls must be put into place to ensure that the data cannot be tampered with or deleted. An organization should have policies regarding any legal holds that may be in place. Consider the following scenario: An administrator receives notification from the legal department that an investigation is being performed on members of the research department, and the legal department has advised a legal hold on all documents for an unspecified period of time. Most likely this legal hold will require retaining data beyond what the organization's data storage policy and data retention policy specify. If a situation like this arises, the IT staff should take time to document the decision and ensure that the appropriate steps are taken so that the data is retained and stored for a longer period, if needed.

PROCEDURES

In Chapter 16, "Applying the Appropriate Incident Response Procedure," you learned about the incident response process and its steps. Review those steps, as they are important. This section introduces some case management tools that can make the process go more smoothly.

EnCase Forensic

EnCase Forensic is a case (incident) management tool that offers built-in templates for specific types of investigations. These templates are based on workflows, which are the steps to carry out based on the investigation type. A workflow leads you through the steps of triage, collection, decryption, processing, investigation, and reporting of an incident. For more information, see https://www.guidancesoftware.com/encaseforensic.

Sysinternals

Sysinternals is a suite of more than 70 Windows utilities, many of them command-line tools, that can be used for both troubleshooting and security issues. Among these are forensic tools. For more information, see https://technet.microsoft.com/enus/sysinternals/.

Forensic Investigation Suite

A forensic investigation suite is a collection of tools that are commonly used in digital forensic investigations. A quality forensic investigation suite should include the following items:

Imaging utilities: One of the tasks you will be performing is making copies of storage devices. For this you need a disk imaging tool. To make system images, you need to use a tool that creates a bit-level copy of the system. In most cases, you must isolate the system and remove it from production to create this bit-level copy. You should ensure that two copies of the image are retained. One copy of the image will be stored to ensure that an undamaged, accurate copy is available as evidence. The other copy will be used during the examination and analysis steps. Message digests (or hashing digests) should be used to ensure data integrity.

Analysis utilities: You need a tool to analyze the bit-level copy of the system that is created by the imaging utility. Many of these tools are available on the market. Often these tools are included in forensic investigation suites and toolkits, such as the previously introduced EnCase Forensic, FTK, and Helix.

Chain of custody: While hard copies of chain of custody activities should be kept, some forensic investigation suites contain software to help manage this process. These tools can help you maintain an accurate and legal chain of custody for all evidence, with or without hard copy (paper) backup. Some suites perform a dual electronic signature capture that places both signatures in an Excel spreadsheet as proof of transfer. Those signatures are doubly encrypted so that if the spreadsheet is altered in any way, the signatures disappear.

Hashing utilities: These utilities are covered in the next section.

OS and process analysis: These tools focus on the activities of the operating system and the processes that have been executed. While most operating systems have tools of some sort that can report on processes, tools included in a forensic investigation suite have more robust features and capabilities.

Mobile device forensics: Today, many incidents involve mobile devices. You need different tools to acquire the required information from these devices. A forensic investigation suite should contain tools for this purpose. See the earlier "Mobile" section for examples.

Password crackers: Many times investigators find passwords standing in the way of obtaining evidence. Password cracking utilities are required in such instances. Most forensic investigation suites include several password cracking utilities for this purpose. Chapter 4 lists some of these tools.

Cryptography tools: An investigator uses these tools when encountering encrypted evidence, which is becoming more common. Some of these tools can attempt to decrypt the most common types of encryption (for example, BitLocker, BitLocker To Go, PGP, TrueCrypt), and they may also be able to locate decryption keys from RAM dumps and hibernation files.

Log viewers: Finally, because much evidence can be found in the logs located on the device, a robust log reading utility is also valuable. A log viewer should have the ability to read all Windows logs as well as the registry. Moreover, it should also be able to read logs created by other operating systems. See the "Log Review" section of Chapter 11, "Analyzing Data as Part of Security Monitoring Activities."

HASHING

A hash function takes a message of variable length and produces a fixed-length hash value. Hash values, also referred to as message digests, are calculated using the original message. If the receiver calculates the same hash value, the original message is intact. If the receiver calculates a different hash value, the original message has been altered. Hashing was covered in Chapter 8.

Hashing Utilities

You must be able to prove that certain evidence has not been altered during your possession of it. Hashing utilities use hashing algorithms to create a value that can be used later to verify that the information is unchanged. The two most common algorithms used are Message Digest 5 (MD5) and Secure Hashing Algorithm (SHA).

Changes to Binaries

A binary file is a computer file that is not a text file; these files must be interpreted to be read. Executable files are often of this type. Binary files can be verified using hashing in the same manner as described in the prior section.
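Verifying evidence integrity with these algorithms is straightforward using Python's standard hashlib. The sketch below computes MD5 and SHA-256 in a single pass, much as the dcfldd example earlier in the chapter does, and rechecks them against the values recorded at acquisition time; the file names in the usage are illustrative.

```python
import hashlib

def file_digests(path, block_size=65536):
    """Compute MD5 and SHA-256 of a file in one pass, reading in blocks
    so that multi-gigabyte images never have to fit in memory."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            md5.update(block)
            sha256.update(block)
    return md5.hexdigest(), sha256.hexdigest()

def verify(path, recorded_md5, recorded_sha256):
    """True only if both digests still match the acquisition-time values."""
    md5, sha256 = file_digests(path)
    return md5 == recorded_md5 and sha256 == recorded_sha256
```

Even a single appended byte changes both digests, which is exactly the property that lets an examiner prove a binary or image was not altered while in their possession.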

CARVING

Data carving is a technique used when only fragments of data are available and no file system metadata is available. It is a common procedure when performing data recovery, after a storage device failure, for instance. It is also used in forensics. A file signature is a constant numerical or text value used to identify a file format. The object of carving is to identify the file based on this signature information alone.
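Signature matching of this kind can be sketched in a few lines: scan the raw bytes for known magic numbers, with no reliance on file system metadata. Only a handful of well-known signatures are shown below; a production carver recognizes hundreds of formats and must also locate file endings and handle fragmentation.

```python
# Well-known file signatures (magic numbers) and the formats they identify.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",
}

def carve_signatures(data):
    """Return (offset, format) for every known signature found in a raw
    byte stream, sorted by offset."""
    hits = []
    for sig, fmt in SIGNATURES.items():
        start = 0
        while (idx := data.find(sig, start)) != -1:
            hits.append((idx, fmt))
            start = idx + 1
    return sorted(hits)
```

Each hit marks a candidate start-of-file in unallocated space, which the examiner can then attempt to extract and validate.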

Forensic Explorer is a tool for the analysis of electronic evidence and includes a data carving tool that searches for signatures. It offers carving support for more than 300 file types and supports the following:

Cluster-based file carving
Sector-based file carving
Byte-based file carving

Figure 18-7 shows the File Carving dialog box in Forensic Explorer.

Figure 18-7 File Carving in Forensic Explorer

DATA ACQUISITION

Earlier in this chapter, in the section “Forensic Investigation Suite,” you learned about data acquisition tools that should be a part of your forensic toolkit. Please review that section with regard to forensic tools.

EXAM PREPARATION TASKS As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 18-3 lists a reference of these key topics and the page numbers on which each is found.

Table 18-3 Key Topics in Chapter 18

Key Topic Element | Description | Page Number
Figure 18-2 | Analyzing Wireshark output | 489
Bulleted list | Examples of memory-reading tools | 493
Figure 18-5 | Premises-based scanning | 495
Figure 18-6 | Cloud-based scanning | 496
Bulleted list | Advantages of the cloud-based approach | 496
Bulleted list | Tools commonly included in a forensic investigation suite | 498
Section | Description of the data carving forensic technique | 500

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

Wireshark
tcpdump
Forensic Toolkit (FTK)
Helix
John the Ripper
Cain and Abel
imaging
dd
Memdump
KnTTools
FATKit
runtime debugging
Cellebrite
Qualys
legal hold
EnCase Forensic
Sysinternals
forensic investigation suite
carving

REVIEW QUESTIONS

1. ___________________ is a command-line tool that can capture packets on Linux and Unix platforms.
2. List at least one password cracking utility.
3. Match the following terms with their definitions.

Terms | Definitions
Legal hold | Forensic technique used when only fragments of data are available and when no file system metadata is available
Hashing | Often requires that organizations maintain archived data for longer periods
Carving | A command-line tool that can capture packets on Linux and Unix platforms
tcpdump | Process used to determine the integrity of files

4. The DoD created a fork (a variation) of the dd command called ___________ that adds additional forensic functionality.
5. List at least two memory-reading tools.
6. Match the following terms with their definitions.

Terms | Definitions
Forensic Toolkit (FTK) | Live CD with which you can acquire evidence and make drive images
Helix | Linux command that is used to convert and copy files
John the Ripper | A commercial toolkit that can scan a hard drive for all sorts of information
dd | Password cracker that can work in Linux or Unix as well as macOS

7. Cellebrite found a niche by focusing on collecting evidence from ______________.
8. List at least two advantages of the cloud-based approach to vulnerability scanning.
9. Match the following terms with their definitions.

Terms | Definitions
Memdump | Memory acquisition and analysis tool used with Windows systems
KnTTools | A cloud-based vulnerability scanner
FATKit | Memory forensic tool that automates the process of extracting interesting data from volatile memory
Qualys | Free tool that runs on Windows, Linux, and Solaris and simply creates a bit-by-bit copy of the volatile memory on a system

10. _____________ often require that organizations maintain archived data for longer periods.

Chapter 19

The Importance of Data Privacy and Protection

This chapter covers the following topics related to Objective 5.1 (Understand the importance of data privacy and protection) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Privacy vs. security: Compares these two concepts as they relate to data privacy and protection.

Non-technical controls: Describes classification, ownership, retention, data types, retention standards, confidentiality, legal requirements, data sovereignty, data minimization, purpose limitation, and non-disclosure agreement (NDA).

Technical controls: Covers encryption, data loss prevention (DLP), data masking, deidentification, tokenization, digital rights management (DRM), geographic access requirements, and access controls.

Addressing data privacy and protection issues has become one of the biggest challenges facing organizations that handle the information of employees, customers, and vendors. This chapter explores those data privacy and protection issues and describes the various controls that can be applied to mitigate them. New data privacy laws, such as the EU's General Data Protection Regulation (GDPR), are being enacted regularly and require new controls to protect data.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these nine self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks.” Table 19-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 19-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Questions
Privacy vs. Security | 1, 2, 3
Non-technical Controls | 4, 5, 6
Technical Controls | 7, 8, 9

1. Which of the following relates to rights to control the sharing and use of one’s personal information?
1. Security
2. Privacy
3. Integrity
4. Confidentiality

2. Which of the following is a risk assessment that determines risks associated with PII collection?
1. MTA
2. PIA
3. RSA
4. SLA

3. Third-party personnel should be familiarized with organizational policies related to data privacy and should sign which of the following?
1. NDA
2. MOU
3. ICA
4. SLA

4. Which of the following is a measure of how freely data can be handled?
1. Sensitivity
2. Privacy
3. Secrecy
4. Criticality

5. Which of the following affects any organizations that handle cardholder information for the major credit card companies?
1. GLBA
2. PCI DSS
3. SOX
4. HIPAA

6. Which of the following affects all healthcare facilities, health insurance companies, and healthcare clearinghouses?
1. GLBA
2. PCI DSS
3. SOX
4. HIPAA

7. Which control provides data confidentiality?
1. Encryption
2. Hashing
3. Redundancy
4. Digital signatures

8. Which control provides data integrity?
1. Encryption
2. Hashing
3. Redundancy
4. Digital signatures

9. Which of the following means altering data from its original state to protect it?
1. Deidentification
2. Data masking
3. DLP
4. Digital signatures

FOUNDATION TOPICS

PRIVACY VS. SECURITY

Privacy relates to rights to control the sharing and use of one’s personal information, commonly called personally identifiable information (PII), as described in Chapter 15, “The Incident Response Process.” Privacy of data relies heavily on the security controls that are in place. While organizations can provide security without ensuring data privacy, data privacy cannot exist without the appropriate security controls.

A privacy impact assessment (PIA) is a risk assessment that determines risks associated with PII collection, use, storage, and transmission. A PIA should determine whether appropriate PII controls and safeguards are implemented to prevent PII disclosure or compromise. The PIA should evaluate personnel, processes, technologies, and devices. Any significant change should result in another PIA review.

As part of preventing privacy policy violations, any contracted third parties that have access to PII should be assessed to ensure that the appropriate controls are in place. In addition, third-party personnel should be familiarized with organizational policies and should sign non-disclosure agreements (NDAs).

NON-TECHNICAL CONTROLS

Non-technical controls are implemented without technology and consist of the organization’s policies and procedures for maintaining data privacy and protection. This section describes some of these non-technical controls, which are also sometimes called administrative controls. Non-technical controls are covered in detail in Chapter 3, “Vulnerability Management Activities.”

Classification

Data classification helps to ensure that appropriate security measures are taken with regard to sensitive data types and is covered in Chapter 13, “The Importance of Proactive Threat Hunting.”

Ownership

In Chapter 21, “The Importance of Frameworks, Policies, Procedures, and Controls,” you will learn more about policies that act as non-technical controls. One of those policies is the data ownership policy, which is closely related to the data classification policy (covered in Chapter 13). Often, the two policies are combined because, typically, the data owner is tasked with classifying the data. Therefore, the data ownership policy covers how the owner of each piece of data or each data set is identified. In most cases, the creator of the data is the owner, but some organizations may deem all data created by a department to be owned by the department head. Another way a user may become the owner of data is by introducing into the organization data the user did not create; perhaps the data was purchased from a third party. In any case, the data ownership policy should outline both how data ownership occurs and the responsibilities of the owner with respect to determining the data classification and identifying those with access to the data.

Retention

Another policy that acts as a non-technical control is the data retention policy, which outlines how various data types must be retained and may rely on the data classifications described in the data classification policy. Data retention requirements vary based on several factors, including data type, data age, and legal and regulatory requirements. Security professionals must understand where data is stored and the type of data stored. In addition, security professionals should provide guidance on managing and archiving data securely. Therefore, each data retention policy must be established with the help of organizational personnel.

A data retention policy usually identifies the purpose of the policy, the portion of the organization affected by the policy, any exclusions to the policy, the personnel responsible for overseeing the policy, the personnel responsible for data destruction, the data types covered by the policy, and the retention schedule. Security professionals should work with data owners to develop the appropriate data retention policy for each type of data the organization owns. Examples of data types include, but are not limited to, human resources data, accounts payable/receivable data, sales data, customer data, and e-mail.
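A retention schedule of the kind described above can be captured in a simple data structure and checked programmatically. The data types and periods below are invented placeholders; real retention periods must come from the organization's legal and regulatory requirements, not from this sketch.

```python
from datetime import date, timedelta

# Hypothetical retention schedule (placeholder periods, for illustration only).
RETENTION_SCHEDULE = {
    "human_resources": timedelta(days=7 * 365),
    "accounts_payable": timedelta(days=7 * 365),
    "sales": timedelta(days=5 * 365),
    "customer": timedelta(days=3 * 365),
    "email": timedelta(days=2 * 365),
}

def is_past_retention(data_type, created, today=None):
    """True if a record of the given type has exceeded its retention period
    and is therefore eligible for the destruction process the policy names."""
    today = today or date.today()
    return (today - created) > RETENTION_SCHEDULE[data_type]
```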

Designing a data retention policy is covered more fully in the upcoming section “Retention Standards.”

Data Types

Categorizing data types is a non-technical control for ensuring data privacy and protection. To properly categorize data types, a security analyst should be familiar with some of the most sensitive types of data that the organization may possess, as described in the sections that follow.

Personally Identifiable Information (PII)

When considering technology and its use today, privacy is a major concern of users. This privacy concern usually involves three areas: which personal information can be shared with whom, whether messages can be exchanged confidentially, and whether and how one can send messages anonymously. Privacy is an integral part of any security measures that an organization takes. As part of the security measures that organizations must take to protect privacy, PII must be understood, identified, and protected. Refer to Chapter 15 for more details about protecting PII.

Personal Health Information (PHI)

PHI is a particular type of PII that an organization, particularly a healthcare organization, may possess. Chapter 15 also provides more details about protecting PHI.

Payment Card Information

Another type of PII that almost all companies possess is credit card data. Holders of this data must protect it. Many of the highest-profile security breaches that have occurred have involved the theft of this data. The Payment Card Industry Data Security Standard (PCI DSS) applies to this type of data. The handling of payment card information is covered in Chapter 5, “Threats and Vulnerabilities Associated with Specialized Technology.”

Retention Standards

Retention standards are another non-technical control for ensuring data privacy and protection. Retention standards are covered in Chapter 21.

Confidentiality

The three fundamentals of security are confidentiality, integrity, and availability (CIA). Most security issues result in a violation of at least one facet of the CIA triad. Understanding these three security principles will help security professionals ensure that the security controls and mechanisms implemented protect at least one of these principles.

To ensure confidentiality, you must prevent the disclosure of data or information to unauthorized entities. As part of confidentiality, the sensitivity level of data must be determined before any access controls are put in place. Data with a higher sensitivity level will have more access controls in place than data with a lower sensitivity level. The opposite of confidentiality is disclosure.

Most security professionals consider confidentiality as it relates to data on a network or devices. However, data can also exist in printed format. Appropriate controls should be put into place to protect data on a network, but data in its printed format needs to be protected, too, which involves implementing data disposal policies. Examples of controls that improve confidentiality include encryption, steganography, access control lists (ACLs), and data classification.

Legal Requirements

Legal requirements are a form of non-technical control that can mandate technical controls. In some cases, the design of controls will be driven by legal requirements that apply to the organization based on the industry or sector in which it operates. In Chapter 15 you learned the importance of recognizing legal responsibilities during an incident response. Let’s examine some of the laws and regulations that may come into play.

The United States and European Union (EU) both have established laws and regulations that affect organizations that operate within their area of governance. While security professionals should strive to understand laws and regulations, they may not have the background to fully interpret these laws and regulations to protect their organization. In such cases, security professionals should work with legal representation regarding legislative or regulatory compliance. Security analysts must be aware of the laws and, at a minimum, understand how the laws affect the operations of their organization. For example, a security professional working for a healthcare facility would need to understand all security guidelines in HIPAA and PPACA, described next. The following are the most significant laws that may affect an organization and its security policy:

Sarbanes-Oxley Act (SOX): Also known as the Public Company Accounting Reform and Investor Protection Act of 2002, SOX affects any organization that is publicly traded in the United States. It controls the accounting methods and financial reporting for these organizations and stipulates penalties, and even jail time, for executive officers.

Health Insurance Portability and Accountability Act (HIPAA): Also known as the Kennedy-Kassebaum Act, HIPAA affects all healthcare facilities, health insurance companies, and healthcare clearinghouses. It is enforced by the Office for Civil Rights (OCR) of the Department of Health and Human Services (HHS). It provides standards and procedures for storing, using, and transmitting medical information and healthcare data. HIPAA overrides state laws unless the state laws are stricter. It was later amended by provisions of the Patient Protection and Affordable Care Act (PPACA), commonly known as Obamacare.

Gramm-Leach-Bliley Act (GLBA) of 1999: Affects all financial institutions, including banks, loan companies, insurance companies, investment companies, and credit card providers. It provides guidelines for securing all financial information and prohibits sharing financial information with third parties. This act directly affects the security of PII.

Computer Fraud and Abuse Act (CFAA) of 1986: Affects any entities that engage in hacking of “protected computers,” as defined in the act. It was amended in 1989, 1994, and 1996; in 2001 by the USA PATRIOT Act (listed below); in 2002; and in 2008 by the Identity Theft Enforcement and Restitution Act. A “protected computer” is a computer used exclusively by a financial institution or the U.S. government, or used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States. Due to the interstate nature of most Internet communication, any ordinary computer, including a cell phone, has come under the jurisdiction of the law. The law includes several definitions of hacking, including knowingly accessing a computer without authorization; intentionally accessing a computer to obtain financial records, U.S. government information, or protected computer information; and transmitting fraudulent commerce communication with the intent to extort.

Federal Privacy Act of 1974: Affects any computer that contains records used by a federal agency. It provides guidelines on the collection, maintenance, use, and dissemination of PII about individuals that is maintained in systems of records by federal agencies.

Foreign Intelligence Surveillance Act (FISA) of 1978: Affects law enforcement and intelligence agencies. It was the first act to provide procedures for the physical and electronic surveillance and collection of “foreign intelligence information” between “foreign powers” and “agents of foreign powers” and applied only to traffic within the United States. It was amended by the USA PATRIOT Act of 2001 and the FISA Amendments Act of 2008.

Electronic Communications Privacy Act (ECPA) of 1986: Affects law enforcement and intelligence agencies. It extended government restrictions on wiretaps from telephone calls to include transmissions of electronic data by computer and prohibited access to stored electronic communications. It was amended by the Communications Assistance to Law Enforcement Act (CALEA) of 1994, the USA PATRIOT Act of 2001, and the FISA Amendments Act of 2008.

Computer Security Act of 1987: Superseded in 2002 by FISMA (listed below), this was the first law to require a formal computer security plan. It was written to protect and defend the sensitive information in federal government systems and provide security for that information. It also placed requirements on government agencies to train employees and identify sensitive systems.

United States Federal Sentencing Guidelines of 1991: Affects individuals and organizations convicted of felonies and serious (Class A) misdemeanors. It provides guidelines to prevent sentencing disparities that existed across the United States.

Communications Assistance for Law Enforcement Act (CALEA) of 1994: Affects law enforcement and intelligence agencies. It requires telecommunications carriers and manufacturers of telecommunications equipment to modify and design their equipment, facilities, and services to ensure that they have built-in surveillance capabilities, allowing federal agencies to monitor all telephone, broadband Internet, and voice over IP (VoIP) traffic in real time.

Personal Information Protection and Electronic Documents Act (PIPEDA): Affects how private-sector organizations collect, use, and disclose personal information in the course of commercial business in Canada. The act was written to address EU concerns about the security of PII in Canada. The law requires organizations to obtain consent when they collect, use, or disclose personal information and to have personal information policies that are clear, understandable, and readily available.

Basel II: Affects financial institutions. It addresses minimum capital requirements, supervisory review, and market discipline. Its main purpose is to protect against risks that banks and other financial institutions face.

Federal Information Security Management Act (FISMA) of 2002: Affects every federal agency. It requires federal agencies to develop, document, and implement an agencywide information security program.

Economic Espionage Act of 1996: Affects companies that have trade secrets and any individuals who plan to use encryption technology for criminal activities. This act covers a multitude of issues because of the way it was structured. A trade secret does not need to be tangible to be protected by this act. Per this law, theft of a trade secret is now a federal crime, and the United States Sentencing Commission must provide specific information in its reports regarding encryption or scrambling technology that is used illegally.

USA PATRIOT Act of 2001: Formally known as Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism, it affects law enforcement and intelligence agencies in the United States. Its purpose is to enhance the investigatory tools that law enforcement can use, including email communications, telephone records, Internet communications, medical records, and financial records. When this law was enacted, it amended several other laws, including FISA and the ECPA of 1986. The USA PATRIOT Act does not restrict private citizens’ use of investigatory tools, although there are some exceptions—for example, if the private citizen is acting as a government agent (even if not formally employed), if the private citizen conducts a search that would require law enforcement to have a warrant, if the government is aware of the private citizen’s search, or if the private citizen is performing a search to help the government.

Health Care and Education Reconciliation Act of 2010: Affects healthcare and educational organizations. This act increased some of the security measures that must be taken to protect healthcare information.

Employee privacy issues and expectation of privacy: Employee privacy issues must be addressed by all organizations to ensure that the organizations are protected from costly legal penalties that result from data breaches. However, organizations must give employees proper notice of any monitoring that might be used. Organizations must also ensure that the monitoring of employees is applied in a consistent manner. Many organizations implement a no-expectation-of-privacy policy that the employee must sign after receiving the appropriate training. This policy should specifically describe any unacceptable behavior. Companies should also keep in mind that some actions are protected by the Fourth Amendment. Security professionals and senior management should consult with legal counsel when designing and implementing any monitoring solution.

European Union: The EU has implemented several laws and regulations that affect security and privacy. The EU Principles on Privacy include strict laws to protect private data. The EU’s Data Protection Directive provides direction on how to follow the laws set forth in the principles. The EU created the Safe Harbor Privacy Principles to help guide U.S. organizations in compliance with the EU Principles on Privacy. The following are some of the guidelines as updated by the General Data Protection Regulation (GDPR). Personal data may not be processed unless there is at least one legal basis to do so. Article 6 states that the lawful purposes are as follows:

The data subject has given consent to the processing of his or her personal data.
To fulfill contractual obligations with a data subject, or for tasks at the request of a data subject who is in the process of entering into a contract.
To comply with a data controller’s legal obligations.
To protect the vital interests of a data subject or another individual.
To perform a task in the public interest or in official authority.
For the legitimate interests of a data controller or a third party, unless these interests are overridden by interests of the data subject or his or her rights according to the Charter of Fundamental Rights (especially in the case of children).

Note Do not confuse the terms safe harbor and data haven. According to the EU, a safe harbor is an entity that conforms to all the requirements of the EU Principles on Privacy. A data haven is a country that fails to legally protect personal data, with the main aim being to attract companies engaged in the collection of the data.

The EU Electronic Security Directive defines electronic signature principles. In this directive, a signature must be uniquely linked to the signer and to the data to which it relates so that any subsequent data change is detectable. The signature must be capable of identifying the signer.
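The directive's requirement that a signature be bound to both the signer and the data, so that any subsequent change is detectable, can be illustrated with a keyed hash (HMAC) from Python's standard library. This is only a sketch of the tamper-detection property; real electronic signatures use asymmetric (public/private key) cryptography rather than a shared key, and the key and message values here are invented.

```python
import hashlib
import hmac

def sign(message: bytes, signer_key: bytes) -> str:
    """Bind a signature to both the signer (via the key) and the data:
    change either one and verification fails."""
    return hmac.new(signer_key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, signer_key: bytes) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(sign(message, signer_key), signature)
```

Altering even one byte of the signed data, or substituting a different signer's key, causes verification to fail, which is exactly the "any subsequent data change is detectable" property the directive requires.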

Data Sovereignty

Data sovereignty is the concept that data stored in digital format is subject to the laws of the country in which the data is located. Affecting this concept are the differing privacy laws and regulations issued by nations and governing bodies. The concept is further complicated by the deployment of cloud solutions. Many countries have adopted legislation that requires customer data to be kept within the country in which the customer resides, but organizations are finding it increasingly difficult to ensure that this is the case when working with service providers and other third parties. Organizations should consult the service-level agreements (SLAs) with these providers to verify compliance.

Keep in mind, however, that the laws of multiple countries may affect the data. For instance, suppose an organization in the United States is using a data center in the United States but the data center is operated by a company from France. The data would then be subject to both U.S. and EU laws and regulations. Another factor is the type of data being stored, as different types of data are regulated differently. Healthcare data and consumer data have vastly different laws that regulate the transportation and storage of data. Security professionals should answer the following questions:

Where is the data stored?
Who has access to the data?
Where is the data backed up?
How is the data encrypted?

The answers to these four questions will help security professionals design a governance strategy for their organization that will aid in addressing any data sovereignty concerns. Remember that the responsibility to meet data regulations falls on both the organization that owns the data and the vendor providing the data storage service, if any.

Data Minimization

Organizations should minimize the amount of personal data they store to what is necessary. An important principle in the European Union’s General Data Protection Regulation (GDPR) is data minimization: data processing should use only as much data as is required to successfully accomplish a given task. Reducing the amount of personal data stored also reduces the attack surface.

Purpose Limitation

Another key principle in the European Union’s GDPR that is finding wide adoption is that of purpose limitation. Personal data collected for one purpose cannot be repurposed without further consent from the individual. For example, data collected to track a disease outbreak cannot be used to identify individuals.

Non-disclosure Agreement (NDA)

In Chapter 15 you learned about various types of intellectual property, such as patents, copyrights, and trade secrets. Most organizations that have trade secrets attempt to protect them by using NDAs. An NDA must be signed by any entity that has access to information that is part of a trade secret. Anyone who signs an NDA will suffer legal consequences if the organization is able to prove that the signer violated it.

TECHNICAL CONTROLS

Technical controls are implemented with technology and include items such as firewalls, access control lists (ACLs), permissions on files and folders, and devices that identify and prevent threats. After an organization understands the threats it faces, it needs to establish their likelihoods and impacts and select controls that address each threat without costing more than the loss the realized threat would cause. The review of these controls should be an ongoing process.

Encryption

In Chapter 8, “Security Solutions for Infrastructure Management,” you learned about encryption and cryptography. These technologies comprise a technical control that can be used to provide the confidentiality objective of the CIA triad. Information assets can be protected from being accessed by unauthorized parties by encrypting data at rest (while stored) and data in transit (when crossing a network). As you also learned, cryptography in the form of hashing algorithms can provide a way to assess data integrity.

Data Loss Prevention (DLP)

Chapter 12, “Implementing Configuration Changes to Existing Controls to Improve Security,” described data loss prevention (DLP) systems. As you learned, DLP systems are used to prevent data exfiltration, which is the intentional or unintentional loss of sensitive data from the network. DLP comprises a strong technical control that protects both integrity and confidentiality.

Data Masking

Data masking means altering data from its original state to protect it. You already learned about two forms of masking: encryption (storing the data in an encrypted form) and hashing (storing a hash value, generated from the data by a hashing algorithm, rather than the data itself). Many passwords are stored as hash values. The following are some other methods of data masking:

Using substitution tables and aliases for the data
Redacting or replacing the sensitive data with a random value
Averaging individual values (adding them and then dividing by the number of values) or aggregating them (totaling them and using only the total value)
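The masking methods just listed can be sketched as small helper functions. These are illustrative only; the function names and mask formats are ours, and real masking tools offer far richer policies.

```python
import statistics

def redact(value: str, keep_last: int = 4) -> str:
    """Replace all but the last few characters with a mask character,
    as is commonly done with card numbers on receipts."""
    return "*" * (len(value) - keep_last) + value[-keep_last:]

def substitute(value: str, alias_table: dict) -> str:
    """Swap the real value for an alias from a substitution table."""
    return alias_table.get(value, "UNKNOWN")

def average(values):
    """Release only the mean, never the individual values."""
    return statistics.mean(values)
```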

Deidentification

Data deidentification, or data anonymization, is the process of deleting or masking personal identifiers, such as personal names, from a set of data. Deidentification is often done when the data is used in the aggregate, such as when medical data is used for research. It is a technical control that serves as one of the main approaches to ensuring data privacy protection.

Tokenization

Tokenization is another form of data hiding or masking in that it replaces a value with a token that is used instead of the actual value. For example, tokenization is an emerging standard for mobile transactions; numeric tokens are used to protect cardholders’ sensitive credit and debit card information. This security feature substitutes the primary account number with a numeric token that can be processed by all participants in the payment ecosystem. Figure 19-1 shows the use of tokens in a credit card transaction using a smartphone.
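A minimal sketch of the vault-based tokenization idea: the real primary account number (PAN) is stored only inside the vault, and a random numeric token of the same length circulates in its place. This toy ignores collision handling, persistence, and access control, all of which a real tokenization service must provide; the class name is invented.

```python
import secrets

class TokenVault:
    """Toy token vault: maps a PAN to a random numeric token of the same
    length. Only the vault can map a token back to the real PAN."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        # A format-preserving token: same length, all digits, no relation
        # to the original number.
        token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

Because the token is random, a breach of any system that handles only tokens exposes nothing about the underlying card numbers.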

Digital Rights Management (DRM)

Hardware manufacturers, publishers, copyright holders, and individuals use digital rights management (DRM) to control the use of digital content. DRM often also involves device controls. First-generation DRM software controls copying. Second-generation DRM software controls executing, viewing, copying, printing, and altering works or devices. The U.S. Digital Millennium Copyright Act (DMCA) of 1998 imposes criminal penalties on those who make available technologies whose primary purpose is to circumvent content protection technologies. DRM includes restrictive license agreements and encryption. DRM protects computer games and other software, documents, e-books, films, music, and television.

FIGURE 19-1 Tokenization

In most enterprise implementations, the primary concern is the DRM control of documents by using open, edit, print, or copy access restrictions that are granted on a permanent or temporary basis. Solutions can be deployed that store the protected data in a central or decentralized model. Encryption is used in the DRM implementation to protect the data both at rest and in transit.

Today’s DRM implementations include the following:

Directories: Lightweight Directory Access Protocol (LDAP), Active Directory (AD), custom

Permissions: open, print, modify, clipboard

Additional controls: expiration (absolute, relative, immediate revocation), version control, change policy on existing documents, watermarking, online/offline, auditing

Ad hoc and structured processes: user initiated on desktop, mapped to system, built into workflow process

Document DRM

Organizations implement DRM to protect confidential or sensitive documents and data. Commercial DRM products allow organizations to protect documents and include the capability to restrict and audit access to documents. Some of the permissions that can be restricted using DRM products include reading and modifying a file, removing and adding watermarks, downloading and saving a file, printing a file, or even taking screenshots. If a DRM product is implemented, the organization should ensure that the administrator is properly trained and that policies are in place to ensure that rights are appropriately granted and revoked.

Music DRM

DRM has been used in the music industry for some time now. Subscription-based music services, such as Napster, use DRM to revoke a user’s access to downloaded music once the subscription expires. While technology companies have petitioned the music industry to allow them to sell music without DRM, the industry has been reluctant to do so.

Movie DRM

While the movie industry has used a variety of DRM schemes over the years, two main technologies are used for the mass distribution of media:

Content Scrambling System (CSS): Uses encryption to enforce playback and region restrictions on DVDs. This system can be broken using the Linux DeCSS tool.

Advanced Access Content System (AACS): Protects Blu-ray and HD DVD content. Hackers have been able to obtain the encryption keys to this system.

This industry continues to make advances to prevent hackers from creating unencrypted copies of copyrighted material.
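Returning to document DRM for a moment, the individually grantable rights described earlier (read, modify, print, copy, save, screenshot) are naturally modeled as combinable flags. The sketch below is illustrative only; the class and permission names are hypothetical, not drawn from any particular DRM product.

```python
from enum import Flag, auto

class Permission(Flag):
    """Document rights a DRM product might grant or restrict individually."""
    READ = auto()
    MODIFY = auto()
    PRINT = auto()
    COPY = auto()
    SAVE = auto()
    SCREENSHOT = auto()

def is_allowed(granted: Permission, requested: Permission) -> bool:
    # Every requested right must be contained in the granted set.
    return (granted & requested) == requested

# A reviewer may read and print a document but not alter or copy it.
reviewer = Permission.READ | Permission.PRINT
```

A real product would also log each check for auditing and attach an expiration to the granted set, per the "additional controls" list earlier in the chapter.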

Video Game DRM

Most video game DRM implementations rely on proprietary consoles that use Internet connections to verify video game licenses. Most consoles today verify the license upon installation and allow unrestricted use from that point. However, to obtain updates, the license is again verified prior to download and installation of the update.

E-Book DRM

E-book DRM is considered to be the most successful DRM deployment. Both Amazon’s Kindle and Barnes & Noble’s Nook devices implement DRM to protect electronic forms of books. Both of these companies have released mobile apps that function like the physical e-book devices. Today’s implementation uses a decryption key that is installed on the device. This means that the e-books cannot be easily copied between e-book devices or applications. Adobe created Adobe Digital Experience Protection Technology (ADEPT), which is used by most e-book readers except Amazon’s Kindle. With ADEPT, AES is used to encrypt the media content, and RSA encrypts the AES key.

Watermarking

Digital watermarking is another method used to deter unauthorized use of a document. Digital watermarking involves embedding a logo or trademark in documents, pictures, or other objects. The watermark deters people from using the materials in an unauthorized manner.

Geographic Access Requirements

While a discussion of geographic issues was included in Chapter 9, authentication systems can also make use of geofencing.

Geofencing is the application of geographic limits to where a device can be used. It depends on the use of Global Positioning System (GPS) or radio frequency identification (RFID) technology to create a virtual geographic boundary.

Access Controls

Chapter 8 covered identity and access management systems in depth. Along with encryption, access controls are the main security controls implemented to ensure confidentiality. In Chapter 21, you will learn how access controls fit into the set of controls used to maintain security.
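A GPS-based geofence check reduces to a distance test against the virtual boundary. The sketch below uses the standard haversine great-circle formula; the fence center, radius, and function names are hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS coordinates, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def inside_geofence(device, center, radius_km):
    """Permit the device only inside the circular virtual boundary."""
    return haversine_km(*device, *center) <= radius_km

office = (40.7128, -74.0060)  # hypothetical fence center (lat, lon)
```

An authentication system could call such a check at login time and deny or step up authentication for devices reporting coordinates outside the fence.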

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 19-2 lists these key topics and the page numbers on which each is found.

Table 19-2 Key Topics in Chapter 19

Key Topic Element    Description                                            Page Number
Bulleted list        Significant data privacy and protection legislation    511
Section              Description of data sovereignty                        514
Bulleted list        Methods of data masking                                517
Figure 19-1          Tokenization                                           518
Bulleted list        DRM implementations                                    519
Bulleted list        DRM schemes                                            520

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

privacy, Sarbanes-Oxley Act (SOX), Health Insurance Portability and Accountability Act (HIPAA), Gramm-Leach-Bliley Act (GLBA) of 1999, Computer Fraud and Abuse Act (CFAA), Federal Privacy Act of 1974, Federal Intelligence Surveillance Act (FISA) of 1978, Electronic Communications Privacy Act (ECPA) of 1986, Computer Security Act of 1987, United States Federal Sentencing Guidelines of 1991, Personal Information Protection and Electronic Documents Act (PIPEDA), Basel II, Federal Information Security Management Act (FISMA) of 2002, Economic Espionage Act of 1996, USA PATRIOT Act of 2001, Health Care and Education Reconciliation Act of 2010, employee privacy issues and expectation of privacy, data sovereignty, data masking, deidentification, tokenization, digital rights management (DRM), U.S. Digital Millennium Copyright Act (DMCA) of 1998, Content Scrambling System (CSS), Advanced Access Content System (AACS), digital watermarking, geofencing

REVIEW QUESTIONS

1. Data should be classified based on its ________ to the organization.

2. List at least two considerations when assigning a level of criticality.

3. Match the following terms with their definitions.

Terms               Definitions
Sensitivity         A measure of the importance of the data
Criticality         The application of geographic limits to where a device can be used
Geofencing          The concept that data stored in digital format is subject to the laws of the country in which the data is located
Data sovereignty    A measure of how freely data can be handled

4. A ________________ policy outlines how various data types must be retained and may rely on the data classifications described in the data classification policy.

5. According to the GDPR, personal data may not be processed unless there is at least one legal basis to do so. List at least two of these legal bases.

6. Match the following terms with their definitions.

Terms                   Definitions
Tokenization            Protects Blu-ray and HD DVD content, though hackers have been able to obtain the encryption keys to this system
Digital watermarking    Affects any organizations that handle cardholder information for the major credit card companies
AACS                    Involves embedding a logo or trademark in documents, pictures, or other objects
PCI DSS                 Another form of data hiding or masking in that it replaces a value with a token that is used instead of the actual value

7. _________________ means altering data from its original state to protect it.

8. List at least one method of data masking.

9. Match the following terms with their definitions.

Terms    Definitions
HIPAA    Affects any organization that is publicly traded in the United States
SOX      Affects any entities that might engage in hacking of “protected computers,” as defined in the act
GLBA     Affects all financial institutions, including banks, loan companies, insurance companies, investment companies, and credit card providers
CFAA     Legislation affecting healthcare facilities

10. _________________ is the application of geographic limits to where a device can be used.

Chapter 20

Applying Security Concepts in Support of Organizational Risk Mitigation

This chapter covers the following topics related to Objective 5.2 (Given a scenario, apply security concepts in support of organizational risk mitigation) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Business impact analysis: Describes how to assess the level of criticality of business functions to the overall organization.

Risk identification process: Includes classification, ownership, retention, data types, retention standards, and confidentiality.

Risk calculation: Covers probability and magnitude.

Communication of risk factors: Discusses the process of sharing with critical parties.

Risk prioritization: Includes security controls and engineering tradeoffs.

System assessment: Describes the process of system assessment.

Documented compensating controls: Covers the use of additional controls.

Training and exercises: Includes red team, blue team, white team, and tabletop exercise.

Supply chain assessment: Covers vendor due diligence and hardware source authenticity.

The risk management process is a formal method of evaluating vulnerabilities. A robust risk management process will identify vulnerabilities that need to be addressed and will generate an assessment of the impact and likelihood of an attack that takes advantage of the vulnerability. The process also includes a formal assessment of possible risk mitigations. This chapter explores the types of risk management processes and how they are used to mitigate risk.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these nine self-assessment questions, you might want to skip ahead to the “Exam Preparation Tasks” section. Table 20-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so that you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 20-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section           Question
Business Impact Analysis            1
Risk Identification Process         2
Risk Calculation                    3
Communication of Risk Factors       4
Risk Prioritization                 5
Systems Assessment                  6
Documented Compensating Controls    7
Training and Exercises              8
Supply Chain Assessment             9

1. Which of the following is the first step in the BIA?
a. Identify resource requirements.
b. Identify outage impacts and estimate downtime.
c. Identify critical processes and resources.
d. Identify recovery priorities.

2. Which of the following is not a goal of risk assessment?
a. Identify vulnerabilities and threats.
b. Identify key stakeholders.
c. Identify assets and asset value.
d. Calculate threat probability and business impact.

3. Which of the following is the monetary impact of each threat occurrence?
a. ALE
b. SLE
c. AV
d. EF

4. The non-technical leadership audience needs which of the following to be stressed in the communication of risk factors to stakeholders?
a. The technical risks
b. Security operations difficulties
c. The cost of cybersecurity expenditures
d. Translation of technical risk into common business terms

5. Which of the following processes involves terminating the activity that causes a risk or choosing an alternative that is not as risky?
a. Risk avoidance
b. Risk transfer
c. Risk mitigation
d. Risk acceptance

6. Which of the following occurs when the adequacy of a system’s overall security is accepted by management?
a. Certification
b. Accreditation
c. Acceptance
d. Due diligence

7. To implement ISO/IEC 27001:2013, the project manager should complete which step first?
a. Identify the requirements
b. Obtain management support
c. Perform risk assessment and risk treatment
d. Define the ISMS scope, information security policy, and information security objectives

8. Which of the following are in place to substitute for a primary access control and mainly act to mitigate risks?
a. Compensating controls
b. Secondary controls
c. Accommodating controls
d. Directive controls

9. Which team acts as the attacking force?
a. Green
b. Red
c. Blue
d. White

FOUNDATION TOPICS

BUSINESS IMPACT ANALYSIS

A business impact analysis (BIA) is a functional analysis that occurs as part of business continuity and disaster recovery planning. Performing a thorough BIA will help business units understand the impact of a disaster. The resulting document that is produced from a BIA lists the critical and necessary business functions, their resource dependencies, and their level of criticality to the overall organization. The BIA helps the organization to understand what impact a disruptive event would have on the organization. It is a management-level analysis that identifies the impact of losing an organization’s resources.

The four main steps of the BIA are as follows:

1. Identify critical processes and resources.
2. Identify outage impacts and estimate downtime.
3. Identify resource requirements.
4. Identify recovery priorities.

The BIA relies heavily on any vulnerability analysis and risk assessment that has been completed. The vulnerability analysis and risk assessment may be performed by the Business Continuity Planning (BCP) committee or by a separately appointed risk assessment team.

Identify Critical Processes and Resources

When identifying the critical processes and resources of an organization, the BCP committee must first identify all the business units or functional areas within the organization. After all units have been identified, the BCP team should select which individuals will be responsible for gathering all the needed data and decide how to obtain the data. These individuals will gather the data using a variety of techniques, including questionnaires, interviews, and surveys. They might also actually perform a vulnerability analysis and risk assessment or use the results of these tests as input for the BIA. During the data gathering, the organization’s business processes and functions and the resources upon which these processes and functions depend should be documented. This list should include all business assets, including physical and financial assets that are owned by the organization, and any assets that provide competitive advantage or credibility. After determining all the business processes, functions, and resources, the organization should then determine the criticality level of each process, function, and resource. This is done by analyzing the impact that the loss of each resource would impose on the capability to continue to do business.

Identify Outage Impacts and Estimate Downtime

Analyzing the impact that the loss of each resource would impose on the ability to continue to do business will provide the raw material to generate metrics used to determine the extent to which redundancy must be provided to each resource. You learned about metrics such as MTD, MTTR, and RTO that are used to assess downtime and recovery time in Chapter 16, “Applying the Appropriate Incident Response Procedure.” Please review those concepts.

Identify Resource Requirements

After the criticality level of each process, function, and resource is determined, you need to determine all the resource requirements for each process, function, and resource. For example, an organization’s accounting system might rely on a server that stores the accounting application, another server that holds the database, various client systems that perform the accounting tasks over the network, and the network devices and infrastructure that support the system. Resource requirements should also consider any human resources requirements. When human resources are unavailable, the organization can be just as negatively impacted as when technological resources are unavailable.

Note

Keep in mind that the priority for any CySA professional should be the safety of human life. Consider and protect all other organizational resources only after personnel are safe.

The organization must document the resource requirements for every resource that would need to be restored when a disruptive event occurs. This includes device name, operating system or platform version, hardware requirements, and device interrelationships.

Identify Recovery Priorities

After all the resource requirements have been identified, the organization must identify the recovery priorities. Establish recovery priorities by taking into consideration process criticality, outage impacts, tolerable downtime, and system resources. After all this information is compiled, the result is an information system recovery priority hierarchy. Three main levels of recovery priorities should be used: high, medium, and low. The BIA stipulates the recovery priorities but does not provide the recovery solutions. Those are given in the disaster recovery plan (DRP).

Recoverability

Recoverability is the ability of a function or system to be recovered in the event of a disaster or disruptive event. As part of recoverability, downtime must be minimized. Recoverability places emphasis on the personnel and resources used for recovery.

Fault Tolerance

Fault tolerance is provided when a backup component begins operation when the primary component fails. One of the key aspects of fault tolerance is the lack of service interruption. Varying levels of fault tolerance can be achieved at most levels of the organization based on how much an organization is willing to spend. However, the backup component often does not provide the same level of service as the primary component. For example, an organization might implement a high-speed OC1 connection to the Internet. However, the backup connection to the Internet that is used in the event of the failure of the OC1 line might be much slower but at a much lower cost of implementation than the primary OC1 connection.
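The three-tier recovery priority hierarchy described for the BIA can be sketched as a simple mapping from a system's maximum tolerable downtime (MTD) to a tier. The hour thresholds and system names below are hypothetical; each organization derives its own cutoffs from its outage-impact analysis.

```python
def recovery_priority(mtd_hours):
    """Map a system's maximum tolerable downtime (MTD) to a priority tier.

    Thresholds are illustrative, not prescriptive.
    """
    if mtd_hours <= 24:
        return "high"
    if mtd_hours <= 72:
        return "medium"
    return "low"

# Hypothetical systems and their MTDs in hours.
systems = {"order processing": 4, "accounting": 48, "intranet wiki": 168}
priorities = {name: recovery_priority(mtd) for name, mtd in systems.items()}
```

The resulting hierarchy feeds the DRP, which supplies the actual recovery solutions for each tier.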

RISK IDENTIFICATION PROCESS

A risk assessment is a tool used in risk management to identify vulnerabilities and threats, assess the impact of those vulnerabilities and threats, and determine which controls to implement. This process is also called risk identification. Risk assessment (or analysis) has four main goals:

Identify assets and asset value.
Identify vulnerabilities and threats.
Calculate threat probability and business impact.
Balance threat impact with countermeasure cost.

Prior to starting a risk assessment, management and the risk assessment team must determine which assets and threats to consider. This process determines the size of the project. The risk assessment team must then provide a report to management on the value of the assets considered. Management can then review and finalize the asset list, adding and removing assets as it sees fit, and then determine the budget of the risk assessment project.

Let’s look at a specific scenario to help understand the importance of system-specific risk analysis. In our scenario, the Sales division decides to implement touchscreen technology and tablet computers to increase productivity. As part of this new effort, a new sales application will be developed that works with the new technology. At the beginning of the deployment, the chief security officer (CSO) attempts to prevent the deployment because the technology is not supported in the enterprise. Upper management decides to allow the deployment. The CSO should work with the Sales division and other areas involved so that the risk associated with the full life cycle of the new deployment can be fully documented and appropriate controls and strategies can be implemented during deployment. Risk assessment should be carried out before any mergers and acquisitions occur or new technology and applications are deployed.

If a risk assessment is not supported and directed by senior management, it will not be successful. Management must define the purpose and scope of a risk assessment and allocate the personnel, time, and monetary resources for the project. There are several approaches to performing a risk assessment, covered in the following sections.

Make Risk Determination Based upon Known Metrics

To make a risk determination, an organization must perform a formal risk analysis. A formal risk analysis often asks questions such as these: What corporate assets need to be protected? What are the business needs of the organization? What outside threats are most likely to compromise network security? Different types of risk analysis, including qualitative risk analysis and quantitative risk analysis, should be used to ensure that the data obtained is maximized.

Qualitative Risk Analysis

A qualitative risk analysis does not assign monetary and numeric values to all facets of the risk analysis process. Qualitative risk analysis techniques include intuition, experience, and best practice techniques, such as brainstorming, focus groups, surveys, questionnaires, meetings, interviews, and Delphi. The Delphi technique is a method used to estimate the likelihood and outcome of future events. Although all these techniques can be used, most organizations will determine the best technique(s) based on the threats to be assessed. Conducting a qualitative risk analysis requires a risk assessment team that has experience and education related to assessing threats. Each member of the group chosen to participate in the qualitative risk analysis uses his or her experience to rank the likelihood of each threat and the damage that might result. After each group member ranks the threat possibility, loss potential, and safeguard advantage, the data is combined in a report to present to management. Two advantages of qualitative over quantitative risk analysis (discussed next) are that qualitative analysis prioritizes the risks and identifies areas for immediate improvement in addressing the threats. Disadvantages of qualitative risk analysis are that all results are subjective and a dollar value is not provided for cost/benefit analysis or for budgeting help.

Note

When performing risk analyses, all organizations experience issues with any estimate they obtain. This lack of confidence in an estimate is referred to as uncertainty and is expressed as a percentage. Any reports regarding a risk assessment should include the uncertainty level.

Quantitative Risk Analysis

A quantitative risk analysis assigns monetary and numeric values to all facets of the risk analysis process, including asset value, threat frequency, vulnerability severity, impact, and safeguard costs. Equations are used to determine total and residual risks. An advantage of quantitative over qualitative risk analysis is that quantitative analysis uses less guesswork than qualitative analysis. Disadvantages of quantitative risk analysis include the difficulty of the equations, the time and effort needed to complete the analysis, and the level of data that must be gathered for the analysis. Most risk analysis includes some hybrid of both quantitative and qualitative risk analyses. Most organizations favor using quantitative risk analysis for tangible assets and qualitative risk analysis for intangible assets. Keep in mind that even though quantitative risk analysis uses numeric values, a purely quantitative analysis cannot be achieved because some level of subjectivity is always part of the data. This type of estimate should be based on historical data, industry experience, and expert opinion.

RISK CALCULATION

A quantitative risk analysis assigns monetary and numeric values to all facets of the risk analysis process, including asset value, threat frequency, vulnerability severity, impact, safeguard costs, and so on. Equations are used to determine total and residual risks. The most common equations are for single loss expectancy and annual loss expectancy.

The single loss expectancy (SLE) is the monetary impact of each threat occurrence. To determine the SLE, you must know the asset value (AV) and the exposure factor (EF), which is the percentage value or functionality of an asset that will be lost when a threat event occurs. The calculation for obtaining the SLE is as follows:

SLE = AV × EF

For example, an organization has a web server farm with an AV of $20,000. If the risk assessment has determined that a power failure is a threat agent for the web server farm and the EF for a power failure is 25%, the SLE for this event equals $5,000.

The annual loss expectancy (ALE) is the expected risk factor of an annual threat event. To determine the ALE, you must know the SLE and the annualized rate of occurrence (ARO), which is the estimate of how often a given threat might occur annually. The calculation for obtaining the ALE is as follows:

ALE = SLE × ARO

Using the previous example, if the risk assessment has determined that the ARO for the power failure of the web server farm is 50%, the ALE for this event equals $2,500. Security professionals should keep in mind that this calculation can be adjusted for geographic distances.

Using the ALE, the organization can decide whether to implement controls. If the annual cost of the control to protect the web server farm is more than the ALE, the organization could reasonably choose to accept the risk by not implementing the control. If the annual cost of the control is less than the ALE, the organization should consider implementing the control.

As previously mentioned, even though quantitative risk analysis uses numeric values, a purely quantitative analysis cannot be achieved because some level of subjectivity is always part of the data. In the previous example, how does the organization know that damage from the power failure will be 25% of the asset? This type of estimate should be based on historical data, industry experience, and expert opinion.

Probability

Both qualitative and quantitative risk analysis processes take into consideration the probability that an event will occur. In quantitative risk analysis, this consideration is made using the ARO value for each event. In qualitative risk assessment, each possible event is assigned a probability value by subject matter experts.

Magnitude

Both qualitative and quantitative risk analysis processes take into consideration the magnitude of an event that might occur. In quantitative risk analysis, this consideration is made using the SLE and ALE values for each event. In qualitative risk assessment, each possible event is assigned an impact (magnitude) value by subject matter experts.
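The SLE and ALE formulas are straightforward to express in code. The sketch below reproduces the chapter's web server farm numbers (AV = $20,000, EF = 25%, ARO = 50%); the function names are illustrative.

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = AV x EF: monetary impact of one threat occurrence."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle, aro):
    """ALE = SLE x ARO: expected annual loss from the threat."""
    return sle * aro

# The chapter's web server farm example.
av = 20_000   # asset value
ef = 0.25     # exposure factor for a power failure
aro = 0.50    # annualized rate of occurrence

sle = single_loss_expectancy(av, ef)   # $5,000
ale = annual_loss_expectancy(sle, aro) # $2,500
```

Comparing the ALE against a control's annual cost then drives the accept-versus-mitigate decision described in the text.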

COMMUNICATION OF RISK FACTORS

Technical cybersecurity risks represent a threat that is largely misunderstood by non-technical personnel. Security professionals must bridge the knowledge gap in a manner that the stakeholders understand. To properly communicate technical risks, security professionals must first understand their audience and then be able to translate those risks into business terms that the audience understands.

The audience that needs to understand the technical risks includes semi-technical audiences, non-technical leadership, the board of directors and executives, and regulators. The semi-technical audience understands the security operations difficulties and often consists of powerful allies. Typically, this audience needs a data-driven, high-level message based on verifiable facts and trends. The non-technical leadership audience needs the message to be put in context with their responsibilities. This audience needs the cost of cybersecurity expenditures to be tied to business performance. Security professionals should present metrics that show how cyber risk is trending, without using popular jargon. The board of directors and executives are primarily concerned with business risk management and managing return on assets. The message to this group should translate technical risk into common business terms and present metrics about cybersecurity risk and performance. Finally, when communicating with regulators, it is important to be thorough and transparent. In addition, organizations may want to engage a third party to do a gap assessment before an audit. This helps security professionals find and remediate weaknesses prior to the audit and enables the third party to speak on behalf of the security program.

To frame the technical risks into business terms for these audiences, security professionals should focus on business disruption, regulatory issues, and bad press. If a company’s database is attacked and, as a result, the website cannot sell products to customers, this is a significant disruption of business operations. If an incident occurs that results in a regulatory investigation and fines, a regulatory issue has arisen. Bad press can result in lost sales and costs to repair the organization’s image. Security professionals must understand the risk metrics and what each metric costs the organization. Although security professionals may not definitively know the return on investment (ROI), they should take the security incident frequency at the organization and assign costs in terms of risk exposure for every risk. It is also helpful to match the risks with the assets protected to make sure the organization’s investment is protecting the most valuable assets. Moreover, security professionals alone cannot best determine the confidentiality, integrity, and availability (CIA) levels for enterprise information assets. Security professionals should consult with the asset stakeholders to gain their input on which level should be assigned to each tenet for an information asset. Keep in mind, however, that all stakeholders should be consulted. For example, while department heads should be consulted and have the biggest influence on the CIA decisions about departmental assets, other stakeholders within the department and organization should be consulted as well. This rule holds for any security project that an enterprise undertakes. Stakeholder input should be critical at the start of the project to ensure that stakeholder needs are documented and to gain stakeholder project buy-in. Later, if problems arise with the security project and changes must be made, the project team should discuss the potential changes with the project stakeholders before any project changes are approved or implemented.

RISK PRIORITIZATION

As previously discussed, by using either quantitative or qualitative analysis, you can arrive at a priority list that indicates which issues need to be treated sooner rather than later and which can wait. In qualitative analysis, one method used is called a risk assessment matrix. When a qualitative assessment is conducted, the risks are placed into the following categories:

High
Medium
Low

Then, a risk assessment matrix, such as the one in Figure 20-1, is created. Subject experts grade all risks based on their likelihood and impact. This helps prioritize the application of resources to the most critical vulnerabilities.

Figure 20-1 Risk Assessment Matrix
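The matrix-based grading can be sketched as a small function that combines the experts' likelihood and impact grades into one rating and ranks the risks. The 3x3 mapping and the risk names below are illustrative, not taken from the figure.

```python
LEVELS = ["low", "medium", "high"]

def matrix_rating(likelihood, impact):
    """Combine qualitative likelihood and impact grades into one rating.

    This 3x3 mapping is illustrative; real matrices vary by organization.
    """
    score = LEVELS.index(likelihood) + LEVELS.index(impact)  # 0..4
    if score >= 3:
        return "high"
    if score >= 1:
        return "medium"
    return "low"

# Hypothetical risks graded by subject matter experts.
risks = {
    "unpatched web server": ("high", "high"),
    "data center flood": ("low", "high"),
    "lost visitor badge": ("medium", "low"),
}
ranked = sorted(risks, key=lambda r: -sum(LEVELS.index(g) for g in risks[r]))
```

Sorting by combined score is what lets the organization direct remediation resources to the most critical vulnerabilities first.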

Security Controls

Chapter 21, “The Importance of Frameworks, Policies, Procedures, and Controls,” delves deeply into the types of controls that can be implemented to address security issues. The selection of controls that are both cost-effective and capable of addressing the issue depends in large part on how an organization chooses to address or handle risk. The following four basic methods are used to handle risk:

Risk avoidance: Terminating the activity that causes a risk or choosing an alternative that is not as risky

Risk transfer: Passing on the risk to a third party, such as an insurance company

Risk mitigation: Defining the acceptable risk level the organization can tolerate and reducing the risk to that level

Risk acceptance: Understanding and accepting the level of risk as well as the cost of damages that can occur
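The choice between mitigation and acceptance often starts from the ALE cost/benefit rule described in the risk calculation section. The sketch below encodes that rule of thumb; the function name is hypothetical, and real decisions would also weigh avoidance and transfer.

```python
def treatment_hint(ale, annual_control_cost):
    """Cost/benefit rule of thumb: a control that costs more per year than
    the ALE it prevents suggests accepting (or otherwise treating) the risk;
    a cheaper control makes mitigation worth considering."""
    if annual_control_cost < ale:
        return "consider mitigation"
    return "consider acceptance"

# Continuing the web server farm example (ALE of $2,500).
decision = treatment_hint(2500, 1000)
```

With a $1,000-per-year control against a $2,500 ALE, the hint favors mitigation; if the control cost $6,000 per year, acceptance would be the more defensible choice.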

Engineering Tradeoffs

In some cases, there may be issues that make implementing a particular solution inadvisable or impossible. Engineering tradeoffs are inhibitors to remediation and are covered in the following sections.

MOUs

A memorandum of understanding (MOU) is a document that, while not legally binding, indicates a general agreement between the principals to do something together. An organization may have MOUs with multiple organizations, and MOUs may in some instances contain security requirements that inhibit or prevent the deployment of certain measures.

SLAs

A service-level agreement (SLA) is a document that specifies a service to be provided by a party, the costs of the service, and the expectations of performance. These contracts may exist with third parties outside the organization and between departments within an organization. Sometimes these SLAs include specifications that inhibit or prevent the deployment of certain measures.

Organizational Governance

Organizational governance refers to the process of controlling an organization’s activities, processes, and operations. When the process is unwieldy, as it is in some very large organizations, the application of countermeasures may be frustratingly slow. One of the reasons for including upper management in the entire process is to use the weight of authority to cut through the red tape.

Business Process Interruption

The deployment of mitigations cannot be done in such a way that business operations and processes are interrupted. Therefore, the need to conduct these activities during off hours can also be a factor that impedes the remediation of vulnerabilities.

Degrading Functionality

Finally, some solutions create more issues than they resolve. In some cases, it may be impossible to implement a mitigation because it breaks mission-critical applications or processes. The organization may need to research an alternative solution.

SYSTEMS ASSESSMENT

Systems assessment comprises a process whereby systems are fully vetted for potential issues from both a functionality standpoint and a security standpoint. These assessments (discussed more fully in Chapter 21) can lead to two types of organizational approvals: accreditation and certification. Although the terms are used as synonyms in casual conversation, accreditation and certification are two different concepts in the context of assurance levels and ratings. However, they are closely related. Certification evaluates the technical system components, whereas accreditation occurs when the adequacy of a system’s overall security is accepted by management.

ISO/IEC 27001

ISO/IEC 27001:2013 is the current version of the 27001 standard, and it is one of the most popular standards by which organizations obtain certification for information security. It provides guidance on ensuring that an organization’s information security management system (ISMS) is properly established, implemented, maintained, and continually improved. It includes the following components:

ISMS scope
Information security policy
Risk assessment process and its results
Risk treatment process and its decisions
Information security objectives
Information security personnel competence
Necessary ISMS-related documents
Operational planning and control documents
Information security monitoring and measurement evidence
ISMS internal audit program and its results
Top management ISMS review evidence
Evidence of identified nonconformities and corrective actions

When an organization decides to obtain ISO/IEC 27001 certification, a project manager should be selected to ensure that all the components are properly completed. To implement ISO/IEC 27001:2013, the project manager should complete the following steps:

Step 1. Obtain management support.
Step 2. Determine whether to use consultants or to complete the work in-house, purchase the 27001 standard, write the project plan, define the stakeholders, and organize the project kickoff.
Step 3. Identify the requirements.
Step 4. Define the ISMS scope, information security policy, and information security objectives.
Step 5. Develop document control, internal audit, and corrective action procedures.
Step 6. Perform risk assessment and risk treatment.
Step 7. Develop a statement of applicability and a risk treatment plan and accept all residual risks.
Step 8. Implement the controls defined in the risk treatment plan and maintain the implementation records.
Step 9. Develop and implement security training and awareness programs.
Step 10. Implement the ISMS, maintain policies and procedures, and perform corrective actions.
Step 11. Maintain and monitor the ISMS.
Step 12. Perform an internal audit and write an audit report.

Step 13. Perform management review and maintain management review records.
Step 14. Select a certification body and complete certification.
Step 15. Maintain records for surveillance visits.

For more information, visit https://www.iso.org/standard/54534.html.

ISO/IEC 27002

ISO/IEC 27002:2013 is the current version of the 27002 standard, and it provides a code of practice for information security management. It includes the following 14 content areas:

Information security policy
Organization of information security
Human resources security
Asset management
Access control
Cryptography
Physical and environmental security
Operations security
Communications security
Information systems acquisition, development, and maintenance
Supplier relationships
Information security incident management
Information security aspects of business continuity
Compliance

For more information, visit https://www.iso.org/standard/54533.html.

DOCUMENTED COMPENSATING CONTROLS

As pointed out in the section “Engineering Tradeoffs” earlier in this chapter, in some cases there may be issues that make implementing a particular solution inadvisable or impossible. Not all weaknesses can be eliminated. In some cases, they can only be mitigated. This can be done by implementing controls that compensate for a weakness that cannot be completely eliminated. A compensating control reduces the potential risk. Compensating controls are also referred to as countermeasures and safeguards. Three things must be considered when implementing a compensating control: vulnerability, threat, and risk. For example, a good countermeasure might be to implement the appropriate ACL and encrypt the data. The ACL protects the integrity of the data, and the encryption protects the confidentiality of the data. Compensating controls are put in place to substitute for a primary access control and mainly act to mitigate risks. By using compensating controls, you can reduce risk to a more manageable level. Examples of compensating controls include requiring two authorized signatures to release sensitive or confidential information and requiring two keys owned by different personnel to open a safety deposit box. These compensating controls must be recorded along with the reason the primary control was not implemented. Compensating controls are covered further in Chapter 21.

TRAINING AND EXERCISES

Security analysts must practice responding to security events in order to react to them in the most organized and efficient manner. There are some well-established ways to approach this. This section looks at how teams of analysts, both employees and third-party contractors, can be organized, along with some well-established names for these teams. Security posture is typically assessed by war game exercises in which one group attacks the network while another attempts to defend it. These games typically use some implementation of the following teams.

Red Team

The red team acts as the attacking force. It typically carries out penetration tests by following a well-established process of gathering information about the network, scanning the network for vulnerabilities, and then attempting to take advantage of those vulnerabilities. The actions the red team can take are established ahead of time in the rules of engagement. Often these individuals are third-party contractors with no prior knowledge of the network, which helps them simulate attacks that are not inside jobs.

Blue Team

The blue team acts as the network defense team, and the attempted attack by the red team tests the blue team’s ability to respond to the attack. It also serves as practice for a real attack. This includes accessing log data, using a SIEM, garnering intelligence information, and performing traffic and data flow analysis.

White Team

The white team is a group of technicians who referee the encounter between the red team and the blue team. Enforcing the rules of engagement might be one of the white team’s roles, along with monitoring the responses to the attack by the blue team and making note of specific approaches employed by the red team.

Tabletop Exercise

Conducting a tabletop exercise is the most cost-effective and efficient way to identify areas of vulnerability before moving on to higher-level testing. A tabletop exercise is an informal brainstorming session that encourages participation from business leaders and other key employees. In a tabletop exercise, the participants agree on a particular attack scenario and then focus on it.

SUPPLY CHAIN ASSESSMENT

Organizational risk mitigation requires assessing the safety and integrity of hardware and software before the organization purchases it. The following are some of the methods used to assess the supply chain through which a hardware or software product flows to ensure that the product does not pose a security risk to the organization.

Vendor Due Diligence

Performing due diligence with regard to a vendor means assessing the vendor’s products and services. While we are certainly concerned with the functionality and value of the products, we are even more concerned about the innate security of those products. Stories about counterfeit gear that contains backdoors have circulated for years and are not unfounded. Online resources for conducting due diligence about vendors include https://complyadvantage.com/knowledgebase/vendor-due-diligence/.

OEM Documentation

One of the ways you can reduce the likelihood of purchasing counterfeit equipment is to insist on the inclusion of verifiable original equipment manufacturer (OEM) documentation. In many cases, this paperwork includes anti-counterfeiting features. Make sure to use the vendor website to verify all the various identifying numbers in the documentation.

Hardware Source Authenticity

When purchasing hardware to support any network or security solution, a security professional must ensure that the hardware’s authenticity can be verified. Just as expensive consumer items such as purses and watches can be counterfeited, so can network equipment. Whereas the dangers with counterfeit consumer items are typically confined to a lack of authenticity and potentially lower quality, the dangers presented by counterfeit network gear can extend to the presence of backdoors in the software or firmware. Always purchase equipment directly from the manufacturer when possible, and when purchasing from resellers, use caution and insist on a certificate of authenticity. In any case where the price seems too good to be true, keep in mind that it may be an indication the gear is not authentic.

Trusted Foundry

The Trusted Foundry program can help you exercise care in ensuring the authenticity and integrity of the components of hardware purchased from a vendor. This DoD program identifies “trusted vendors” and ensures a “trusted supply chain.” A trusted supply chain begins with trusted design and continues with trusted mask, foundry, packaging/assembly, and test services. It ensures that systems have access to leading-edge integrated circuits from secure, domestic sources. At the time of this writing, 77 vendors have been certified as trusted.
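One concrete due-diligence step that complements OEM documentation checks is verifying that a downloaded firmware or software image matches the digest the manufacturer publishes. The sketch below assumes a hypothetical image file name and vendor-published SHA-256 value:

```python
# Verify a downloaded image against a vendor-published SHA-256 digest.
# The file name and digest in the usage comment are placeholders.
import hashlib
import hmac

def sha256_of(path, chunk_size=65536):
    """Hash a file in chunks so large firmware images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(path, published_hash):
    """Return True only if the local image matches the vendor-published digest."""
    return hmac.compare_digest(sha256_of(path), published_hash.lower())

# Hypothetical usage:
# if not verify_image("firmware-image.bin", "3f5a...c9"):
#     raise SystemExit("Image hash mismatch: do not install")
```

A hash match confirms integrity of the download, not the trustworthiness of the source, so it supplements rather than replaces purchasing from the manufacturer or an authorized reseller.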

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 20-2 lists these key topics and the page number on which each is found.

Table 20-2 Key Topics in Chapter 20

Key Topic Element | Description | Page Number
Bulleted list | Main steps of the BIA | 530
Bulleted list | Risk assessment goals | 532
Section | Quantitative risk analysis | 534
Figure 20-1 | Risk assessment matrix | 537
Bulleted list | Methods used to handle risk | 538
Step list | Implementing ISO/IEC 27001:2013 | 540
Sections | Testing teams | 542

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

business impact analysis (BIA)
Business Continuity Planning (BCP) committee
recoverability
fault tolerance
risk assessment
qualitative risk analysis
quantitative risk analysis
single loss expectancy (SLE)
annual loss expectancy (ALE)
asset value (AV)
exposure factor (EF)
annualized rate of occurrence (ARO)
risk assessment matrix
risk avoidance
risk transfer
risk mitigation
risk acceptance
memorandum of understanding (MOU)
service-level agreement (SLA)
organizational governance
systems assessment
ISO/IEC 27001:2013
ISO/IEC 27002:2013
red team
blue team
white team
tabletop exercise
Trusted Foundry program

REVIEW QUESTIONS

1. The vulnerability analysis and risk assessment may be performed by the __________________ or by a separately appointed risk assessment team.

2. List the four main steps of the BIA in order.

3. Match the following terms with their definitions.

Terms: BIA, Red team, Blue team, White team

Definitions:
Acts as the attacking force during testing
Lists the critical and necessary business functions, their resource dependencies, and their level of criticality to the overall organization
Group of technicians who referee the encounter during testing
Acts as the network defense team during testing

4. ____________________ assigns monetary and numeric values to all facets of the risk analysis process, including asset value, threat frequency, vulnerability severity, impact, and safeguard costs.

5. An organization has a web server farm with an AV of $20,000. If the risk assessment has determined that a power failure is a threat agent for the web server farm and the EF for a power failure is 25%, the SLE for this event equals $_____________.

6. Match the following terms with their definitions.

Terms: Tabletop exercise, Business Continuity Planning (BCP) committee, Organizational governance

Definitions:
Performs vulnerability analysis and risk assessment
Process of controlling an organization’s activities, processes, and operations
An informal brainstorming session that encourages participation from business leaders and other key employees

7. The _______________________ helps prioritize the application of resources to the most critical vulnerabilities during qualitative risk assessment.

8. List and define at least two ways to handle risk.

9. Match the following terms with their definitions.

Terms: MOU, SLA, BCP, BIA

Definitions:
Document that specifies a service to be provided by a party
Performs vulnerability analysis and risk assessment
Functional analysis that occurs as part of business continuity and disaster recovery
Document that, while not legally binding, indicates a general agreement between the principals to do something together

10. ALE = ________________

Chapter 21

The Importance of Frameworks, Policies, Procedures, and Controls

This chapter covers the following topics related to Objective 5.3 (Explain the importance of frameworks, policies, procedures, and controls) of the CompTIA Cybersecurity Analyst (CySA+) CS0-002 certification exam:

Frameworks: Covers both risk-based and prescriptive frameworks.
Policies and procedures: Includes code of conduct/ethics, acceptable use policy (AUP), password policy, data ownership, data retention, account management, continuous monitoring, and work product retention.
Category: Describes the managerial, operational, and technical categories.
Control type: Covers the preventative, detective, corrective, deterrent, compensating, and physical control types.
Audits and assessments: Discusses regulatory and compliance audits.

Organizations use policies, procedures, and controls to implement security. Policies are broad statements that define the aim of a security measure, while procedures define how to carry out the measure. Controls are countermeasures or mitigations that are used to prevent breaches. Creating and implementing policies, procedures, and controls can be a challenge. Help is available, however, from security frameworks created by various entities, in the form of templates, examples, and other documents that organizations can use to ensure that they have covered all bases. This chapter explains what policies, procedures, and controls are and describes how security frameworks can be used to create them.

“DO I KNOW THIS ALREADY?” QUIZ

The “Do I Know This Already?” quiz enables you to assess whether you should read the entire chapter. If you miss no more than one of these ten self-assessment questions, you might want to move ahead to the “Exam Preparation Tasks” section. Table 21-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A.

Table 21-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section | Questions
Frameworks | 1, 2
Policies and Procedures | 3, 4
Category | 9, 10
Control Type | 5, 6
Audits and Assessments | 7, 8

1. Which of the following is not one of the four interrelated domains of The Open Group Architecture Framework (TOGAF)?
1. Business architecture
2. Data architecture
3. Security architecture
4. Technology architecture

2. Which of the following is not one of the classes of controls described by NIST SP 800-53 Rev 4?
1. Access Control
2. Awareness and Training
3. Contingency Planning
4. Facility Security

3. Which of the following policies is intended to demonstrate a commitment to ethics?
1. Non-compete
2. Non-disclosure
3. Expectation of privacy
4. Code of conduct

4. Which of the following consists of single words that often include a mixture of upper- and lowercase letters?
1. Standard word passwords
2. Complex passwords
3. Passphrase passwords
4. Cognitive passwords

5. Which of the following controls are implemented to administer the organization’s assets and personnel and include security policies, procedures, standards, baselines, and guidelines that are established by management?
1. Managerial
2. Physical
3. Technical
4. Logical

6. Which operational control type would include security guards?
1. Detective
2. Preventative
3. Deterrent
4. Directive

7. Which of the following reports focuses on internal controls over financial reporting?
1. SOC 1
2. SOC 2
3. SOC 3
4. SOC 4

8. Which of the following standards verifies the controls and processes and requires a written assertion regarding the design and operating effectiveness of the controls being reviewed?
1. SSAE 16
2. HIPAA
3. GLBA
4. CFAA

9. When you implement a new password policy, what category of control have you implemented?
1. Managerial
2. Operational
3. Technical
4. Preventative

10. Which of the following controls is a directive control?
1. A new firewall
2. A policy forbidding USB drives
3. A No Admittance sign at the server room door
4. A biometric authentication system

FOUNDATION TOPICS

FRAMEWORKS

Many organizations have developed security management frameworks and methodologies to help guide security professionals. These frameworks and methodologies include security program development standards, enterprise and security architecture development frameworks, security control development methods, corporate governance methods, and process management methods. The following sections discuss the major frameworks and methodologies and explain where they are used.

Risk-Based Frameworks

Some frameworks are designed to help organizations organize their approach and response to risk. The frameworks in this section are risk-based.

National Institute of Standards and Technology (NIST)

NIST SP 800-53 Rev 4 is a security controls development framework developed by NIST, an agency of the U.S. Department of Commerce. Table 21-2 lists the NIST SP 800-53 Rev 4 control families.

Table 21-2 NIST SP 800-53 Rev 4 Control Families

Family

Access Control
Audit and Accountability
Awareness and Training
Security Assessment and Authorization
Configuration Management
Contingency Planning
Incident Response
Maintenance
Media Protection
Personnel Security
Physical and Environmental Protection
Planning
Risk Assessment
System and Communications Protection
System and Information Integrity
System and Services Acquisition

NIST SP 800-55 Rev 1 is an information security metrics framework that provides guidance on developing performance measurement procedures from a U.S. government viewpoint.

COBIT

The governance and management objectives in COBIT 2019 are grouped into five domains. The domain names contain verbs that express the key purpose and areas of activity of the objectives within them. The five domains are

Evaluate, Direct, and Monitor (EDM)
Align, Plan, and Organize (APO)
Build, Acquire, and Implement (BAI)
Deliver, Service, and Support (DSS)
Monitor, Evaluate, and Assess (MEA)

The COBIT 2019 Goals Cascade (shown in Figure 21-1) supports translation of enterprise goals into priorities for alignment goals.

Figure 21-1 The COBIT 2019 Goals Cascade

The Open Group Architecture Framework (TOGAF)

The Open Group Architecture Framework (TOGAF), another enterprise architecture framework, helps organizations design, plan, implement, and govern an enterprise information architecture. The latest version, TOGAF 9.2, was launched in 2018. TOGAF is based on

Business architecture: Business strategy, governance, organization, and key business processes
Application architecture: Individual systems to be deployed, interactions between the application systems, and their relationships to the core business processes
Data architecture: Structure of an organization’s logical and physical data assets
Technology architecture: Hardware, software, and network infrastructure

The Architecture Development Method (ADM), as prescribed by TOGAF, is applied to develop an enterprise architecture that meets the business and information technology needs of an organization. Figure 21-2 shows the process, which is iterative and cyclic; each phase is checked against the requirements.

Figure 21-2 TOGAF ADM Model

Prescriptive Frameworks

Some frameworks are designed to provide organizations with a list of activities that constitute a prescription for handling certain security issues common to all. The frameworks described in this section are prescriptive.

NIST Cybersecurity Framework Version 1.1

NIST created the Framework for Improving Critical Infrastructure Cybersecurity, or simply the NIST Cybersecurity Framework version 1.1, in 2018. It focuses exclusively on IT security and is composed of three parts:

Framework Core: The core presents five cybersecurity functions, each of which is further divided into categories and subcategories. It describes desired outcomes for these functions. As you can see in Figure 21-3, each function has informative references available to help guide the completion of that subcategory of a particular function.

Figure 21-3 Framework Core Structure

Implementation Tiers: These tiers are levels of sophistication in the risk management process that organizations can aspire to reach. These tiers can be used as milestones in the development of an organization’s risk management process. The four tiers, from least developed to most developed, are Partial, Risk Informed, Repeatable, and Adaptive.

Framework Profiles: Profiles can be used to compare the current state (or profile) to a target state (profile). This enables an organization to create an action plan to close gaps between the two.
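A profile comparison can be approximated as a simple data exercise. In this sketch the CSF subcategory IDs are real, but the per-subcategory tier assignments are an invented simplification (tiers formally describe the organization's overall risk management process, not individual subcategories):

```python
# Compare a current profile against a target profile to find the gaps that
# feed an action plan. Tier assignments below are hypothetical.
TIERS = ["Partial", "Risk Informed", "Repeatable", "Adaptive"]

current = {"ID.AM-1": "Risk Informed", "PR.AC-1": "Partial", "DE.CM-1": "Repeatable"}
target  = {"ID.AM-1": "Repeatable",   "PR.AC-1": "Repeatable", "DE.CM-1": "Repeatable"}

def profile_gaps(current, target):
    """Return subcategories where the current tier lags the target tier."""
    return {sub: (cur, target[sub])
            for sub, cur in current.items()
            if TIERS.index(cur) < TIERS.index(target[sub])}

for sub, (cur, tgt) in profile_gaps(current, target).items():
    print(f"{sub}: {cur} -> {tgt}")
```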

ISO 27000 Series

The International Organization for Standardization (ISO), often incorrectly referred to as the International Standards Organization, joined with the International Electrotechnical Commission (IEC) to standardize the British Standard 7799 (BS7799) into a new global standard, now referred to as the ISO/IEC 27000 Series. ISO/IEC 27000 is a security program development standard on how to develop and maintain an information security management system (ISMS). The 27000 Series includes a list of standards, each of which addresses a particular aspect of ISMS. These standards are either published or in development. The following standards are included as part of the ISO/IEC 27000 Series at this writing:

27000: Published overview of ISMS and vocabulary
27001: Published ISMS requirements
27002: Published code of practice for information security controls
27003: Published ISMS implementation guidelines
27004: Published ISMS measurement guidelines
27005: Published information security risk management guidelines
27006: Published requirements for bodies providing audit and certification of ISMS
27007: Published ISMS auditing guidelines
27008: Published auditor of ISMS guidelines
27010: Published information security management for intersector and interorganizational communications guidelines
27011: Published telecommunications organizations information security management guidelines
27013: Published integrated implementation of ISO/IEC 27001 and ISO/IEC 20000-1 guidance
27014: Published information security governance guidelines
27015: Published financial services information security management guidelines
27016: Information security economics
27017: In-development cloud computing services information security control guidelines based on ISO/IEC 27002
27018: Published code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors
27019: Published energy industry process control system ISMS guidelines based on ISO/IEC 27002
27021: Published competence requirements for information security management systems professionals
27023: Published mapping the revised editions of ISO/IEC 27001 and ISO/IEC 27002
27031: Published information and communication technology readiness for business continuity guidelines
27032: Published cybersecurity guidelines
27033-1: Published network security overview and concepts
27033-2: Published network security design and implementation guidelines
27033-3: Published network security threats, design techniques, and control issues guidelines
27033-4: Published securing communications between networks using security gateways
27033-5: Published securing communications across networks using virtual private networks (VPNs)
27033-6: In-development securing wireless IP network access
27034-1: Published application security overview and concepts
27034-2: In-development application security organization normative framework guidelines
27034-3: In-development application security management process guidelines
27034-4: In-development application security validation guidelines
27034-5: In-development application security protocols and controls data structure guidelines
27034-6: In-development security guidance for specific applications
27034-7: In-development guidance for application security assurance prediction
27035: Published information security incident management guidelines
27035-1: In-development information security incident management principles
27035-2: In-development information security incident response readiness guidelines
27035-3: In-development computer security incident response team (CSIRT) operations guidelines
27036-1: Published information security for supplier relationships overview and concepts
27036-2: Published information security for supplier relationships common requirements guidelines
27036-3: Published information and communication technology (ICT) supply chain security guidelines
27036-4: In-development guidelines for security of cloud services
27037: Published digital evidence identification, collection, acquisition, and preservation guidelines
27038: Published information security digital redaction specification
27039: Published intrusion detection systems (IDS) selection, deployment, and operations guidelines
27040: Published storage security guidelines
27041: Published guidance on assuring suitability and adequacy of incident investigative method
27042: Published digital evidence analysis and interpretation guidelines
27043: Published incident investigation principles and processes
27044: In-development security information and event management (SIEM) guidelines
27050: In-development electronic discovery (eDiscovery) guidelines
27799: Published information security in health organizations guidelines

These standards are developed by the ISO/IEC bodies, but certification or conformity assessment is provided by third parties.

Note
You can find more information regarding ISO standards at https://www.iso.org.

SABSA

SABSA is an enterprise security architecture framework that uses six communication questions (What, Where, When, Why, Who, and How) intersecting with six layers (operational, component, physical, logical, conceptual, and contextual). It is a risk-driven architecture. See Table 21-3.

ITIL

ITIL is a process management development standard originally developed by the UK government’s Central Computer and Telecommunications Agency (CCTA) and owned by AXELOS since 2013. ITIL has five core publications: ITIL Service Strategy, ITIL Service Design, ITIL Service Transition, ITIL Service Operation, and ITIL Continual Service Improvement. These five core publications contain 26 processes. Although ITIL has a security component, it is primarily concerned with managing the service-level agreements (SLAs) between an IT department or organization and its customers. An independent review of security controls should be performed every three years. Table 21-4 shows the ITIL v4 key components: the ITIL service value system (SVS) and the four dimensions model.

Maturity Models

Organizations are not alone in the wilderness when it comes to developing processes for assessing vulnerability, selecting controls, adjusting security policies and procedures to support those controls, and performing audits. As described in the sections that follow, several publications and process models have been developed to help develop these skills. Maturity models are used to determine where you are in the continual improvement process as it relates to security and offer help in reaching a higher level of improvement.

Table 21-3 SABSA Framework Matrix

Viewpoint | Layer | Assets (What) | Motivation (Why) | Process (How) | People (Who) | Location (Where) | Time (When)
Business | Contextual | Business | Risk model | Process model | Organizations and relationships | Geography | Time dependencies
Architect | Conceptual | Business attributes profile | Control objectives | Security strategies and architectural layering | Security entity model and trust framework | Security domain model | Security-related lifetimes and deadlines
Designer | Logical | Business information model | Security policies | Security services | Entity schema and privilege profiles | Security domain definitions and associations | Security processing cycle
Builder | Physical | Business data model | Security rules, practices, and procedures | Security mechanisms | Users, applications, and interfaces | Platform and network infrastructure | Control structure execution
Tradesman | Component | Detailed data structures | Security standards | Security tools and products | Identities, functions, actions, and ACLs | Processes, nodes, addresses, and protocols | Security step timing and sequencing
Facilities manager | Operational | Operational continuity assurance | Operation risk management | Security service management and support | Application and user management and support | Site, network, and platform security | Security operations schedule

Table 21-4 ITIL Version 4 Service Value System

ITIL Service Value Chain: Plan | Engage | Design and Transition | Obtain/Build | Deliver and Support | Improve

ITIL Practices: General Management Practices | Service Management Practices | Technical Management Practices

ITIL Guiding Principles: Focus on value | Start where you are | Progress iteratively with feedback | Collaborate and promote visibility | Think and work holistically | Keep it simple and practical | Optimize and automate

Governance: Directs and controls the organization

Continual Improvement: Seven-step improvement

CMMI

The Capability Maturity Model Integration (CMMI) is a comprehensive set of guidelines that address all phases of the software development life cycle. It describes a series of stages, or maturity levels, that a development process can advance through as it goes from the ad hoc (Initial) model to one that incorporates a budgeted plan for continuous improvement. Figure 21-4 shows its five maturity levels. Although the terms are used as synonyms in casual conversation, accreditation and certification are two different concepts in the context of assurance levels and ratings. However, they are closely related. Certification evaluates the technical system components, whereas accreditation occurs when the adequacy of a system’s overall security is accepted by management.

FIGURE 21-4 CMMI Maturity Levels

Certification

ISO/IEC 27001

ISO/IEC 27001:2013 is the current version of the 27001 standard, and it is one of the most popular standards by which organizations obtain certification for information security. It is covered in Chapter 20.

POLICIES AND PROCEDURES

Policies are broad statements of intent, while procedures are the detailed steps used to carry out that intent. Both mechanisms are used

to guide an organization's effort with regard to security or any other activity over which the organization wishes to gain control. A security policy should cover certain items and should be composed of a set of documents that ensure that key components are secured. The following sections cover the key policies and procedures that should be created and included in a security policy.

Code of Conduct/Ethics

A code of conduct/ethics policy is intended to demonstrate a commitment to ethics in the activities of the principals. It is typically a broad statement of commitment that is supported by detailed procedures designed to prevent unethical activities. For example, the statement might be "We commit to the highest ethical standards in our dealings with others." Supporting this would be a procedure that prohibits the acceptance or offer of gifts during a sales negotiation.

Personnel hiring procedures should include signing all the appropriate documents, including government-required documentation, no-expectation-of-privacy statements, and nondisclosure agreements (NDAs). Organizations usually have a personnel handbook and other hiring information that must be communicated to the employee. The hiring process should include a formal verification that the employee has completed all the required training. Employee IDs and passwords are issued at this time. Code of conduct, conflict of interest, and ethics agreements should also be signed at this time, along with any non-compete agreements intended to discourage employees from leaving the organization for a competitor. Employees should be given guidelines for periodic performance reviews, compensation, and recognition of achievements.

Acceptable Use Policy (AUP)

An acceptable use policy (AUP) is used to inform users of the actions that are allowed and those that are not allowed. It should also provide information on the consequences that may result when these policies are violated. This document should be reviewed and signed by each user during the employee orientation phase of the employment process. The following are examples of the many issues that may be addressed in an AUP:

- Proprietary information stored on electronic and computing devices, whether owned or leased by the company, the employee, or a third party, remains the sole property of the company.
- The employee has a responsibility to promptly report the theft, loss, or unauthorized disclosure of proprietary information.
- Access, use, or sharing of proprietary information is allowed only to the extent that it is authorized and necessary to fulfill assigned job duties.
- Employees are responsible for exercising good judgment regarding the reasonableness of personal use.
- Authorized individuals in the company may monitor equipment, systems, and network traffic at any time. The company reserves the right to audit networks and systems on a periodic basis to ensure compliance with this policy.
- All mobile and computing devices that connect to the internal network must comply with the company access policy.
- System-level and user-level passwords must comply with the password policy.
- All computing devices must be secured with a password-protected screensaver.
- Postings by employees from a company e-mail address to newsgroups should contain a disclaimer stating that the opinions expressed are strictly their own and not necessarily those of the company.
- Employees must use extreme caution when opening e-mail attachments received from unknown senders, which may contain malware.

Password Policy

Password authentication is the most popular authentication method implemented today, but password types often vary from system to system. Before we look at potential password policies, it is vital that you understand the types of passwords that can be used, including the following:

- Standard word passwords: As the name implies, these passwords consist of single words that often include a mixture of upper- and lowercase letters. The advantage of this password type is that it is easy to remember. A disadvantage is that it is easy for attackers to crack or break, resulting in compromised accounts.
- Combination passwords: These passwords, also called composition passwords, use a mix of dictionary words, usually two that are unrelated. Like standard word passwords, they can include upper- and lowercase letters and numbers. An advantage of this password type is that it is harder to break than a standard word password. A disadvantage is that it can be hard to remember.
- Static passwords: This password type is the same for each login. It provides a minimum level of security because the password never changes. It is most often seen in peer-to-peer networks.
- Complex passwords: This password type forces a user to include a mixture of upper- and lowercase letters, numbers, and special characters. For many organizations today, this type of password is enforced as part of the organization's password policy. An advantage of this password type is that it is very hard to crack. A disadvantage is that it is harder to remember and can often be much harder to enter correctly.
- Passphrase passwords: This password type requires that a long phrase be used. Because of the password's length, it is easier to remember but much harder to attack, both of which are definite advantages. Incorporating upper- and lowercase letters, numbers, and special characters in this type of password can significantly increase authentication security.
- Cognitive passwords: This password type is a piece of information that can be used to verify an individual's identity. The user provides this information to the system by answering a series of questions based on her life, such as favorite color, pet's name, mother's maiden name, and so on. An advantage of this type is that users can usually easily remember this information. The disadvantage is that someone who has intimate knowledge of the person's life (spouse, child, sibling, and so on) may be able to provide this information as well.
- One-time passwords (OTPs): Also called a dynamic password, an OTP is used only once to log in to the access control system. This password type provides the highest level of security because it is discarded after it is used once.
- Graphical passwords: Also called CAPTCHA passwords (an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart), this type of password uses graphics as part of the authentication mechanism. One popular implementation requires a user to enter a series of characters that appear in a graphic. This implementation ensures that a human, not a machine, is entering the password. Another popular implementation requires the user to select the appropriate graphic for his account from a list of graphics.
- Numeric passwords: This type of password includes only numbers. Keep in mind that the choices for a password are limited by the number of digits allowed. For example, if all passwords are four digits, then the maximum number of password possibilities is 10,000, from 0000 through 9999. Once an attacker realizes that only numbers are used, cracking user passwords is much easier because the attacker knows the possibilities.
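The keyspace math behind these comparisons can be sketched quickly (a rough illustration; the character-set sizes are common assumptions, not figures from the text):

```python
# Keyspace = (alphabet size) ** (length). A larger keyspace means
# more guesses are needed for an exhaustive brute-force attack.

def keyspace(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

# 4-digit numeric PIN: digits only, as in the text's example
pin = keyspace(10, 4)                       # 10,000 possibilities (0000-9999)

# 8-character lowercase word password
lowercase8 = keyspace(26, 8)

# 8-character complex password: upper + lower + digits + ~32 symbols
complex8 = keyspace(26 + 26 + 10 + 32, 8)

# 20-character lowercase passphrase
passphrase20 = keyspace(26, 20)

print(pin)                        # 10000
print(complex8 > lowercase8)      # True: a bigger alphabet helps
print(passphrase20 > complex8)    # True: here, length beats alphabet size
```

This is why passphrases fare well despite using simple characters: length grows the keyspace exponentially, while a richer alphabet only grows the base.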

The simpler types of passwords are considered weaker than passphrases, one-time passwords, token devices, and login phrases. Once an organization has decided which type of password to use, it must establish its password management policies. Password management considerations include, but are not limited to, the following:

- Password life: How long a password remains valid. For most organizations, passwords are valid for 60 to 90 days.
- Password history: How long before a password can be reused. Password policies usually remember a certain number of previously used passwords.
- Authentication period: How long a user can remain logged in. If a user remains logged in for the specified period without activity, the user is automatically logged out.
- Password complexity: How the password is structured. Most organizations require upper- and lowercase letters, numbers, and special characters. Recommendations include the following: passwords shouldn't contain the username or parts of the user's full name, such as the first name, and passwords should use at least three of the four available character types (lowercase letters, uppercase letters, numbers, and symbols).
- Password length: How long the password must be. Most organizations require 8 to 12 characters.
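The complexity and length rules above could be enforced with a validator along these lines (a minimal sketch; the 8-character minimum, three-of-four character classes, and no-username rule are drawn from the text, but the function itself is hypothetical):

```python
import string

def meets_policy(password: str, username: str, min_length: int = 8) -> bool:
    """Check a password against the policy sketched in the text:
    minimum length, at least three of four character classes,
    and no occurrence of the username inside the password."""
    if len(password) < min_length:
        return False
    if username and username.lower() in password.lower():
        return False
    classes = [
        any(c.islower() for c in password),          # lowercase letters
        any(c.isupper() for c in password),          # uppercase letters
        any(c.isdigit() for c in password),          # numbers
        any(c in string.punctuation for c in password),  # symbols
    ]
    return sum(classes) >= 3

print(meets_policy("Winter2024!", "jsmith"))   # True
print(meets_policy("jsmith2024", "jsmith"))    # False (contains username)
print(meets_policy("short1A", "jsmith"))       # False (too short)
```

In practice such checks are configured in the directory service (for example, via Group Policy) rather than written by hand, but the logic is the same.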

As part of password management, an organization should establish a procedure for changing passwords. Most organizations implement a service that allows users to automatically reset a password before the password expires. In addition, most organizations should consider establishing a password reset policy that addresses situations in which users forget their passwords or their passwords are compromised. A self-service password reset approach allows users to reset their own passwords, without the assistance of help desk employees. An assisted password reset approach requires that users contact help desk personnel for help changing passwords. Password reset policies can also be affected by other organizational policies, such as account lockout policies.
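Account lockout logic of the kind just mentioned might look like the following minimal in-memory sketch (the five-attempt threshold is an assumption; real implementations live in the directory service or PAM stack, not in application code):

```python
from collections import defaultdict

LOCKOUT_THRESHOLD = 5  # assumed value; each organization sets its own

failed_attempts = defaultdict(int)
locked_accounts = set()

def record_login(user: str, success: bool) -> str:
    """Track failed logins; lock the account after too many failures."""
    if user in locked_accounts:
        return "locked"              # an administrator must re-enable it
    if success:
        failed_attempts[user] = 0    # reset the counter on success
        return "ok"
    failed_attempts[user] += 1
    if failed_attempts[user] >= LOCKOUT_THRESHOLD:
        locked_accounts.add(user)
        return "locked"
    return "failed"

for _ in range(5):
    status = record_login("alice", success=False)
print(status)                        # locked after the fifth failure
print(record_login("alice", True))   # still locked until an admin unlocks
```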

Account lockout policies are security policies that organizations implement to protect against attacks carried out against passwords. Organizations often configure account lockout policies so that user accounts are locked after a certain number of unsuccessful login attempts. If an account is locked out, the system administrator may need to unlock or re-enable the user account. Security professionals should also encourage organizations to require users to reset their passwords if their accounts have been locked.

For most organizations, all password policies, including account lockout policies, are implemented at the enterprise level on the servers that manage the network. Depending on which servers are used to manage the enterprise, security professionals must be aware of the security issues that affect user accounts and password management. Two popular server operating systems are Linux and Windows. In Linux, passwords are stored in the /etc/passwd or /etc/shadow file. Because /etc/passwd is a text file that can be easily read, you should ensure that any Linux servers store password hashes in the /etc/shadow file, which is readable only by privileged users. The root user in Linux is a default account that is given administrative-level access to the entire server. If the root account is compromised, all passwords should be changed. Access to the root account should be limited to system administrators, and root login should be allowed only via a system console.

Data Ownership

A data ownership policy is closely related to a data classification policy (covered in Chapter 13, "The Importance of Proactive Threat Hunting"), and often the two policies are combined. This is because the data owner is typically tasked with classifying the data. Therefore, the data ownership policy covers how the owner of each piece of data or each data set is identified. In

most cases, the creator of the data is the owner, but some organizations may deem all data created by a department to be owned by the department head. A user may also become the owner of data by introducing data into the organization that the user did not create; perhaps the data was purchased from a third party. In any case, the data ownership policy should outline both how data ownership is established and the responsibilities of the owner with respect to determining the data classification and identifying those with access to the data.

Data Retention

A data retention policy outlines how various data types must be retained and may rely on the data classifications described in the data classification policy. Data retention requirements vary based on several factors, including data type, data age, and legal and regulatory requirements. Security professionals must understand where data is stored and the type of data stored, and they should provide guidance on managing and archiving data securely. Therefore, each data retention policy must be established with the help of organizational personnel.

A data retention policy usually identifies the purpose of the policy, the portion of the organization affected by it, any exclusions, the personnel responsible for overseeing the policy, the personnel responsible for data destruction, the data types covered, and the retention schedule. Security professionals should work with data owners to develop the appropriate data retention policy for each type of data the organization owns. Examples of data types include, but are not limited to, human resources data, accounts payable/receivable data, sales data, customer data, and e-mail. To design a data retention policy, an organization should answer the following questions:

- What are the legal/regulatory requirements and business needs for the data?
- What are the types of data?
- What are the retention periods and destruction needs of the data?

The personnel who are most familiar with each data type should work with security professionals to determine the data retention policy. For example, human resources personnel should help design the data retention policy for all human resources data. While designing a data retention policy, the organization must also consider the media and hardware that will be used to retain the data. With this information in hand, the data retention policy should be drafted and formally adopted by the organization and/or business unit.

Once a data retention policy has been created, personnel must be trained to comply with it. Auditing and monitoring should be configured to ensure compliance. Periodically, data owners and processors should review the data retention policy to determine whether any changes need to be made. All data retention policies, implementation plans, training, and auditing should be fully documented.

Remember that for most organizations, a one-size-fits-all solution is impossible because of the different types of data; only those most familiar with each data type can determine the best retention policy for that data. A security professional should be involved in the design of data retention policies to ensure that data security is always considered and that the policies satisfy organizational needs, but the security professional should act only in an advisory role, providing expertise when needed.

Account Management

The account management policy helps guide the management of identities and accounts. Identity and account management is vital to any authentication process. As a security professional, you must ensure that your organization has a formal procedure to control the creation and allocation of access credentials or identities. If invalid accounts are allowed to be created and are not disabled, security breaches will occur. Most organizations implement a method to review the identification and authentication process to ensure that user accounts are current. Answering questions such as the following is likely to help in the process:

- Is a current list of authorized users and their access maintained and approved?
- Are passwords changed at least every 90 days, or earlier if needed?
- Are inactive user accounts disabled after a specified period of time?

Any identity management procedure must include processes for creating (provisioning), changing and monitoring (reviewing), and removing users from the access control system (revoking). This is referred to as the access control provisioning life cycle. When initially establishing a user account, new users should be required to provide valid photo identification and should sign a statement regarding password confidentiality. User accounts must be unique. Policies should be in place to standardize the structure of user accounts. For example, all user accounts should be firstname.lastname or some other structure. This ensures that users in an organization will be able to determine a new user’s identification, mainly for communication purposes. After creation, user accounts should be monitored to ensure that they remain active. Inactive accounts should be automatically disabled after a certain period of inactivity, based on business requirements. In addition, any termination policy should include formal procedures to ensure that all user

accounts are disabled or deleted. Elements of proper account management include the following:

- Establish a formal process for establishing, issuing, and closing user accounts.
- Periodically review user accounts.
- Implement a process for tracking access authorization.
- Periodically rescreen personnel in sensitive positions.
- Periodically verify the legitimacy of user accounts.
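Two of these practices, standardized account naming and disabling inactive accounts, can be sketched as follows (the firstname.lastname convention comes from the text; the 90-day inactivity window is an assumed business requirement):

```python
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)  # assumed threshold; set per business needs

def make_username(first: str, last: str) -> str:
    """Standardized firstname.lastname account structure from the text."""
    return f"{first.strip().lower()}.{last.strip().lower()}"

def accounts_to_disable(accounts: dict, now: datetime) -> list:
    """Return usernames whose last login exceeds the inactivity limit."""
    return [user for user, last_login in accounts.items()
            if now - last_login > INACTIVITY_LIMIT]

now = datetime(2020, 6, 1)
accounts = {
    make_username("Jane", "Doe"): datetime(2020, 5, 20),    # recently active
    make_username("John", "Smith"): datetime(2020, 1, 15),  # inactive > 90 days
}
print(accounts_to_disable(accounts, now))   # ['john.smith']
```

In a real environment this review would query the directory service for last-logon timestamps rather than an in-memory dictionary, but the decision logic is the same.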

User account reviews are a vital part of account management. User accounts should be reviewed for conformity with the principle of least privilege, which specifies that users should be given only the rights and permissions required to do their jobs and no more. User account reviews can be performed on an enterprisewide, systemwide, or application-by-application basis. The size of the organization greatly affects which of these methods to use. As part of user account reviews, organizations should determine whether all user accounts are active.

Continuous Monitoring

To support the enforcement of a security policy and its various parts, operational procedures should be defined and practiced on a daily basis. One of the most common operational procedures that should be defined is continuous monitoring. Before continuous monitoring can be successful, an organization must ensure that operational baselines are captured. After all, an organization cannot recognize abnormal patterns or behavior if it doesn't know what "normal" is. These baselines should also be revisited periodically to ensure that they have not changed. For example, if a single web server is upgraded to a web server farm, a new performance baseline should be captured.
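Comparing current metrics against a captured baseline can be sketched like this (the metric names and the 20 percent deviation tolerance are illustrative assumptions, not values from the text):

```python
def deviations(baseline: dict, current: dict, tolerance: float = 0.20) -> list:
    """Flag metrics that deviate from their baseline by more than the tolerance."""
    flagged = []
    for metric, base in baseline.items():
        observed = current.get(metric, 0.0)
        if base and abs(observed - base) / base > tolerance:
            flagged.append(metric)
    return flagged

# Baseline captured during normal operation (hypothetical figures)
baseline = {"cpu_percent": 40.0, "logins_per_hour": 120.0, "outbound_mbps": 15.0}

# Metrics observed during the current monitoring interval
current = {"cpu_percent": 45.0, "logins_per_hour": 310.0, "outbound_mbps": 14.0}

print(deviations(baseline, current))   # ['logins_per_hour'] - worth investigating
```

A spike in logins per hour, as here, is exactly the kind of anomaly the text describes: it is only recognizable as abnormal because a baseline exists.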

Security analysts must ensure that the organization's security posture is maintained at all times, which requires continuous monitoring. Auditing and security logs should be reviewed on a regular schedule. Performance metrics should be compared to baselines. Even simple patterns such as normal user login/logout times should be monitored. If a user suddenly starts logging in and out at irregular times, the user's supervisor should be alerted to verify that the activity is authorized. Organizations must always be diligent in monitoring the security of the enterprise. An example of a continuous monitoring tool is the Microsoft Security Compliance Toolkit (SCT), which can be used to monitor compliance with a baseline. It works with two other Microsoft tools: Group Policy and Microsoft Endpoint Configuration Manager (MECM).

Work Product Retention

Work product is anything you complete for a person or business that has hired you. Organizations need a clear work product retention policy that defines all work product as property of the organization and not of the worker who created it. This requires employees to sign an agreement to that effect at the time of employment.

CATEGORY

Control categories refer to how a control responds to an issue, while the control type refers to how the control is implemented. There are three control categories: managerial (or administrative), operational, and technical. Control types are covered in the later section "Control Type."

Managerial

Managerial controls (also called administrative controls) are implemented to administer the organization's assets and

personnel and include security policies, procedures, standards, baselines, and guidelines that are established by management. These controls are commonly referred to as soft controls. Specific examples are personnel controls, data classification, data labeling, security awareness training, and supervision.

Security awareness training is a very important administrative control. Its purpose is to improve the organization's attitude about safeguarding data. The benefits of security awareness training include a reduction in the number and severity of errors and omissions, a better understanding of information value, and better administrator recognition of unauthorized intrusion attempts. A cost-effective way to ensure that employees take security awareness seriously is to create an award or recognition program.

Operational

Operational controls are measures that are made part of the organization's day-to-day security stance. They include the following control types:

- Directive controls: Specify acceptable practice within an organization. They are in place to formalize an organization's security directive, mainly to its employees. The most popular directive control is an acceptable use policy (AUP), which lists proper (and often examples of improper) procedures and behaviors that personnel must follow. Any organizational security policies or procedures usually fall into this category. Keep in mind that directive controls are effective only if there is a stated consequence for not following the organization's directions.
- Deterrent controls: Deter or discourage an attacker. Via deterrent controls, attacks can be discovered early in the process. Deterrent controls often trigger preventive and corrective controls. Examples of deterrent controls include user identification and authentication, fences, lighting, and organizational security policies, such as an NDA.

Technical

Technical controls (also called logical controls) are software or hardware components used to restrict access. Specific examples of logical controls are firewalls, IDSs, IPSs, encryption, authentication systems, protocols, auditing and monitoring, biometrics, smart cards, and passwords. An example of implementing a technical control is adopting a new security policy that forbids employees from remotely configuring the e-mail server from a third party’s location during work hours. Although auditing and monitoring are logical controls and are often listed together, they are actually two different controls. Auditing is a one-time or periodic event to evaluate security. Monitoring is an ongoing activity that examines either the system or users.

CONTROL TYPE

Controls are implemented as countermeasures to identified vulnerabilities. Control type refers to how a control is implemented. Control mechanisms are divided into six types, as explained in this section.

Preventative

Preventative controls (or preventive controls) prevent an attack from occurring. Examples of preventive controls include locks, badges, biometric systems, encryption, IPSs, antivirus software, personnel security, security guards, passwords, and security awareness training.

Detective

Detective controls are in place to detect an attack while it is occurring and to alert appropriate personnel. Examples of detective

controls include motion detectors, IDSs, logs, guards, investigations, and job rotation.

Corrective

Corrective controls are in place to reduce the effect of an attack or other undesirable event and to fix or restore the entity that was attacked. Examples of corrective controls include installing fire extinguishers, isolating or terminating a connection, implementing new firewall rules, and using server images to restore a system to a previous state.

Deterrent

Deterrent controls deter or discourage an attacker. Via deterrent controls, attacks can be discovered early in the process. Deterrent controls often trigger preventive and corrective controls. Examples of deterrent controls include user identification and authentication, fences, lighting, and organizational security policies, such as an NDA.

Directive

Directive controls specify acceptable practice within an organization. They are in place to formalize an organization's security directive, mainly to its employees. The most popular directive control is an acceptable use policy (AUP), which lists proper (and often examples of improper) procedures and behaviors that personnel must follow. Any organizational security policies or procedures usually fall into this category. Keep in mind that directive controls are effective only if there is a stated consequence for not following the organization's directions.

Physical

Physical controls are implemented to protect an organization's facilities and personnel. Personnel concerns should take priority over all other concerns. Specific examples

of physical controls include perimeter security, badges, swipe cards, guards, dogs, mantraps, biometrics, and cabling.

AUDITS AND ASSESSMENTS

Assessing vulnerabilities, selecting controls, and adjusting security policies and procedures to support those controls without performing verification and quality control is somewhat like driving without a dashboard. Just as you would have no information about the engine temperature, speed, and fuel level, you would be unable to determine whether your efforts are effective. Audits differ from internal assessments in that they are usually best performed by a third party. An organization should conduct internal and third-party audits as part of any security assessment and testing strategy. An audit should test all security controls that are currently in place. Some guidelines to consider as part of a good security audit plan include the following:

- At minimum, perform annual audits to establish a security baseline.
- Determine your organization's objectives for the audit and share them with the auditors.
- Set the ground rules for the audit before it starts, including the dates/times of the audit.
- Choose auditors who have security experience.
- Involve business unit managers early in the process.
- Ensure that auditors rely on experience, not just checklists.
- Ensure that the auditor's report reflects risks that your organization has identified.
- Ensure that the audit is conducted properly.
- Ensure that the audit covers all systems and all policies and procedures.
- Examine the report when the audit is complete.

Audits and assessments can fall into two categories, which are covered in the following sections.

Regulatory

Many regulations today require that audits occur. Organizations used to rely on Statement on Auditing Standards (SAS) 70, which provided auditors information and verification about data center controls and processes related to the data center user and financial reporting. In 2011, the Statement on Standards for Attestation Engagements (SSAE) No. 16 took the place of SAS 70 as the authoritative standard for auditing service organizations; it was subsequently updated to SSAE No. 18. These audits verify that the controls and processes set in place by a data center are actually followed. SSAE 18 verifies the controls and processes and also requires a written assertion regarding the design and operating effectiveness of the controls being reviewed.

An SSAE 18 audit results in a Service Organization Control (SOC) 1 report, which focuses on internal controls over financial reporting. There are two types of SOC 1 reports:

- SOC 1, Type 1 report: Focuses on the auditors' opinion of the accuracy and completeness of the data center management's design of controls, system, and/or service.
- SOC 1, Type 2 report: Includes the Type 1 content plus an audit of the effectiveness of controls over a certain time period, normally between six months and a year.

Two other report types are also available: SOC 2 and SOC 3. Both of these audits provide benchmarks for controls related to the security, availability, processing integrity, confidentiality, or privacy of a system and its information. A SOC 2 report includes service auditor testing and results, and a SOC 3 report provides only the system description and auditor opinion. A SOC 3 report

is for general use and provides a level of certification for data center operators that assures data center users of facility security, high availability, and process integrity. Table 21-5 briefly compares the three types of SOC reports. Included in the table are two new report types as well.

Table 21-5 SOC Report Comparison Chart

Report Type | What It Reports On | Who Uses It
SOC 1 | Internal controls over financial reporting | User auditors and users' controller office
SOC 2 | Security, availability, processing integrity, confidentiality, or privacy controls | Management, regulators, and others; shared under nondisclosure agreement (NDA)
SOC 3 | Security, availability, processing integrity, confidentiality, or privacy controls | Publicly available to anyone
SOC for Cybersecurity | An organization's efforts to prevent, monitor, and effectively handle any cybersecurity threats | Management and practitioners
SOC Consulting & Readiness | The controls an organization currently has in place, while also preparing it for the actual execution of a SOC report | Management and practitioners

Compliance

No organization operates within a bubble. All organizations are affected by laws, regulations, and compliance requirements. Security analysts must understand the laws and regulations of the country or countries in which they work and the industry within which they operate. In many cases, laws and regulations prescribe how specific actions must be taken. In other cases, they leave it up to the organization to determine how to comply. Significant pieces of legislation that can affect an organization and its security policy are covered in Chapter 19.

EXAM PREPARATION TASKS

As mentioned in the section “How to Use This Book” in the Introduction, you have several choices for exam preparation: the exercises here, Chapter 22, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

REVIEW ALL KEY TOPICS

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 21-6 lists a reference of these key topics and the page numbers on which each is found.

Table 21-6 Key Topics in Chapter 21

Key Topic Element | Description | Page Number
Table 21-2 | NIST SP 800-53 Rev 4 control families | 552
Bulleted list | TOGAF domains | 554
Figure 21-2 | TOGAF ADM model | 554
Bulleted list | NIST Cybersecurity Framework | 555
Bulleted list | The ISO/IEC 27000 Series | 556
Table 21-3 | SABSA framework matrix | 560
Table 21-4 | ITIL v4 service value system | 561
Figure 21-4 | CMMI maturity levels | 562
Bulleted list | Issues that may be addressed in an AUP | 563
Bulleted list | Password types | 564
Bulleted list | Password management policies | 566
Section | Control types | 571
Table 21-5 | SOC report comparison chart | 574

DEFINE KEY TERMS

Define the following key terms from this chapter and check your answers in the glossary:

frameworks, NIST SP 800-53, COBIT, The Open Group Architecture Framework (TOGAF), NIST Cybersecurity Framework version 1.1, ISO/IEC 27000 Series, SABSA, ITIL, maturity models, Capability Maturity Model Integration (CMMI), certification, accreditation, National Information Assurance Certification and Accreditation Process (NIACAP), ISO/IEC 27001:2013, code of conduct/ethics, acceptable use policy (AUP), standard word passwords, combination passwords, static passwords, complex passwords, passphrase passwords, cognitive passwords, one-time passwords (OTPs), graphical passwords, numeric passwords, password life, password history, authentication period, password complexity, password length, work product retention, managerial (administrative) controls, operational controls, directive controls, deterrent controls, technical controls, physical controls, preventative controls, detective controls, corrective controls, SOC 1 Type 1 report, SOC 1 Type 2 report

REVIEW QUESTIONS

1. ________________ is a security controls development framework developed by NIST.

2. List the family and class of at least two of the NIST SP 800-53 control families.

3. Match the following terms with their definitions.

Terms | Definitions
Code of conduct/ethics | Divides the controls into three classes: technical, operational, and management
Acceptable use policy | Work done for and owned by the organization
Work product retention | Describes what can be done by users
NIST SP 800-53 | Details standards of business conduct

4. List at least two guidelines to consider as part of a good security audit plan.

5. List at least one SOC report, including what it reports on and who uses it.

6. Match the following terms with their definitions.

Terms | Definitions
Control Objectives for Information and Related Technology (COBIT) | Focuses exclusively on IT security
The Open Group Architecture Framework (TOGAF) | An enterprise architecture framework that helps organizations design, plan, implement, and govern an enterprise information architecture
NIST Cybersecurity Framework | A security program development standard on how to develop and maintain an information security management system (ISMS)
ISO/IEC 27000 Series | Security controls development framework that uses a process model to subdivide IT into four domains

7. _______________________ controls are software or hardware components used to restrict access.

8. List and define at least two password policies.

9. Match the following terms with their definitions.

Terms | Definitions
Sherwood Applied Business Security Architecture (SABSA) | Comprehensive set of guidelines that address all phases of the software development life cycle
Information Technology Infrastructure Library (ITIL) | Provides a standard set of activities, general tasks, and a management structure to certify and accredit systems that maintain the information assurance and security posture of a system or site
Capability Maturity Model Integration (CMMI) | Enterprise security architecture framework that uses the six communication questions (What, Where, When, Why, Who, and How) that intersect with six layers (operational, component, physical, logical, conceptual, and contextual)
National Information Assurance Certification and Accreditation Process (NIACAP) | Process management development standard developed by the Office of Management and Budget in OMB Circular A-130

10. A(n) ______________________ policy is one intended to demonstrate a commitment to ethics in the activities of the principals.

Chapter 22

Final Preparation

The purpose of this chapter is to demystify the certification preparation process for you. This includes taking a more detailed look at the actual certification exam itself. This chapter shares some helpful ideas on ensuring that you are ready for the exam. Many people become anxious about taking exams, so this chapter gives you the tools to build confidence for exam day.

The first 21 chapters of this book cover the technologies, protocols, design concepts, and considerations required to be prepared to pass the CompTIA Cybersecurity Analyst (CySA+) CS0-002 exam. While those chapters supply the detailed information, most people need more preparation than just reading the first 21 chapters of this book. This chapter details a set of tools and a study plan to help you complete your preparation for the exam.

This short chapter has four main sections. The first section lists the CompTIA CySA+ CS0-002 exam information and breakdown. The second section shares some important tips to keep in mind to ensure you are ready for this exam. The third section discusses exam preparation tools useful at this point in the study process. The final section of this chapter lists a suggested study plan now that you have completed all the earlier chapters in this book.

Note: Appendix C, “Memory Tables,” and Appendix D, “Memory Tables Answer Key,” exist as soft-copy appendixes on the website for this book, which you can

access by going to https://www.pearsonITcertification.com/register, registering your book, and entering this book’s ISBN: 9780136747161.

EXAM INFORMATION

Here are details you should be aware of regarding the exam that maps to this text:

Exam code: CS0-002
Question types: Multiple-choice and performance-based questions
Number of questions: Maximum of 85
Time limit: 165 minutes
Required passing score: 750 (on a scale of 100 to 900)
Exam fee (subject to change): $359.00 USD

Note: The following information is copied from the CompTIA CySA+ web page.

As attackers have learned to evade traditional signature-based solutions, such as firewalls and anti-virus software, an analytics-based approach within the IT security industry is increasingly important for organizations. CompTIA CySA+ applies behavioral analytics to networks to improve the overall state of security through identifying and combating malware and advanced persistent threats (APTs), resulting in an enhanced threat visibility across a broad attack surface. It will validate an IT professional’s ability to proactively defend and continuously improve the security of an organization. CySA+ will verify the successful candidate has the knowledge and skills required to

Leverage intelligence and threat detection techniques
Analyze and interpret data
Identify and address vulnerabilities
Suggest preventative measures
Effectively respond to and recover from incidents

GETTING READY

Here are some important tips to keep in mind to ensure that you are ready for this rewarding exam:

Note: Recently CompTIA has expanded its online testing offerings to include the CySA+ exam. For information on this option, see https://www.comptia.org/testing/testing-options/take-online-exam.

Build and use a study tracker: Consider taking the exam objectives and building a study tracker. This can be a notebook outlining the objectives, with your notes written out. Using pencil and paper can help concentration by making you take the time to think about potential answers to questions that might be asked on the exam for each objective. A study tracker will help ensure that you have not missed anything and that you are confident for your exam. Other options exist, including the sample Study Planner provided as a website supplement to this book (Appendix E). Whatever works best for you is the right option to use.

Think about your time budget for questions in the exam: When you do the math, you realize that you have a bit less than 2 minutes per exam question. While this does not sound like enough time, keep in mind that many of the questions will be very straightforward, and you will take 15 to 30 seconds on those. This builds time for other questions as you take your exam.

Watch the clock: Periodically check the time remaining as you are taking the exam. You might even find that you can slow down pretty dramatically if you have built up a nice block of extra time.

Consider ear plugs: Some people are sensitive to noise when concentrating. If you are one of them, ear plugs may help. There might be other test takers in the center with you, and you do not want to be distracted by them.

Plan your travel time: Give yourself extra time to find the center and get checked in. Be sure to arrive early. As you test more at that center, you can certainly start cutting it closer time-wise.

Get rest: Most students report success with getting plenty of rest the night before the exam. All-night cram sessions are not typically successful.

Bring in valuables but get ready to lock them up: The testing center will take your phone, your smart watch, your wallet, and other such items and will provide a secure place for them.

Use the restroom before going in: If you think you will need a break during the test, clarify the rules with the test proctor.

Take your time getting settled: Once you are seated, take a breath and organize your thoughts. Remind yourself that you have worked hard for this opportunity and expect to do well. The 165-minute timer doesn’t start until you tell it to after a brief tutorial. The timer starts when you agree to see the first question.

Take notes: You will be given note-taking materials, so take advantage of them. Sketch out lists and mnemonics that you memorized. The note paper can be used for any calculations you need, but it is okay to write notes to yourself before beginning.

Practice exam questions are great, so use them: This text provides many practice exam questions. Be sure to go through them thoroughly. Remember, you shouldn’t blindly memorize answers; instead, let the questions really demonstrate where you are weak in your knowledge and then study up on those areas.

TOOLS FOR FINAL PREPARATION

This section lists some information about the available tools and how to access them.

Pearson Test Prep Practice Test Software and Questions on the Website

Register this book to get access to the Pearson Test Prep practice test software (software that displays and grades a set of exam-realistic, multiple-choice questions). Using the Pearson

Test Prep practice test software, you can either study by going through the questions in Study mode or take a simulated (timed) CySA+ exam.

The Pearson Test Prep practice test software comes with two full practice exams. These practice tests are available to you either online or as an offline Windows application. To access the practice exams that were developed with this book, please see the instructions in the card inserted in the sleeve in the back of the book. This card includes a unique access code that enables you to activate your exams in the Pearson Test Prep software. You will find detailed instructions for accessing the Pearson Test Prep software in the Introduction to this book.

Memory Tables

Like most Cert Guides, this book purposely organizes information into tables and lists for easier study and review. Rereading these tables and lists can be very useful before the exam. However, it is easy to skim over the tables without paying attention to every detail, especially when you remember having seen the table’s contents when reading the chapter.

Instead of just reading the tables in the various chapters, this book’s Appendixes C and D give you another review tool. Appendix C lists partially completed versions of many of the tables from the book. You can open Appendix C (a PDF available on the book website after registering) and print the appendix. For review, you can attempt to complete the tables. This exercise can help you focus on the review. It also exercises the memory connectors in your brain, and it prompts you to think about the information from context clues, which forces a little more contemplation about the facts. Appendix D, also a PDF located on the book website, lists the completed tables so you can check yourself. You can also just refer to the tables as printed in the book.

Chapter-Ending Review Tools

Chapters 1 through 21 each have several features in the “Exam Preparation Tasks” section at the end of the chapter. You might have already worked through these in each chapter. It can also be helpful to use these tools again as you make your final preparations for the exam.

SUGGESTED PLAN FOR FINAL REVIEW/STUDY

This section lists a suggested study plan that should guide you until you take the CompTIA Cybersecurity Analyst (CySA+) CS0-002 exam. Certainly, you can ignore this plan, use it as is, or just take suggestions from it. The plan uses four steps:

Step 1. Review the key topics and the “Do I Know This Already?” questions: You can use the table that lists the key topics in each chapter, or just flip the pages looking for key topics. Also, reviewing the DIKTA quiz questions from the beginning of the chapter can be helpful for review.

Step 2. Complete memory tables: Open Appendix C from the book website and print the entire thing, or print the tables by major part. Then complete the tables.

Step 3. Review the “Review Questions” sections: Go through the review questions at the end of each chapter to identify areas where you need more study.

Step 4. Use the Pearson Test Prep practice test software to practice: You can use the Pearson Test Prep practice test software to study by using a bank of unique exam-realistic questions available only with this book.

SUMMARY

The tools and suggestions listed in this chapter have been designed with one goal in mind: to help you develop the skills required to pass the CompTIA Cybersecurity Analyst (CySA+) CS0-002 exam. This book has been developed from the beginning to not just tell you the facts but also help you learn how to apply them. No matter what your experience level leading up to when you take the exam, it is my hope that the broad range of preparation tools and the structure of the book help you pass the exam with ease. I hope you do well on the exam.

Appendix A

Answers to the “Do I Know This Already?” Quizzes and Review Questions

CHAPTER 1

Do I Know This Already?

1. D. Proprietary/closed-source intelligence sources are those that are not publicly available and usually require a fee to access. Examples of this are platforms maintained by private organizations that supply constantly updating intelligence information. In many cases this data is developed from all of the provider’s customers and other sources.

2. A. Trusted Automated eXchange of Indicator Information (TAXII) is an application protocol for exchanging cyber threat intelligence (CTI) over HTTPS. It defines two primary services, Collections and Channels.

3. B. Because zero-day attacks occur before a fix or patch has been released, it is difficult to prevent them. As with many other attacks, keeping all software and firmware up to date with the latest updates and patches is important.

4. C. Hacktivists are activists for a cause, such as animal rights, who use hacking as a means to get their message out and affect the businesses that they feel are detrimental to their cause.

5. B. Collection is the stage in which most of the hard work occurs. It is also the stage at which recent advances in

artificial intelligence (AI) and automation have changed the game. It’s time-consuming work that involves web searches, interviews, identifying sources, and monitoring, to name a few activities.

6. B. Commodity malware is malware that is widely available either for purchase or by free download. It is not customized or tailored to a specific attack. It does not require complete understanding of its processes and is used by a wide range of threat actors with a range of skill levels.

7. A. In the healthcare community, where protection of patient data is legally required by HIPAA, an example of a sharing platform is H-ISAC (Health Information Sharing and Analysis Center). It is a global operation focused on sharing timely, actionable, and relevant information among its members, including intelligence on threats, incidents, and vulnerabilities.

Review Questions

1. Possible answers can include the following:

Print and online media
Internet blogs and discussion groups
Unclassified government data
Academic and professional publications
Industry group data

2. Structured Threat Information eXpression (STIX). While STIX was originally sponsored by the Office of Cybersecurity and Communications within the U.S. Department of Homeland Security, it is now under the management of the Organization for the Advancement of Structured Information Standards (OASIS), a nonprofit consortium that seeks to advance the development, convergence, and adoption of open standards for the Internet.

3. STIX: An XML-based programming language that can be used to communicate cybersecurity data among those using the language.

OpenIOC: An open framework that is designed for sharing threat intelligence information in a machine-readable format.

Cyber Intelligence Analytics Platform (CAP) v2.0: Uses its proprietary artificial intelligence and machine learning algorithms to help organizations unravel cyber risks and threats and enables proactive cyber posture management.

4. Insider. Insiders who are already inside the network perimeter and already know the network are a critical danger.

5. The models are as follows:

Hub and spoke: One central clearinghouse
Source/subscriber: One organization is the single source of information
Peer-to-peer: Multiple organizations share their information

6. Hacktivists. Hacktivists are activists for a cause, such as animal rights, who use hacking as a means to get their message out and affect the businesses that they feel are detrimental to their causes.

7. Zero-day: Threat with no known solution
APT: Threat carried out over a long period of time
Terrorist: Hacks not for monetary gain but simply to destroy or deface

8. Nation-state. Nation-state or state sponsors are usually foreign governments. They are interested in pilfering data, including intellectual property and research and

development data, from major manufacturers, tech companies, government agencies, and defense contractors. They have the most resources and are the best organized of any of the threat actor groups.

9. Requirements. Before beginning intelligence activities, security professionals must identify what the immediate issue is and define as closely as possible the requirements of the information that needs to be collected and analyzed. This means the types of data to be sought are driven by what we might fear the most or by recent breaches or issues. The amount of potential information may be so vast that unless we filter it to what is relevant, we may be unable to fully understand what is occurring in the environment.

10. CISA. The Cybersecurity and Infrastructure Security Agency (CISA) maintains a number of chartered organizations, among them the Aviation Government Coordinating Council.

CHAPTER 2

Do I Know This Already?

1. C. MITRE ATT&CK is a knowledge base of adversary tactics and techniques based on real-world observations. It is an open system, and attack matrices based on it have been created for various industries. It is designed as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.

2. A. Some threat intelligence data is generated from past activities. Reputational scores may be generated for traffic sourced from certain IP address ranges, domain names, and URLs.

3. C. First, you must have a grasp of the capabilities of the attacker or adversary. Threat actors have widely varying

capabilities. When carrying out threat modeling, you may decide to develop a more comprehensive list of threat actors to help in scenario development.

4. B. Security engineering is the process of architecting security features into the design of a system or set of systems. It has as its goal an emphasis on security from the ground up, sometimes stated as “building in security.” Unless the very latest threats are shared with this function, engineers cannot be expected to build in features that prevent threats from being realized.

Review Questions

1.

Corner | Description
Adversary | Describes the intent of the attack
Victim | Describes the target or targets
Capabilities | Describes attacker intrusion tools and techniques
Infrastructure | Describes the set of systems an attacker uses to launch attacks

2. adversary. Adversary focuses on the intent of the attack.

3. Behavioral. Some threat intelligence data is based not on reputation but on the behavior of the traffic in question. Behavioral analysis is another term for anomaly analysis.

4. Indicator of compromise (IoC). An IoC is any activity, artifact, or log entry that is typically associated with an attack of some sort.

5. Examples include the following:

Virus signatures
Known malicious file types
Domain names of known botnet servers
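As a toy illustration of how indicators like these are consumed in practice, the sketch below checks log entries against a small IoC list. The domains are invented placeholders, not real indicators, and the helper is hypothetical rather than part of any real tool.

```python
# Hypothetical IoC watchlist; these domains are made-up placeholders.
KNOWN_BAD_DOMAINS = {"botnet-c2.example", "malware-drop.example"}

def find_ioc_hits(log_lines):
    """Return the log lines that reference a known-bad domain."""
    hits = []
    for line in log_lines:
        if any(domain in line for domain in KNOWN_BAD_DOMAINS):
            hits.append(line)
    return hits

logs = [
    "DNS query: updates.vendor.example",
    "DNS query: botnet-c2.example",
]
print(find_ioc_hits(logs))  # ['DNS query: botnet-c2.example']
```

Real IoC matching engines work against far richer indicator types (file hashes, URLs, registry keys), but the match-against-a-feed pattern is the same.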

An indicator of compromise (IoC) is any activity, artifact, or log entry that is typically associated with an attack of some sort.

6.

Acronym | Description
TLP | Set of designations used to ensure that sensitive information is shared with the appropriate audience
MITRE ATT&CK | Knowledge base of adversary tactics and techniques based on real-world observations
CVSS | System of ranking vulnerabilities that are discovered based on predefined metrics
IoC | Any activity, artifact, or log entry that is typically associated with an attack of some sort

7. PR:L stands for Privileges Required, where L stands for Low and means the attacker requires privileges that provide basic user capabilities that could normally affect only settings and files owned by a user. The Common Vulnerability Scoring System (CVSS) is a system of ranking vulnerabilities that are discovered based on predefined metrics. This system ensures that the most critical vulnerabilities can be easily identified and addressed after a vulnerability test.

8. CVSS is composed of three metric groups:

Base: Characteristics of a vulnerability that are constant over time and user environments
Temporal: Characteristics of a vulnerability that change over time but not among user environments
Environmental: Characteristics of a vulnerability that are relevant and unique to a particular user’s environment

9. AV. Attack Vector (AV) describes how the attacker would exploit the vulnerability and has four possible values:

L: Stands for Local and means that the attacker must have physical or logical access to the affected system
A: Stands for Adjacent network and means that the attacker must be on the local network
N: Stands for Network and means that the attacker can cause the vulnerability from any network
P: Stands for Physical and requires the attacker to physically touch or manipulate the vulnerable component

10.

Value | Description
P | Means the attack requires the attacker to physically touch or manipulate the vulnerable component
L | Means that the attacker must have physical or logical access to the affected system
N | Means that the attacker can cause the vulnerability from any network
A | Means that the attacker must be on the local network
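The metric notation used in these answers (AV, PR, and so on) appears in CVSS vector strings, which can be split apart mechanically. The sketch below parses a CVSS v3.x vector into its metrics; it only labels the metrics and deliberately does not implement the official CVSS scoring formula.

```python
# Attack Vector (AV) value descriptions, paraphrased from the answer above.
AV_VALUES = {
    "N": "Network: exploitable from any network",
    "A": "Adjacent: attacker must be on the local network",
    "L": "Local: attacker needs physical or logical access",
    "P": "Physical: attacker must physically touch the component",
}

def parse_vector(vector: str) -> dict:
    """Split a vector such as 'CVSS:3.1/AV:N/AC:L/PR:L' into a metric map."""
    metrics = {}
    for part in vector.split("/"):
        key, _, value = part.partition(":")
        metrics[key] = value
    return metrics

vec = parse_vector("CVSS:3.1/AV:N/AC:L/PR:L/UI:N")
print(vec["AV"], "->", AV_VALUES[vec["AV"]])
```

Being able to read a vector string at a glance (for example, spotting AV:N/PR:N as remotely exploitable with no privileges) is a useful exam and on-the-job skill.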

CHAPTER 3

Do I Know This Already?

1. C. The relative value of the information that could be discovered through the compromise of the components under assessment helps to identify the number and type of resources that should be devoted to the issue.

2. A. A true positive occurs when the scanner correctly identifies a vulnerability. True means the scanner is correct, and positive means it identified a vulnerability.

3. A. The patch management life cycle includes the following steps:

Step 1. Determine the priority of the patches and schedule the patches for deployment.
Step 2. Test the patches prior to deployment to ensure that they work properly and do not cause system or security issues.
Step 3. Install the patches in the live environment.
Step 4. After patches are deployed, ensure that they work properly.

4. D. While running a scan does distract from day-to-day operations, it is not considered to be a risk. Failure to scan actually increases risk.

5. B. A memorandum of understanding (MOU) is a document that, while not legally binding, indicates a general agreement between the principals to do something together. An

organization may have MOUs with multiple organizations, and MOUs may in some instances contain security requirements that inhibit or prevent the deployment of certain measures.

Review Questions

1. Asset criticality. Data and assets should be classified based on their value to the organization and their sensitivity to disclosure. Assigning a value to data and assets allows an organization to determine the resources that should be used to protect them.

2. Acceptable answers include the following:

Will you be able to recover the data in case of disaster?
How long will it take to recover the data?
What is the effect of this downtime, including loss of public standing?

Criticality is a measure of the importance of the data. Data that is considered sensitive may not necessarily be considered critical. Assigning a level of criticality to a particular data set requires considering the answers to the preceding questions.

3. Passive. The biggest benefit of a passive vulnerability scanner (PVS) is its ability to do its work without impacting the monitored network. Some examples of PVSs are the Nessus Network Monitor (formerly Tenable PVS) and NetScanTools Pro.

4.

Terms | Definitions
False positive | Occurs when the scanner identifies a vulnerability that does not exist.
True positive | Occurs when the scanner correctly identifies a vulnerability.
False negative | Occurs when the scanner does not identify a vulnerability that exists.
True negative | Occurs when the scanner correctly determines that a vulnerability does not exist.
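These four outcomes can be expressed as a tiny decision helper. This is an illustrative sketch, not part of any scanner's API; the parameter names are invented for clarity.

```python
def classify(reported: bool, actually_vulnerable: bool) -> str:
    """Label one scanner finding against ground truth."""
    if reported and actually_vulnerable:
        return "true positive"    # scanner correctly flagged a real vulnerability
    if reported:
        return "false positive"   # scanner flagged a vulnerability that does not exist
    if actually_vulnerable:
        return "false negative"   # scanner missed a real vulnerability
    return "true negative"        # scanner correctly reported nothing

print(classify(True, True))    # true positive
print(classify(False, True))   # false negative
```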

True means the scanner is correct in its assessment, and false means it is incorrect. Positive means a vulnerability was detected, and negative means that one was not detected.

5. Configuration baselines. A baseline is a floor or minimum standard that is required. With respect to configuration baselines, they are security settings that are required on devices of various types. These settings should be driven by the results of vulnerability and risk management processes.

6. Step 1. Determine the priority of the patches and schedule the patches for deployment.
Step 2. Test the patches prior to deployment to ensure that they work properly and do not cause system or security issues.
Step 3. Install the patches in the live environment.
Step 4. After the patches are deployed, ensure that they work properly.

To ensure that all devices have the latest patches installed, you should deploy a formal system to ensure that all systems receive the latest updates after thorough testing in a nonproduction environment.

7. Compensating control. A compensating control, also known as a countermeasure or safeguard, reduces the potential risk.

8. Acceptable answers include the following:

Remove unnecessary applications.
Disable unnecessary services.
Block unrequired ports.
Tightly control the connecting of external storage devices and media (if it’s allowed at all).

Another of the ongoing goals of operations security is to ensure that all systems have been hardened to the extent that is possible and still provide functionality. The hardening can be accomplished on both physical and logical bases.

9.

Method | Definition
Risk transfer | Passing on the risk to a third party, such as an insurance company
Risk mitigation | Defining the acceptable risk level the organization can tolerate and reducing the risk to that level
Risk avoidance | Terminating the activity that causes a risk or choosing an alternative that is not as risky
Risk acceptance | Understanding and accepting the level of risk as well as the cost of damages that can occur
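Choosing among these responses often comes down to cost. A common quantitative sketch is annualized loss expectancy (ALE = SLE × ARO): compare the risk reduction a control delivers against what the control costs per year. The figures below are invented purely for illustration.

```python
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized loss expectancy: expected loss per year from one risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

def countermeasure_value(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Positive value means the control is worth its annual cost."""
    return ale_before - ale_after - annual_cost

before = ale(50_000, 0.4)   # 20,000 per year without the control (made-up numbers)
after = ale(50_000, 0.1)    # 5,000 per year with the control
print(countermeasure_value(before, after, 8_000))  # 7000.0 -> control pays for itself
```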

In many cases the response is dictated by balancing the value of the information against the cost of the countermeasure.

10. Acceptable answers include the following:

A false sense of security can be introduced because scans are not error free.
Many tools rely on a database of known vulnerabilities and are only as valid as the latest update.
Identifying vulnerabilities does not in and of itself reduce your risk or improve your security.

While vulnerability scanning is an advisable and valid process, these risks should be noted.

CHAPTER 4

Do I Know This Already?

1. B. Synthetic transaction monitoring, which is a type of proactive monitoring, uses external agents to run scripted transactions against an application. This type of monitoring is often preferred for websites and applications.

2. B. Qualys is an example of a cloud-based vulnerability scanner. Sensors are placed throughout the network, and they upload data to the cloud for analysis.

3. C. The steps in the software development life cycle (SDLC) are

Step 1. Plan/initiate project
Step 2. Gather requirements
Step 3. Design
Step 4. Develop
Step 5. Test/validate
Step 6. Release/maintain
Step 7. Certify/accredit
Step 8. Perform change management and configuration management/replacement

4. C. Network enumeration is the process of discovering and listing pieces of information that might be helpful in a network attack or compromise.

5. B. Aircrack-ng focuses on these areas of Wi-Fi security:

Monitoring: Packet capture and export of data to text files for further processing by third-party tools
Attacking: Replay attacks, deauthentication, fake access points, and others via packet injection
Testing: Checking Wi-Fi cards and driver capabilities (capture and injection)
Cracking: WEP and WPA PSK (WPA1 and 2)

6. B. ScoutSuite is a data collection tool that allows you to use longitudinal survey panels to track and monitor the cloud environment. ScoutSuite is open source and utilizes APIs made available by the cloud provider.

Review Questions

1. Open Web Application Security Project (OWASP). OWASP produces an interception proxy called Zed Attack Proxy (ZAP).

2.

Tools | Definitions
Burp | Can scan an application for vulnerabilities and can also be used to crawl an application (to discover content)
Nikto | Vulnerability scanner that is dedicated to web servers
ZAP | An interception proxy produced by OWASP
Arachni | A Ruby framework for assessing the security of a web application

3. Possible answers are as follows:

Installation costs are low because there is no installation and configuration for the client to complete.
Maintenance costs are low because there is only one centralized component to maintain, and it is maintained by the vendor (not the end client).
Upgrades are included in a subscription.
Costs are distributed among all customers.
It does not require the client to provide onsite equipment.

In the cloud-based approach, the vulnerability management platform is in the cloud.

4. Answer:

Step 1. Plan/initiate project
Step 2. Gather requirements
Step 3. Design
Step 4. Develop
Step 5. Test/validate
Step 6. Release/maintain
Step 7. Certify/accredit
Step 8. Perform change management and configuration management/replacement

The software development life cycle (SDLC) is a set of ordered steps to help ensure that software is developed to enhance both security and functionality.

5. Static. Static code analysis is done without the code executing. Code review and testing must occur throughout the entire SDLC.

6. Acceptable answers are as follows:

Data flow analysis: This analysis looks at runtime information while the software is in a static state.
Control flow graph: A graph of the components and their relationships can be developed and used for testing by focusing on the entry and exit points of each component or module.
Taint analysis: This analysis attempts to identify variables that are tainted with user-controllable input.
Lexical analysis: This analysis converts source code into tokens of information to abstract the code and make it easier to manipulate for testing purposes.

Static code review can be done with scanning tools that look for common issues. These tools can use a variety of approaches to find bugs. 7.
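The lexical analysis described in answer 6 can be illustrated with Python's standard tokenize module, which converts source text into tokens (names, operators, literals) that a scanner can inspect. The credential-hunting rule below is a deliberately naive, hypothetical example of how a static scanning tool might use those tokens:

```python
import io
import tokenize

# Hypothetical snippet to scan; tokenization works on any Python source text.
source = "password = 'hunter2'\n"

# Convert the source into a stream of (token type, token string) pairs.
tokens = [
    (tokenize.tok_name[tok.type], tok.string)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
]

# A naive scanner rule: flag names that look like hard-coded credentials.
for kind, text in tokens:
    if kind == "NAME" and "password" in text.lower():
        print("possible hard-coded credential near:", text)
```

Working on tokens rather than raw text is what lets such tools ignore formatting and comments and focus on the code's actual structure.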

Review types and definitions:
Reverse engineering: Analyzing a subject system to identify the system’s components and their interrelationships
Fuzzing: Injecting invalid or unexpected input
Real user monitoring: Monitoring method that captures and analyzes every transaction
Synthetic transaction monitoring: Runs scripted transactions against an application

8. Possible answers are as follows:
Implement fuzz testing to help identify problems.
Adhere to safe coding and project management practices.
Deploy application-level firewalls.
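Fuzz testing of the kind recommended above feeds a function random, malformed input and records any failure that is not a clean, handled rejection. A minimal sketch with a hypothetical parser under test:

```python
import random

def parse_age(text):
    """Toy input parser under test (hypothetical); bad input must fail cleanly."""
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

random.seed(1)                        # deterministic run for the sketch
crashes = []
for _ in range(1000):
    # Random printable garbage of random length: the simplest fuzz corpus.
    blob = "".join(chr(random.randint(32, 126)) for _ in range(random.randint(0, 8)))
    try:
        parse_age(blob)
    except ValueError:
        pass                          # expected, handled rejection
    except Exception as exc:          # anything else is a finding worth triaging
        crashes.append((blob, exc))

print("unexpected exceptions:", len(crashes))
```

Real fuzzers (AFL, libFuzzer, and the like) add coverage feedback and input mutation, but the core loop is the same: generate, feed, and watch for unhandled failures.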

9.

Tools and definitions:
nmap: When used for scanning, it typically locates the devices, locates the open ports on the devices, and determines the OS on each host
hping: Command-line-oriented TCP/IP packet assembler/analyzer
Responder: Tool that can be used for answering NBT and LLMNR name requests
Reaver: Used to attack Wi-Fi Protected Setup (WPS)
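nmap itself is a full-featured scanner, but the TCP connect test at its core can be sketched in a few lines of Python. The local listener here exists only to give the scan a deterministic, known-open target:

```python
import socket

# Stand up a local listener so the sketch has a known-open port to discover.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # port 0: the OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]

def scan(host, ports, timeout=0.2):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket() as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                found.append(port)
    return found

results = scan("127.0.0.1", [open_port])
print(results == [open_port])         # → True
listener.close()
```

nmap goes well beyond this (SYN scans, service and OS fingerprinting), but every connect scan reduces to this open/closed probe.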

10. Possible answers are as follows:
Amazon Web Services (AWS)
Microsoft Azure
Google Cloud Platform
Alibaba Cloud (alpha)
Oracle Cloud Infrastructure (alpha)

ScoutSuite is an open-source data collection and auditing tool used to assess and monitor the security posture of cloud environments. It utilizes APIs made available by the cloud provider.

CHAPTER 5 Do I Know This Already? 1. B. USB On-The-Go (USB OTG) is a specification first used in late 2001 that allows USB devices, such as tablets and smartphones, to act as either a USB host or a USB device. With respect to smartphones, USB OTG has been used to hack around an iPhone security feature that requires a valid iPhone username and password to use a device after a factory reset. 2. A. The five groups of IoT deployments are as follows:

Smart home: Includes products that are used in the home. They range from personal assistance devices, such as Amazon Alexa, to HVAC components, such as Nest thermostats. These devices are designed for home management and automation.
Wearables: Includes products that are worn by users. They range from watches, such as the Apple Watch, to personal fitness devices, such as the Fitbit.
Smart cities: Includes devices that help resolve traffic congestion issues and reduce noise, crime, and pollution. They include smart energy, smart transportation, smart data, smart infrastructure, and smart mobility devices.
Connected cars: Includes vehicles that include Internet access and data sharing capabilities. Technologies include GPS devices, OnStar, and AT&T connected cars.
Business automation: Includes devices that automate HVAC, lighting, access control, and fire detection for organizations.

3. C. An embedded system is a piece of software that is built into a larger piece of software and is in charge of performing some specific function on behalf of the larger system. The embedded part of the solution might address specific hardware communications and might require drivers to talk between the larger system and some specific hardware.
4. C. A real-time operating system (RTOS) is designed to process data as it comes in, typically without buffer delays. Traditionally, security hasn’t been a top concern in the design of RTOSs and, consequently, some vulnerabilities have surfaced. For example, VxWorks 6.5 and later versions have been found to be susceptible to a vulnerability that allows remote attackers full control over targeted devices.
5. C. Systems on a chip (SoCs) have become typical inside cell phone electronics because of their reduced energy use. An example is a baseband processor, which is a chip in a network interface that manages all the radio functions. A baseband processor typically uses its own RAM and firmware.

6. B. A field programmable gate array (FPGA) is a type of programmable logic device (PLD) that is programmed by blowing fuse connections on the chip or using an antifuse that makes a connection when a high voltage is applied to the junction. (A PLD is an integrated circuit with connections or internal logic gates that can be changed through a programming process.)
7. C. With a mantrap, the user is authenticated at the first door and then allowed into the room. At that point, additional verification occurs (such as a guard visually identifying the person), and then the person is allowed through the second door.
8. A. HVAC systems usually use a protocol called Building Automation and Control Networks (BACnet), which is an application, network, and media access control (MAC) layer communications service. It can operate over a number of Layer 2 protocols, including Ethernet.
9. A. Controller Area Network (CAN bus) is designed to allow vehicle microcontrollers and devices to communicate with each other’s applications without a host computer.
10. B. Automation tools such as Puppet, Chef, and Ansible, along with scripting, are automating once-manual networking tasks such as log analyses, patch application, and intrusion prevention.
11. A. The Incident Command System (ICS) is designed to provide a way to enable effective and efficient domestic incident management by integrating a combination of facilities, equipment, personnel, procedures, and communications operating within a common organizational structure.
12. B. An industrial control system includes the following components:

Sensors: Sensors typically have digital or analog I/O and are not in a form that can be easily communicated over long distances.
Remote terminal units (RTUs): RTUs connect to the sensors and convert sensor data to digital data, including telemetry hardware.
Programmable logic controllers (PLCs): PLCs connect to the sensors and convert sensor data to digital data; they do not include telemetry hardware.
Telemetry system: Such a system connects RTUs and PLCs to control centers and the enterprise.
Human interface: Such an interface presents data to the operator.

Review Questions
1. Possible answers include the following:
Insecure web browsing
Insecure Wi-Fi connectivity
Lost or stolen devices holding company data
Corrupt application downloads and installations
Missing security patches
Constant upgrading of personal devices
Use of location services

While the most common types of corporate information stored on personal devices are corporate emails and company contact information, it is alarming to note that almost half of these devices also contain customer data, network login credentials, and corporate data accessed through business applications. 2. A lost or stolen device containing irreplaceable or sensitive data. Organizations should ensure that they can remotely wipe the device when this occurs. 3.

Terms and definitions:
USB OTG: A specification first used in late 2001 that allows USB devices, such as tablets or smartphones, to act as either a USB host or a USB device
BYOD: Policies designed to allow personal devices in the network
MDM: Used to control mobile device settings, applications, and other parameters when those devices are attached to the enterprise network
ICS: Designed to provide a way to enable effective and efficient domestic incident management by integrating a combination of facilities, equipment, personnel, procedures, and communications operating within a common organizational structure

4. The IoT has presented attackers with a new medium through which to carry out an attack. Often the developers of the IoT devices add the IoT functionality without thoroughly considering the security implications of such functionality or without building in any security controls to protect the IoT devices. 5. Geotagging. Geotagging is the process of adding geographical identification metadata to various media and is enabled by default on many smartphones (to the surprise of some users). In many cases, this location information can be

used to locate where images, videos, websites, and SMS messages originate. 6.

Terms and definitions:
Embedded system: A piece of software built into a larger piece of software
SoC: An integrated circuit (also known as a chip) that integrates all components of a computer or other electronic system
RTU: Industrial control system component that connects to the sensors and converts sensor data to digital data, including telemetry
SCADA: A system that operates with coded signals over communication channels to provide control of remote equipment

7. Short Message Service (SMS) technologies present security challenges. Because messages are sent in clear text, they are susceptible to spoofing and spamming.
8. Possible answers include the following:
Smart home: Includes products that are used in the home. They range from personal assistance devices, such as Amazon Alexa, to HVAC components, such as Nest thermostats. These devices are designed for home management and automation.
Wearables: Includes products that are worn by users. They range from watches, such as the Apple Watch, to personal fitness devices, like the Fitbit.
Smart cities: Includes devices that help resolve traffic congestion issues and reduce noise, crime, and pollution. They include smart energy, smart transportation, smart data, smart infrastructure, and smart mobility devices.
Connected cars: Includes vehicles that include Internet access and data sharing capabilities. Technologies include GPS devices, OnStar, and AT&T connected cars.
Business automation: Includes devices that automate HVAC, lighting, access control, and fire detection for organizations.

IoT deployments include a wide variety of devices, but are broadly categorized into these five groups. 9. mantrap. The user is authenticated at the first door and then allowed into the room. At that point, additional verification occurs (such as a guard visually identifying the person), and then the person is allowed through the second door. 10. BACnet/IP (B/IP). The BACnet standard makes exclusive use of MAC addresses for all data links, including Ethernet. To support IP, IP addresses are needed, which is why B/IP was developed.

CHAPTER 6 Do I Know This Already? 1. D. A hybrid cloud is a cloud computing model in which an organization provides and manages some resources in-house and has others provided externally via a public cloud. This model requires a relationship with the service provider as well as an in-house cloud deployment specialist. 2. B. With Platform as a Service (PaaS), the vendor provides the hardware platform or data center and the software running on the platform, including the operating systems and infrastructure software. The company is still involved in managing the system. An example of this is a company that

contacts a third party to provide a development platform for internal developers to use for development and testing.
3. A. Function as a Service (FaaS) is an extension of Platform as a Service (PaaS) that goes further and completely abstracts the virtual server from the developers.
4. A. In another reordering of the way data centers are handled, Infrastructure as Code (IaC) manages and provisions computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
5. A. In-memory processing is an approach in which all data in a set is processed from memory rather than from the hard drive. It assumes that all the data will be available in memory rather than just the most recently used data, as is usually done using RAM or cache memory. This results in faster reporting and decision making in business. Securing this approach requires encrypting the data in RAM. The Data Protection API (DPAPI) lets you encrypt data using the user’s login credentials.
6. A. NIST SP 800-57 Rev. 5 contains recommendations for key management in three parts:
Part 1: This publication covers general recommendations for key management.
Part 2: This publication covers the best practices for a key management organization.
Part 3: This publication covers application-specific key management guidance.

7. B. Interfaces and application programming interfaces (APIs) tend to be the most exposed parts of a system because they’re usually accessible from the open Internet. 8. B. Without proper auditing, you have no accountability.

Review Questions
1. Software as a Service (SaaS). With SaaS, the vendor provides an end-to-end solution. The vendor may provide an email system, for example, in which it hosts and manages everything for the customer.
2.

Terms and definitions:
FaaS: Completely abstracts the virtual server from the developers
IaC: Manages and provisions computer data centers through machine-readable definition files
PaaS: The vendor provides the hardware platform or data center and the software running on the platform, including the operating systems and infrastructure software
IaaS: The vendor provides the hardware platform or data center, and the customer installs and manages its own operating systems and application systems

3. Possible answers are
Lower cost
Faster speed
Risk reduction (remove errors and security violations)

In another reordering of the way data centers are handled, Infrastructure as Code (IaC) manages and provisions computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
4. Application programming interfaces (APIs). With respect to APIs, a host of approaches—including Simple Object Access Protocol (SOAP), Representational State Transfer (REST), and JavaScript Object Notation (JSON)—are available, and many enterprises find themselves using all of them.
5. Internet of Things (IoT). APIs are used in the IoT so that devices can speak to each other without users even knowing they are there. APIs are used to control and monitor things we use every day, including fitness bands, home thermostats, lighting, and automobiles.
6. Answers can include the following:
Function event data injection: Triggered through untrusted input such as through a web API call
Broken authentication: Coding issues ripe for exploit and attacks, which lead to unauthorized authentication
Insecure serverless deployment configuration: Human error in setup
Over-privileged function permissions and roles: Failure to implement the least privilege concept

7.

Terms and definitions:
Preactivation state: Key has been generated but has not been authorized for use
Suspended state: Temporarily inactive
Deactivated state: Keys are not used to apply cryptographic protection, but in some cases, they may be used to process cryptographically protected information
Active state: Key may be used to cryptographically protect information
Compromised state: Discovered by an unauthorized entity

8. pre-operational. In the pre-operational phase, the keying material is not yet available for normal cryptographic operations. Keys may not yet be generated or are in the preactivation state. System or enterprise attributes are established during this phase as well.
9. Possible answers are as follows:
Data breaches: Although cloud providers may include safeguards in service-level agreements (SLAs), ultimately the organization is responsible for protecting its own data, regardless of where it is located. When this data is not in your hands—and you may not even know where it is physically located at any point in time—protecting your data is difficult.
Authentication system failures: These failures allow malicious individuals into the cloud. This issue sometimes is made worse by the organization itself when developers embed credentials and cryptographic keys in source code and leave them in public-facing repositories.
Weak interfaces and APIs: Interfaces and application programming interfaces (APIs) tend to be the most exposed parts of a system because they’re usually accessible from the open Internet.

10. Big data. Big data is a term for sets of data so large or complex that they cannot be analyzed by using traditional data processing applications. Specialized applications have been designed to help organizations with their big data. The big data challenges that may be encountered include data analysis, data capture, data search, data sharing, data storage, and data privacy.

CHAPTER 7 Do I Know This Already?
1. B. To address XML-based attacks, consider eXtensible Access Control Markup Language (XACML), which is a standard for an access control policy language using XML. Its goal is to create an attribute-based access control (ABAC) system that decouples the access decision from the application or the local machine.
2. A. A SQL injection attack inserts, or “injects,” a SQL query as the input data from the client to the application. This type of attack can result in reading sensitive data from the database, modifying database data, executing administrative operations on the database, recovering the content of a given file, and even issuing commands to the operating system.
3. C. A null-pointer dereference takes place when a pointer with a value of NULL is used as though it pointed to a valid memory area. If an attacker can intentionally trigger a null-pointer dereference, the attacker might be able to use the resulting exception to bypass security logic or to cause the application to reveal debugging information.
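The SQL injection described in answer 2 comes down to string concatenation versus parameterization. A minimal sketch using Python's built-in sqlite3 module (the table and the payload are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"   # classic injection payload

# Vulnerable pattern: concatenation lets the payload rewrite the query logic.
unsafe = "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
leaked = conn.execute(unsafe).fetchall()          # returns every row

# Safe pattern: a parameterized query treats the payload as a literal value.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()                                      # returns nothing

print(len(leaked), len(safe))                     # → 1 0
```

The parameterized form never lets user input reach the query parser, which is why it is the standard defense regardless of database engine.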

4. A. A race condition is an attack in which the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome. A type of race condition is time-of-check/time-of-use. In this attack, a system is changed between a condition check and the display of the check’s results.
Review Questions
1. policy enforcement point (PEP). When the PEP receives a request from a subject, it creates an XACML request based on the attributes of the subject, the requested action, the resource, and other information.
2.

Terms and definitions:
XACML: A standard for an access control policy language using XML
PDP: Retrieves all applicable policies in XACML and compares the request with the policies
SQL injection: Type of attack that can result in reading sensitive data from the database, modifying database data, and executing administrative operations on the database
Overflow: When an area of memory of some sort is full and can hold no more information

By leveraging XACML, developers can remove authorization logic from an application and centrally manage access using policies that can be managed or modified based on business need without making any additional changes to the applications themselves. 4. Integer overflow. Integer overflow occurs when math operations try to create a numeric value that is too large for the available space. The register width of a processor determines the range of values that can be represented. 5.

Terms and definitions:
Heap: An area of memory that can be increased or decreased in size
Directory traversal: One of the ways malicious individuals are able to access parts of a directory to which they should not have access
Password spraying: Technique used to identify the passwords of domain users
Dynamic ARP Inspection (DAI): Feature that can prevent man-in-the-middle attacks
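Directory traversal, listed in the table above, is typically blocked by normalizing the requested path and confirming it stays inside an allowed base directory. A sketch using pathlib (the web-root path is hypothetical):

```python
from pathlib import Path

BASE = Path("/var/www/files").resolve()   # hypothetical web root

def is_safe(user_supplied):
    """Reject any path that escapes the base directory after normalization."""
    target = (BASE / user_supplied).resolve()
    return target == BASE or BASE in target.parents

print(is_safe("report.txt"))          # → True
print(is_safe("../../etc/passwd"))    # → False (normalizes outside the root)
```

The key step is resolving the path first: checking the raw string for "../" sequences misses encoded and symlinked variants that normalization catches.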

6. Possible answers include the following:
Guessing the session ID: This involves gathering samples of session IDs and guessing a valid ID assigned to another user’s session.
Using a stolen session ID: Although TLS/SSL connections hide these IDs, many sites do not require an SSL connection using session ID cookies. They also can be stolen through XSS attacks and by gaining physical access to the cookie stored on a user’s computer.
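Session ID guessing works only when IDs are predictable. A sketch of generating unguessable IDs with Python's secrets module, which draws from a cryptographically secure random source:

```python
import secrets

# Session IDs must come from a CSPRNG so earlier samples reveal nothing
# about future ones; counters and timestamps are predictable by design.
session_id = secrets.token_urlsafe(32)      # 32 random bytes, URL-safe text

# Two independently generated IDs should never collide in practice.
another = secrets.token_urlsafe(32)
print(len(session_id), session_id != another)   # → 43 True
```

Frameworks generally do this for you; the point of the sketch is that 256 bits of CSPRNG output makes the "gather samples and guess" attack described above computationally hopeless.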

7. Steal a cookie from an authenticated user. Many websites allow and even incorporate user input into a web page to customize the web page. If a web application does not properly validate this input, one of two things could happen: the text may be rendered on the page, or a script may be executed when others visit the web page. 8.

Terms and definitions:
Improper error handling: Can cause disclosure of information
Dereferencing: Can allow an attacker to use the resulting exception to bypass security logic
Race condition: Attack in which the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome
Default configuration: Configuration in which settings from the factory have not been changed

9. Possible answers include the following:
Report the status of change processing.
Document the functional and physical characteristics of each configuration item.
Perform information capture and version control.
Control changes to the configuration items, and issue versions of configuration items from the software library.

Although it’s really a subset of change management, configuration management specifically focuses on bringing order out of the chaos that can occur when multiple engineers and technicians have administrative access to the computers and devices that make the network function.
10. strcpy. It copies the C string pointed to by source into the array pointed to by destination, including the terminating null character (and stopping at that point). The issue is that if the destination is not long enough to contain the string, we get an overrun.
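strcpy's overrun exists because it performs no bounds check. This Python sketch models the destination buffer as a fixed-size bytearray and adds the length check that safer strncpy-style functions provide (a loose analogy for illustration, not real C memory semantics):

```python
# strcpy() performs no bounds check; a checked copy refuses to overrun instead.
def checked_copy(dest, src):
    """Copy src (plus NUL terminator) into dest, or fail loudly."""
    if len(src) + 1 > len(dest):          # the check strcpy never makes
        raise ValueError("destination too small")
    dest[: len(src)] = src
    dest[len(src)] = 0                    # terminating null character
    return dest

buf = bytearray(8)
checked_copy(buf, b"short")               # fits: 5 bytes + NUL into 8
try:
    checked_copy(bytearray(4), b"overrun")  # 7 bytes + NUL into 4: rejected
except ValueError as e:
    print(e)                              # → destination too small
```

In C the unchecked version would silently write past the buffer into adjacent memory, which is exactly the overrun the answer describes.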

CHAPTER 8 Do I Know This Already?

1. B. Multitenancy in a cloud does not necessarily prevent residual data of former tenants from being exposed in storage space assigned to new tenants. In fact, that is one of the dangers of multitenancy.
2. B. Geotagging involves marking a video, photo, or other digital media with a GPS location. This feature has received criticism recently because attackers can use it to pinpoint personal information, such as the location of a person’s home.
3. C. While systems in the DMZ typically require no authentication, the resources in the extranet do.
4. C. A bastion host may or may not be a firewall. The term actually refers to the position of any device. If the device is exposed directly to the Internet or to any untrusted network while screening the rest of the network from exposure, it is a bastion host.
5. B. Each request should not be approved as quickly as possible. Each request should be analyzed to ensure it supports all goals and policies.
6. A. A Type 1 hypervisor is virtualization software that is installed on hardware directly, which is why it is commonly called a bare metal hypervisor. A guest operating system runs on another level above the hypervisor. Examples of Type 1 hypervisors are Citrix XenServer, Microsoft Hyper-V, and VMware vSphere.
7. A. A newer approach to virtualization is referred to as container-based virtualization, also called operating system virtualization. Containerization is a technique in which the kernel allows for multiple isolated user space instances. The instances are known as containers, virtual private servers, or virtual environments.
8. C. Characteristic factor authentication is authentication that is provided based on something a person is. This type of

authentication is referred to as a Type III authentication factor. Biometric technology is the technology that allows users to be authenticated based on physiological or behavioral characteristics. 9. B. A cloud security broker, or cloud access security broker (CASB), is a software layer that operates as a gatekeeper between an organization’s on-premises network and the provider’s cloud environment. It can provide many services in this strategic position. 10. B. The ultimate purpose of honeypot systems is to divert attention from more valuable resources and to gather as much information about an attack or attacker as possible. 11. C. According to NIST SP 800-137, information security continuous monitoring (ISCM) is defined as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions. 12. B. Hash functions do not prevent data alteration but provide the best method to determine whether data alteration has occurred. 13. C. Any participant that requests a certificate must first go through the registration authority (RA), which verifies the requestor’s identity and registers the requestor. After the identity is verified, the RA passes the request to the certificate authority (CA). 14. A. Hunt teams work together to detect, identify, and understand advanced and determined threat actors. A hunt team is a costly investment on the part of an organization. Review Questions 1. Asset tagging. Asset tagging can also be a part of a more robust asset tracking system when implemented in such a

way that the device can be tracked and located at any point in time. 2. Possible answers include DMZ, extranet, VLANs, and subnets. One of the best ways to protect sensitive resources is to utilize network segmentation. When you segment a network, you create security zones that are separated from one another by devices such as firewalls and routers that can be used to control the flow of traffic between the zones. 3.
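Answer 12 above makes the point that hash functions detect, rather than prevent, data alteration. A minimal sketch with Python's hashlib (the file contents are hypothetical):

```python
import hashlib

original = b"quarterly-report-v1"
received = b"quarterly-report-v2"          # altered in transit

def digest(data):
    """SHA-256 fingerprint of the data."""
    return hashlib.sha256(data).hexdigest()

# Matching digests indicate the data is intact; differing digests prove
# alteration occurred -- but the hash could not have prevented the change.
print(digest(original) == digest(original))   # → True
print(digest(original) == digest(received))   # → False
```

In practice the reference digest must itself be protected (for example, delivered over a separate authenticated channel, or signed), or an attacker who alters the data can simply recompute it.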

Terms and definitions:
Jump box: A server that is used to access devices that have been placed in a secure network zone such as a DMZ
System isolation: Systems isolated from other systems through the control of communications with the device
Air gap: Device with no network connections; all access to the system must be done manually by adding and removing updates and patches with a flash drive or other external device
Bastion host: Device exposed directly to the Internet or to any untrusted network
Dual-homed firewall: Firewall with two network interfaces: one pointing to the internal network and another connected to the untrusted network

4. Screened subnet. In a screened subnet, two firewalls are used, creating a subnet between them that is screened both from the internal network and the Internet. 5. Answers can include any of the three planes: The control plane carries signaling traffic originating from or destined for a router. This is the information that allows routers to share information and build routing tables. The data plane, also known as the forwarding plane, carries user traffic. The management plane administers the router.

6.

Terms and definitions:
VSAN: Software-defined storage method that allows pooling of storage capabilities and instant and automatic provisioning of virtual machine storage
VPC: Cloud model in which a public cloud provider isolates a specific portion of its public cloud infrastructure to be provisioned for private use
VLAN: Logical segmentation on a switch at Layers 2 and 3
VPN: Allows external devices to access an internal network by creating a tunnel over the Internet

7. Internet Security Association and Key Management Protocol (ISAKMP). ISAKMP creates a security association (SA) for each connection, enabling multiple IPsec connections at a time.
8. Possible answers are
Data is encrypted.
SSL/TLS is supported on all browsers.
Users can easily identify its use (via https://).

SSL/TLS is often used to protect other protocols. FTP Secure (FTPS), for example, uses SSL/TLS to secure file transfers between hosts.
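A sketch of the client-side SSL/TLS setup described above, using Python's standard ssl module; the defaults shown are what give users the https:// assurances mentioned:

```python
import ssl

# A client-side context with certificate validation and hostname checking
# enabled, which is what browsers do behind the https:// indicator.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # → True
print(context.check_hostname)                    # → True

# context.wrap_socket(sock, server_hostname="example.com") would then
# perform the TLS handshake over an existing TCP socket.
```

Disabling either default (as some quick-fix snippets suggest) silently removes the authentication half of TLS, leaving only encryption to an unverified peer.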

Terms and definitions:
Type 1 hypervisor: Hypervisor installed on bare metal
Containerization: Virtualization method that does not use a hypervisor
VDI: Hosting desktop operating systems within a virtual environment in a centralized server
Type 2 hypervisor: Hypervisor installed over an operating system

10. Ownership factors. Ownership factor authentication is authentication that is provided based on something that a person has. This type of authentication is referred to as a Type II authentication factor.

CHAPTER 9 Do I Know This Already?
1. C. Most mobile device management (MDM) software can create an encrypted “container” to hold and quarantine corporate data separately from the users’ personal data. This allows MDM policies to be applied only to that container and not to the rest of the device.
2. B. The software development life cycle steps are as follows:
Step 1. Plan/initiate project
Step 2. Gather requirements
Step 3. Design
Step 4. Develop
Step 5. Test/validate
Step 6. Release/maintain
Step 7. Certify/accredit
Step 8. Change management and configuration management/replacement
3. B. Traditionally, three main actors in the software development process—development (Dev), quality assurance (QA), and operations (Ops)—performed their functions separately, or operated in “silos.” In DevOps they work together on all steps of the process.

4. B. Regression testing is done to verify functionality after making a change to the software. Security regression testing is a subset of regression testing that validates that changes have not reduced the security of the application or opened new weaknesses.
5. C. Encoding is the process of changing data into another form using code. When this process is applied to output, it is done to prevent the inclusion of dangerous character types that might be inserted by malicious individuals.
6. B. Because a static state means the software is not running, it is a type of code review.
7. B. Synthetic transaction monitoring, which is a type of proactive monitoring, is often preferred for websites and applications. It provides insight into the application’s availability and performance, warning of any potential issue before users experience any degradation in application behavior.
8. B. Formal methods can be used at a number of levels:
Level 0: Formal specification may be undertaken and then a program developed from this informally. This is the least formal method and the least expensive to undertake.
Level 1: Formal development and formal verification may be used to produce a program in a more formal manner. For example, proofs of properties or refinement from the specification to a program may be undertaken. This may be most appropriate in high-integrity systems involving safety or security.
Level 2: Theorem provers may be used to undertake fully formal machine-checked proofs. This can be very expensive and is only practically worthwhile if the cost of mistakes is extremely high (e.g., in critical parts of microprocessor design).
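The output encoding from answer 5 can be sketched with Python's standard html module; the script payload is a hypothetical example of untrusted input:

```python
import html

# Untrusted input containing a cookie-stealing script payload (hypothetical).
user_input = "<script>location='http://evil.example/?c='+document.cookie</script>"

# Output encoding turns the dangerous characters into harmless entities
# before the value is rendered into a page, so the browser displays the
# text instead of executing it.
encoded = html.escape(user_input)
print("<script>" in encoded)               # → False
print(encoded.startswith("&lt;script&gt;"))  # → True
```

Template engines apply this kind of escaping automatically, but the principle is the same: encode at output time, in the context (HTML, attribute, URL, JavaScript) where the data will land.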

9. D. Representational State Transfer (REST) is a client/server model for interacting with content on remote systems, typically using HTTP. It involves accessing and modifying existing content and also adding content to a system in a particular way.
Review Questions
1. Corporate-owned, personally enabled (COPE). COPE is a strategy in which an organization purchases mobile devices and users manage those devices. By using a COPE strategy, organizations can often monitor and control the users’ activity to a larger degree than with personally owned devices.
2. Possible answers include the following:
Application vetting process
Application intake process
Application testing process
Application approval/rejection process
Results submission process
App re-vetting process

To help ensure that an app conforms to such requirements, a process for evaluating the security of apps should be performed. 3.

Terms

Definitions

Maintenance hooks

A set of instructions built into the code that allows someone who knows about the so-called backdoor to use the instructions to connect to view and edit the code without using the normal access controls

Time-of-check/time-of-use attacks

Attack that attempts to take advantage of the sequence of events that occurs as the system completes common tasks

Cross-site request forgery (CSRF)

Attack that causes an end user to execute unwanted actions on a web application in which he is currently authenticated

Clickjacking

Attack that crafts a transparent page or frame over a legitimate-looking page that entices the user to click something

4. Representational State Transfer (REST). REST involves accessing and modifying existing content and also adding content to a system. REST does not require a specific message format during HTTP resource exchanges.

5. Possible answers include the following:
Size: REST/JSON is a lot smaller and less bloated than SOAP/XML. Therefore, much less data is passed over the network, which is particularly important for mobile devices.
Efficiency: REST/JSON makes it easier to parse data, thereby making it easier to extract and convert the data. As a result, it requires much less from the client’s CPU.
Caching: REST/JSON provides improved response times and server loading due to support from caching.
Implementation: REST/JSON interfaces are much easier than SOAP/XML to design and implement.
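The parsing advantage of REST/JSON can be illustrated with Python's standard library; the payload below is a made-up example of what a REST API might return:

```python
import json

# A small JSON payload such as a REST API might return (illustrative values).
payload = '{"id": 42, "status": "active", "tags": ["mobile", "secure"]}'

# json.loads turns the text into native Python objects in one call,
# with no envelope or schema to unwrap, unlike a SOAP/XML message.
record = json.loads(payload)

print(record["status"])     # active
print(len(record["tags"]))  # 2
```

An equivalent SOAP message would wrap the same data in an XML envelope with namespaces and headers, which is why JSON is typically smaller on the wire and cheaper to parse.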

6.

Terms

Definitions

Embedded system

A computer system with a dedicated function within a larger system

SoC

An integrated circuit that includes all components of a computer or another electronic system

SDLC

Provides a predictable framework of procedures designed to identify all requirements with regard to functionality, cost, reliability, and delivery schedule and ensure that each is met in the final solution

DevSecOps

Development concept, emphasizing security, that grew out of the DevOps approach

7. Stress testing. Stress testing determines the workload that the application can withstand. These tests should always have defined objectives before testing begins.

8. Possible answers include the following: Formal review: This is an extremely thorough, line-by-line inspection, usually performed by multiple participants using multiple phases. This is the most time-consuming type of code review but the most effective at finding defects. Lightweight review: This type of code review is much more cursory than a formal review. It is usually done as a normal part of the development process. It can happen in several forms: Pair programming: Two coders work side by side, checking one another’s work as they go. E-mail review: Code is e-mailed around to colleagues for them to review when time permits. Over the shoulder: Coworkers review the code while the author explains his or her reasoning. Tool-assisted: Using automated testing tools is perhaps the most efficient method.

9.

Terms

Definitions

Regression testing

Testing the security after a change is made to the software

Gray-box testing

Also called translucent testing, as the tester has partial knowledge

White-box testing

Internal workings of the application are fully known

Black-box testing

Internal workings of the application are not known

10. URL encoding. Best known is the UTF-8 character encoding standard, which is a variable-length encoding (1, 2, 3, or 4 units of 8 bits, hence the name UTF-8).
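The variable-length encoding described in answer 10 can be demonstrated with Python's standard library, along with the percent-encoding used in URLs:

```python
from urllib.parse import quote

# UTF-8 encodes characters with 1 to 4 bytes depending on the code point.
print(len("A".encode("utf-8")))   # 1 byte (ASCII)
print(len("é".encode("utf-8")))   # 2 bytes
print(len("€".encode("utf-8")))   # 3 bytes
print(len("😀".encode("utf-8")))  # 4 bytes

# URL encoding percent-encodes each UTF-8 byte of a reserved
# or non-ASCII character.
print(quote("a b"))  # a%20b
print(quote("é"))    # %C3%A9
```

Note that %C3%A9 is exactly the two UTF-8 bytes of é, which is why URL encoding and UTF-8 are discussed together.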

CHAPTER 10 Do I Know This Already? 1. A. NIST SP 800-164 is a draft Special Publication that gives guidelines on hardware-rooted security in mobile devices. It defines three required security components for mobile devices: Roots of Trust (RoTs), an application programming interface (API) to expose the RoTs to the platform, and a Policy Enforcement Engine (PEnE). 2. B. An eFuse allows for the dynamic real-time reprogramming of computer chips. Utilizing a set of eFuses, a chip manufacturer can allow for the circuits on a chip to change while it is in operation. 3. C. The traditional BIOS has been replaced with the Unified Extensible Firmware Interface (UEFI). UEFI maintains support for legacy BIOS devices but is considered a more advanced interface than the traditional BIOS. 4. D. The Trusted Foundry program can help you exercise care in ensuring the authenticity and integrity of the components of hardware purchased from a vendor. This DoD program identifies “trusted vendors” and ensures a “trusted supply chain.” 5. A. A secure enclave is a part of an operating system that cannot be compromised even when the operating system kernel is compromised, because the enclave has its own CPU and is separated from the rest of the system. 6. B. Anti-tamper technology is designed to prevent access to sensitive information and encryption keys on a device. Antitamper processors, for example, store and process private or sensitive information, such as private keys or electronic

money credit. The chips are designed so that the information is not accessible through external means and can be accessed only by the embedded software, which should contain the appropriate security measures, such as required authentication credentials. 7. A. Self-encrypting drives do exactly as the name implies: they encrypt themselves without any user intervention. It is so transparent to the user that the user may not even be aware the encryption is occurring. It uses a unique and random data encryption key (DEK). 8. A. Obtain firmware updates only from the vendor directly. Never use a third-party facilitator for this. Also make sure you verify the hash value that comes along with the update to ensure that it has not been altered since its creation. 9. D. Measured Boot, also known as Secure Boot, is a term that applies to several technologies that follow the Secure Boot standard. Its implementations include Windows Secure Boot, measured launch, and Integrity Measurement Architecture (IMA). 10. B. Bus encryption is necessary not only to prevent tampering of encrypted instructions that may be easily discovered on a data bus or during data transmission, but also to prevent discovery of decrypted instructions that may reveal security weaknesses that an intruder can exploit. Review Questions 1. API. This provides application developers a set of security services and capabilities they can use to secure their applications and protect the data they process. 2. Possible answers include the following: Endorsement key (EK): The EK is persistent memory installed by the manufacturer that contains a public/private key pair.

Storage root key (SRK): The SRK is persistent memory that secures the keys stored in the TPM.
Attestation identity key (AIK): The AIK is versatile memory that ensures the integrity of the EK.
Platform configuration register (PCR) hash: A PCR hash is versatile memory that stores data hashes for the sealing function.
Storage keys: A storage key is versatile memory that contains the keys used to encrypt the computer’s storage, including hard drives, USB flash drives, and so on.

A TPM chip consists of both static memory and versatile memory that is used to retain the important information when the computer is turned off. 3.

Terms

Definitions

Virtual TPM

A software object that performs the functions of a TPM chip

HSM

An appliance that safeguards and manages digital keys used with strong authentication and provides crypto processing

eFuse

Allows for the dynamic real-time reprogramming of computer chips

UEFI

A more advanced interface than traditional BIOS

4. Secure Boot. Secure Boot requires that all boot loader components are found on the trusted list.

5. Intel Software Guard Extensions (SGX). It defines private regions of memory, called enclaves, whose contents are protected and unable to be either read or saved by any process outside the enclave itself, including processes running at higher privilege levels. 6.

Terms

Definitions

Firmware

Any type of instructions stored in non-volatile memory devices such as read-only memory (ROM)

Atomic execution

Using synchronization mechanisms to make sure that the operation is seen, from any other thread, as a single operation

Measured Boot

Process where the firmware verifies all UEFI executable files and the OS loader to be sure they are trusted

Bus encryption

Used by newer Microsoft operating systems to protect certificates, BIOS, passwords, and program authenticity

7. Integrity Measurement Architecture (IMA). Anchoring the list to the TPM chip in hardware prevents its compromise. 8. It prevents installing any other operating systems or running any live Linux media. 9.

Terms

Definitions

NX bit

Method for specifying areas of memory that cannot be used for execution

Random data encryption key (DEK)

Used to encrypt self-encrypting drives

XN bit

Technology used in CPUs to segregate areas of memory for use by either storage of processor instructions (code) or storage of data

Trusted Execution (TE)

A collection of features that is used to verify the integrity of the system and implement security policies, which together can be used to enhance the trust level of the complete system

10. Unified Extensible Firmware Interface (UEFI). UEFI maintains support for legacy BIOS devices, but is considered a more advanced interface than traditional BIOS.

CHAPTER 11 Do I Know This Already? 1. A. Heuristics is often utilized by antivirus software to identify threats that signature analysis can’t discover because the threats either are too new to have been analyzed (called zero-day threats) or are multipronged attacks that are constructed in such a way that existing signatures do not identify them. 2. B. The identification of threats based on behavior that typically accompanies such threats is a characteristic of heuristics, not trend analysis. 3. C. According to NIST SP 800-128, endpoints (for example, laptops, desktops, mobile devices) are a fundamental part of

any organizational system. Endpoints are an important source of connecting end users to networks and systems, and are also a major source of vulnerabilities and a frequent target of attackers looking to penetrate a network.

4. D. urlQuery is a free online service for testing and analyzing URLs, helping with identification of malicious content on websites.

5. A. Syslog provides a simple framework for log entry generation, storage, and transfer that any OS, security software, or application could use if designed to do so.

6. B. The purpose of determining the impact is to
Identify what systems were impacted
Determine what role the quality of the response played in the severity of the issue
For the future, associate the attack type with the systems that were impacted

7. C. In a transitive or tracking rule, the target in the first event (N malware infection) becomes the source in the second event (malware infection of another machine). This is typically used in worm/malware outbreak scenarios. 8. D. String searches are used to look within a log file or data stream and locate any instances of that string. A string can be any combination of letters, numbers, and other characters. 9. A. DomainKeys Identified Mail (DKIM) enables you to verify the source of an e-mail. DKIM provides a method for validating a domain name identity that is associated with a message through cryptographic authentication. Review Questions

1. Mobile code. Organizations should exercise caution in allowing the use of mobile code such as ActiveX, Java, and JavaScript. An attacker can easily attach a script to a URL in a web page or e-mail that, when clicked, executes malicious code within the computer’s browser. 2. Answers can include the following: Boot sector: This type of virus infects the boot sector of a computer and either overwrites files or installs code into the sector so that the virus initiates at startup. Parasitic: This type of virus attaches itself to a file, usually an executable file, and then delivers the payload when the program is used. Stealth: This type of virus hides the modifications that it is making to the system to help avoid detection. Polymorphic: This type of virus makes copies of itself, and then makes changes to those copies. It does this in hopes of avoiding detection from antivirus software. Macro: This type of virus infects programs written in Word, Basic, Visual Basic, or VBScript that are used to automate functions. Macro viruses infect Microsoft Office files and are easy to create because the underlying language is simple and intuitive to apply. They are especially dangerous in that they infect the operating system itself. They also can be transported between different operating systems because the languages are platform independent. Multipartite: Originally, these viruses could infect both program files and boot sectors. This term now means that the virus can infect more than one type of object or can infect in more than one way. File or system infector: File infectors infect program files, and system infectors infect system program files. Companion: This type of virus does not physically touch the target file. It is also referred to as a spawn virus. E-mail: This type of virus specifically uses an e-mail system to spread itself because it is aware of the e-mail system functions. 
Knowledge of the functions allows this type of virus to take advantage of all e-mail system capabilities.

Script: This type of virus is a stand-alone file that can be executed by an interpreter.

3.

Terms

Definitions

Rootkit

A set of tools that a hacker can use on a computer after he has managed to gain access and elevate his privileges to administrator

Ransomware

Prevents or limits users from accessing their systems until they pay money

Reverse engineering

Taking something apart to discover how it works and perhaps to replicate it

Sandbox

Place where it is safe to probe and analyze malware

4. Secured memory. Based on the nature of data in a partition, the partition can be designated as a security-sensitive or a non-security-sensitive partition. In a security breach (such as tamper detection), the contents of a security-sensitive partition can be erased by the controller itself, while the contents of the non-security-sensitive partitions can remain unchanged. 5. Possible answers include the following: Phishing: A social engineering attack in which attackers try to learn personal information, including credit card information and financial data. This type of attack is usually carried out by

implementing a fake website that very closely resembles a legitimate website. Users enter data, including credentials, on the fake website, allowing the attackers to capture any information entered. Spear phishing: A phishing attack carried out against a specific target by learning about the target’s habits and likes. Spear phishing attacks take longer to carry out than phishing attacks because of the information that must be gathered. Pharming: Similar to phishing, but pharming actually pollutes the contents of a computer’s DNS cache so that requests to a legitimate site are actually routed to an alternate site. Shoulder surfing: Occurs when an attacker watches a user enter login or other confidential data. Encourage users to always be aware of who is observing their actions. Implementing privacy screens helps ensure that data entry cannot be recorded. Identity theft: Occurs when someone obtains personal information, including driver’s license number, bank account number, and Social Security number, and uses that information to assume an identity of the individual whose information was stolen. After the identity is assumed, the attack can go in any direction. In most cases, attackers open financial accounts in the user’s name. Attackers also can gain access to the user’s valid accounts. Dumpster diving: Occurs when attackers examine garbage contents to obtain confidential information. This includes personnel information, account login information, network diagrams, and organizational financial data. Organizations should implement policies for shredding documents that contain this information.

6.

Terms

Definitions

Emanations

Electromagnetic signals that are emitted by an electronic device

Buffer overflow

Occurs when the amount of data that is submitted to an application is larger than the buffer can handle

Mobile code

Software that is transmitted across a network to be executed on a local system

Backdoor/trapdoor

A mechanism implemented in many devices or applications that gives the user who uses the backdoor unlimited access to the device

7. NetFlow. The traffic information is exported using UDP packets to a NetFlow analyzer, which can then organize the information in useful ways.

8. Answers can include
Facility: The source of the message. The source can be the operating system, the process, or an application.
Severity: Rated using a numeric scale.
Source: The log from which this entry came.
Action: The action taken on the packet.
Source: The source IP address and port number.
Destination: The destination IP address and port number.
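A minimal sketch of pulling such fields out of a log entry, assuming the traditional BSD syslog layout; the sample line and host names are invented:

```python
import re

# An invented syslog-style entry in the traditional BSD (RFC 3164) layout.
entry = "Oct 11 22:14:15 fw01 sshd[4721]: Failed password for root from 203.0.113.5"

# Capture the timestamp, host, process, PID, and message from the prefix.
pattern = r"^(\w{3}\s+\d+\s[\d:]+)\s(\S+)\s(\S+?)\[(\d+)\]:\s(.*)$"
m = re.match(pattern, entry)
timestamp, host, process, pid, message = m.groups()

print(host)     # fw01
print(process)  # sshd
print(message)  # Failed password for root from 203.0.113.5
```

Real deployments vary (RFC 5424 adds a priority field and structured data), so a production parser would need to handle more formats than this sketch does.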

Syslog messages all follow the same format because they have, for the most part, been standardized. 9.

Terms

Definitions

IPS

System that can take an action when a security event occurs

WAF

System that examines all web input before processing and applies rule sets to an HTTP conversation

Proxy

A server, application, or appliance that acts as an intermediary for requests from clients seeking resources from servers

IDS

System that can alert when a security event occurs

10. DomainKeys Identified Mail (DKIM). DKIM provides a method for validating a domain name identity that is associated with a message through cryptographic authentication.

CHAPTER 12 Do I Know This Already? 1. C. Rights allow administrators to assign specific privileges and logon rights to groups or users. Rights manage who is allowed to perform certain operations on an entire computer or within a domain, rather than a particular object within a computer. 2. A. Whitelisting is the process of identifying what values are acceptable (IP addresses, e-mail addresses, MAC addresses, web URLs, file types) while excluding all others. 3. C. A blacklist constitutes the file types that are denied, so you must constantly update this with new malicious file

types.

4. A. NGFWs are application aware, which means they can distinguish between specific applications instead of allowing all traffic coming in via typical web ports. Moreover, they examine packets only once, during the deep packet inspection phase (which is required to detect malware and anomalies).

5. A. A rule-based IPS is an expert system that uses a knowledge base, an inference engine, and rule-based programming. The knowledge is configured as rules.

6. B. Data loss prevention software uses ingress and egress filters to identify sensitive data that is leaving the organization and can prevent such leakage.

7. C. Endpoint detection and response is a proactive endpoint security approach designed to supplement existing defenses.

8. A. The goal of network access control is to examine all devices requesting network access for malware, missing security updates, and any other security issues the devices could potentially introduce to the network.

9. A. A sinkhole is a router designed to accept and analyze attack traffic. Sinkholes can be used to do the following:
Draw traffic away from a target
Monitor worm traffic
Monitor other malicious traffic
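The whitelisting described in answer 2 can be sketched as a simple allowlist check; the accepted file types here are illustrative only:

```python
# Illustrative allowlist: only these file types are accepted;
# everything else is denied by default.
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".txt"}

def is_allowed(filename: str) -> bool:
    # Whitelisting permits only known-good values and rejects the rest,
    # so a brand-new malicious file type is blocked without a list update.
    dot = filename.rfind(".")
    return dot != -1 and filename[dot:].lower() in ALLOWED_EXTENSIONS

print(is_allowed("report.pdf"))   # True
print(is_allowed("payload.exe"))  # False
```

Contrast this with the blacklist in answer 3, which must be updated every time a new malicious file type appears.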

10. B. Network security devices such as SIEM, IPS, IDS, and firewall systems must be able to recognize the malware when it is still contained in network packets before it reaches devices. This requires identifying a malware signature. 11. A. By using sandboxing tools, you can execute malware executable files without allowing the files to interact with the

local system.

12. B. Port security applies to ports on a switch, and because it relies on monitoring the MAC addresses of the devices attached to the switch ports, it is considered to be Layer 2 security.

Review Questions

1. right. Rights allow administrators to assign specific privileges and logon rights to groups or users. Rights manage who is allowed to perform certain operations on an entire computer or within a domain, rather than on a particular object within a computer.

2. Possible answers include
Cannot prevent:
IP spoofing
Attacks that are specific to an application
Attacks that depend on packet fragmentation
Attacks that take advantage of the TCP handshake

3.

Terms

Definitions

Screened subnet

Architecture where two firewalls are used, and traffic must be inspected at both firewalls before it can enter the internal network

NGFW

A category of devices that attempt to address traffic inspection and application awareness shortcomings of a traditional stateful firewall, without hampering the performance

Host-based firewall

Resides on a single host and is designed to protect that host only

iptables

Linux host-based firewall

4. Possible answers include the following:
Secure addresses from exposure
Support a multiprotocol environment
Allow for comprehensive logging

5. Network data loss prevention (DLP). There are two locations where you can implement DLP:
Network DLP: Installed at network egress points near the perimeter, network DLP analyzes network traffic.
Endpoint DLP: Endpoint DLP runs on end-user workstations or servers in the organization.

6.

Terms

Definitions

802.1X

Defines a framework for centralized port-based authentication

Network Access Protection (NAP)

Microsoft’s name for NAC services

Agent-based

NAC that can perform deep inspection and remediation at the expense of additional software on the endpoint

Location-based

Type of rule where a user might have one set of access rights when connected from another office and another set when connected from the Internet

7. Possible answers include the following:
Encrypts only the password in the access request packet
Does not support any of the following:
Apple Remote Access protocol
NetBIOS Frame Protocol Control protocol
X.25 PAD connections
Does not support securing the available commands on routers and switches

While RADIUS and TACACS+ perform the same roles, they have different characteristics. These differences must be taken into consideration when choosing a method. Keep in mind also that while RADIUS is a standard, TACACS+ is Cisco proprietary. 8. Sheep dip system. Another sandboxing option for studying malware is to set up a sheep dip computer. 9.

Terms

Definitions

Imaging tools

Used to take images for forensics and prosecution procedures

Registry/configuration tools

Used to help identify infected settings in the registry and to identify the last-saved settings

File/data analysis tools

Used to perform static analysis of potential malware files

Packet capture tools

Used to understand how the malware uses the network

10. Possible answers include the following:
Install port monitors to discover ports used by the malware.
Install file monitors to discover what changes may be made to files.
Install network monitors to identify what communications the malware may attempt.
Install one or more antivirus programs to perform malware analysis.

CHAPTER 13 Do I Know This Already?

1. A. The steps are as follows:
1. Ask a question.
2. Establish a hypothesis.
3. Conduct an experiment.
4. Analyze the results.
5. Make a conclusion.

2. A. The FBI has not singled out hacktivists as a major group and would probably include them in the category of terrorists since they seek to damage or deface in the name of a cause.

3. A. When the processor is very busy with very little or nothing running to generate the activity, it could be a sign that the processor is working on behalf of malicious software. Executable process analysis allows you to determine this. This is one of the key reasons any compromise is typically accompanied by a drop in performance.

4. A. The configuration lockdown setting helps support change control.

5. B. Some data requires special care and handling, especially when inappropriate handling could result in penalties, identity theft, financial loss, invasion of privacy, or unauthorized access by an individual or many individuals.

6. B. Attack Vector (AV) describes how the attacker would exploit the vulnerability and has four possible values:
L: Stands for Local and means that the attacker must have physical or logical access to the affected system.
A: Stands for Adjacent network and means that the attacker must be on the local network.
N: Stands for Network and means that the attacker can exploit the vulnerability from any network.
P: Stands for Physical and requires the attacker to physically touch or manipulate the vulnerable component.
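The AV metric travels inside a CVSS vector string, which can be pulled apart mechanically; the sample vector below is invented for illustration:

```python
# Parse the metrics out of a CVSS v3 vector string (sample vector invented).
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

# Skip the "CVSS:3.1" prefix and split each metric:value pair.
metrics = dict(part.split(":") for part in vector.split("/")[1:])

# AV:N means the attacker can exploit the flaw from any network.
print(metrics["AV"])  # N
print(metrics["C"])   # H
```

This kind of parsing is how scanners and SIEMs filter findings, for example keeping only network-exploitable (AV:N) vulnerabilities for perimeter triage.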

7. C. The Integrated Intelligence Center (IIC) is a unit at the Center for Internet Security (CIS) that focuses on merging cybersecurity and physical security to aid governments in dealing with emerging threats. IIC attempts to create

predictive models using the multiple data sources at its disposal.

8. C. Implementation results are analyzed to determine if the implementation made a difference in Step 3, Check. Deming’s Plan–Do–Check–Act cycle steps are as follows:
1. Plan: Identify an area for improvement and make a formal plan to implement it.
2. Do: Implement the plan on a small scale.
3. Check: Analyze the results of the implementation to determine whether it made a difference.
4. Act: If the implementation made a positive change, implement it on a wider scale. Continuously analyze the results.

Review Questions 1.

Step

Ask a question
State a hypothesis
Conduct an experiment
Analyze the results
Make a conclusion

Applying the scientific method to proactive threat hunting, making an educated guess about the aims and nature of an attack is the first step. Then you conduct experiments (or gather more network data) to either prove or disprove the

hypothesis. Then the process starts again with a new hypothesis if the old one has been disproved.

2. Possible answers include the following:
Threat Modeling Tool (formerly SDL Threat Modeling Tool) identifies threats based on the STRIDE threat classification scheme.
ThreatModeler identifies threats based on a customizable comprehensive threat library and is intended for collaborative use across all organizational stakeholders.
IriusRisk offers both community and commercial versions of a tool that focuses on the creation and maintenance of a live threat model through the entire SDLC. It connects with several different tools to empower automation.
securiCAD focuses on threat modeling of IT infrastructures using a computer-aided design (CAD) approach where assets are automatically or manually placed on a drawing pane.
SD Elements is a software security requirements management platform that includes automated threat modeling capabilities.

3. Executable process analysis. When the processor is very busy with very little or nothing running to generate the activity, it could be a sign that the processor is working on behalf of malicious software. Executable process analysis allows you to determine this. 4.

Terms

Definitions

System hardening

Ensures that all systems have been secured to the fullest extent possible and still provide functionality

Configuration lockdown

Prevents any changes to the configuration, even by users who formerly had the right to configure the device

Data classification policy

Critical to all systems to protect the confidentiality, integrity, and availability (CIA) of data

Sensitivity

A measure of how freely data can be handled

Criticality

A measure of the importance of the data

5. The military/government data classification levels in order are as follows:
1. Top secret: Data that is top secret includes weapon blueprints, technology specifications, spy satellite information, and other military information that could gravely damage national security if disclosed.
2. Secret: Data that is secret includes deployment plans, missile placement, and other information that could seriously damage national security if disclosed.
3. Confidential: Data that is confidential includes patents, trade secrets, and other information that could seriously affect the government if unauthorized disclosure occurred.
4. Sensitive but unclassified: Data that is sensitive but unclassified includes medical or other personal data that might not cause serious damage to national security but could cause citizens to question the reputation of the government.
5. Unclassified: Military and government information that does not fall into any of the other four categories is considered unclassified and usually has to be granted to the public based on the Freedom of Information Act.

6. attack vector. Each attack vector can be thought of as a source of malicious content or a potentially vulnerable processor of that malicious content. 7.

Terms

Definitions

Intelligence integration

The consideration and analysis of intelligence data from a perspective that combines multiple data sources and attempts to make inferences based on this data integration

CMaaS

Solution deployed by cloud service providers for improvement

Hunt teaming

New approach to security that is offensive in nature rather than defensive, which has been common for security teams in the past

State sponsor

Foreign government interested in pilfering data, including intellectual property

8. Possible answers include the following:
Remove unnecessary applications.
Disable unnecessary services.
Block unrequired ports.
Tightly control the connecting of external storage devices and media, if allowed at all.

Another of the ongoing goals of operations security is to ensure that all systems have been hardened to the extent possible and still provide functionality.

9. value, sensitivity. Data should be classified based on its value to the organization and its sensitivity to disclosure. Assigning a value to data allows an organization to determine the resources that should be used to protect the data. 10.

Terms

Definitions

Process Explorer

Enables you to look at the graph that appears in Task Manager and identify what caused spikes in the past, which is not possible with Task Manager alone

Hypothesis

A proposed explanation of something

Black hat

Actor with malicious intent

Threat model

A conceptual design that attempts to provide a framework on which to implement security efforts

CHAPTER 14 Do I Know This Already?

1. A. Workflow orchestration can be used in the security world. Examples include
Dynamic incident response plans that adapt in real time
Automated workflows to empower analysts and enable faster response

2. B. Common scripting languages include the following:
bash: Used to work in the Linux interface
Node.js: Framework to write network applications using JavaScript
Ruby: Great for web development
Python: Supports procedure-oriented programming and object-oriented programming
Perl: Found on all Linux servers, helps in text manipulation tasks

3. C. An API is a set of clearly defined methods of communication between various software components. As such, you should think of an API as a connection point that requires security consideration; for example, between your e-commerce site and a payment gateway.

4. C. Automated malware signature creation is an additional method of identifying malware. The antivirus software monitors incoming unknown files for the presence of malware and analyzes each file based on both classifiers of file behavior and classifiers of file content.

5. D. Data enrichment is a technique that allows one process to gather information from another process or source and then customize a response to a third process using the data from the second process or source.

6. A. Although threat feeds can tell you about malware out in the wild, they can’t tell you whether you are currently infected.
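The data enrichment pattern in answer 5 can be sketched as one process consulting a second source before answering a third; the reputation table and IP addresses below are mock data:

```python
# Mock second source: a reputation lookup keyed by IP address (invented data).
REPUTATION_FEED = {"203.0.113.5": "malicious", "198.51.100.7": "benign"}

def enrich_alert(alert: dict) -> dict:
    # Gather context from the second source and attach it to the alert
    # before it is handed to the next process in the workflow.
    verdict = REPUTATION_FEED.get(alert["src_ip"], "unknown")
    return {**alert, "reputation": verdict}

alert = {"src_ip": "203.0.113.5", "event": "port scan"}
print(enrich_alert(alert)["reputation"])  # malicious
```

In a SOAR platform the second source would typically be a threat intelligence API rather than a local dictionary, but the flow is the same: gather, merge, pass along.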

7. B. Automatic exploit generation (AEG) is the “first end-to-end system for fully automatic exploit generation,” according to Carnegie Mellon University’s own description of its AI named Mayhem. Developed for off-the-shelf as well as the enterprise software being increasingly used in smart devices and appliances, AEG can find a bug and determine whether it is exploitable.

8. C. The Security Content Automation Protocol (SCAP) standardizes the nomenclature and formats used. A vendor of a security automation product can obtain a validation against SCAP, demonstrating that its product will interoperate with other scanners and express the scan results in a standardized way.

9. B. The idea behind continuous integration is to identify bugs as early as possible in the development process.

10. C. Continuous deployment/delivery takes continuous integration one step further, with every change that passes all stages of your production pipeline being released to your customers. This helps to improve the feedback loop.

Review Questions

1. Orchestration. Over time, orchestration has been increasingly used to automate processes that were formerly carried out manually by humans.

2. Possible answers include the following:

Dynamic incident response plans that adapt in real time

Automated workflows to empower analysts and enable faster response

Orchestration is the sequencing of events based on certain parameters by using scripting and scripting tools.

3. Terms and definitions:

Ruby: Great for web development

Perl: Found on all Linux servers; helps in text manipulation tasks

Python: Supports procedure-oriented programming and object-oriented programming

bash: Used to work in the Linux interface

4. Windows PowerShell

5. Possible answers include the following:

Common Configuration Enumeration (CCE): These are configuration best practice statements maintained by the National Institute of Standards and Technology (NIST).

Common Platform Enumeration (CPE): These are methods for describing and classifying operating systems, applications, and hardware devices.

Common Weakness Enumeration (CWE): These are design flaws in the development of software that can lead to vulnerabilities.

Common Vulnerabilities and Exposures (CVE): These are vulnerabilities in published operating systems and applications software.
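As a concrete example of the CPE entry above, a CPE 2.3 formatted string can be split into its named attributes. The field order follows the published CPE 2.3 naming layout; the sample string is illustrative, and this sketch ignores escaped colons that real names may contain.

```python
# Sketch of splitting a CPE 2.3 formatted string into its attributes.
# Attribute order per the CPE 2.3 naming layout; sample string invented.

CPE_FIELDS = ["part", "vendor", "product", "version", "update", "edition",
              "language", "sw_edition", "target_sw", "target_hw", "other"]

def parse_cpe(cpe):
    """Map each colon-separated field of a CPE 2.3 string to its name."""
    prefix, version, rest = cpe.split(":", 2)
    if prefix != "cpe" or version != "2.3":
        raise ValueError("not a CPE 2.3 formatted string")
    return dict(zip(CPE_FIELDS, rest.split(":")))

name = parse_cpe("cpe:2.3:a:apache:http_server:2.4.54:*:*:*:*:*:*:*")
```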

6. scripting. Examples of scripting tools are Puppet, Chef, and Ansible.

7. Possible answers include the following:

Suspicious domains

Lists of known malware hashes

IP addresses associated with malicious activity

Threat intelligence feeds are constantly updating streams of indicators or artifacts derived from a source outside the organization.

8. Terms and definitions:

CCE: Configuration best practice statements maintained by NIST

CVE: Vulnerabilities in published operating systems and applications software

CWE: Design flaws in the development of software that can lead to vulnerabilities

CPE: Methods for describing and classifying operating systems, applications, and hardware devices

9. Continuous integration. The idea behind this is to identify bugs as early as possible in the development process.

10. Possible answers include the following:

Combine: Gathers threat intelligence feeds from publicly available sources

Palo Alto Networks AutoFocus: Provides intelligence, correlation, added context, and automated prevention workflows

Anomali ThreatStream: Helps deduplicate data, removes false positives, and feeds intelligence to security tools

ThreatQuotient: Helps accelerate security operations with an integrated threat library and shared contextual intelligence

ThreatConnect: Combines external threat data from trusted sources with in-house data

Using SIEM (or other aggregation tools) to aggregate threat feeds can also be beneficial.

11. Terms and definitions:

Threat feed: Constantly updating streams of indicators or artifacts derived from a source outside the organization

Data enrichment: Technique that allows one process to gather information from another process or source and then customize a response using the data from the second process or source

Automated malware signature creation: Additional method of identifying malware

API: Set of clearly defined methods of communication between various software components

12. Apps. In the VMware world, technicians can create what are called apps. Apps are groups of virtual machines (VMs) that are managed and orchestrated as a unit to provide a service to users.

CHAPTER 15 Do I Know This Already?

1. B. The content of these communications should be limited to what is necessary for each stakeholder to perform his or her role.

2. A. In the healthcare field, the HIPAA Breach Notification Rule, 45 CFR §§ 164.400-414, requires HIPAA covered entities and their business associates to provide notification following a breach of unsecured protected health information (PHI).

3. B. The role of the legal department is to perform the following:

Review nondisclosure agreements (NDAs) to ensure support for incident response efforts.

Develop wording of documents used to contact possibly affected sites and organizations.

Assess site liability for illegal computer activity.

4. D. Public relations (PR) roles include the following:

Handling all press conferences that may be held

Developing all written responses to the outside world concerning an incident and its response

5. C. As part of the security measures that organizations must take to protect privacy, personally identifiable information (PII) must be understood, identified, and protected. PII is any piece of data that can be used alone or with other information to identify a single person.

6. D. Contracts are not considered intellectual property because they are not unique creations of the mind.

Review Questions

1. public relations. All information released to the public and the press should be handled by public relations or persons trained for this type of communication.

2. Possible answers include the following:

Develop job descriptions for those persons who will be hired for positions involved in incident response.

Create policies and procedures that support the removal of employees found to be engaging in improper or illegal activity.

HR should ensure that these activities are spelled out in policies and new hire documents as activities that are punishable by firing.

3. Terms and definitions:

HIPAA Breach Notification Rule: Requires covered entities and their business associates to provide notification following a loss of unsecured protected health information (PHI)

USA PATRIOT Act: Enhanced the investigatory tools available to law enforcement

Payment Card Industry Data Security Standard (PCI DSS): Affects any organizations that handle cardholder information for the major credit card companies

Kennedy-Kassebaum Act: Also known as HIPAA

4. human resources (HR). The role of the HR department involves the following responsibilities in incident response:

Develop job descriptions for those persons who will be hired for positions involved in incident response.

Create policies and procedures that support the removal of employees found to be engaging in improper or illegal activity.

5. Possible answers include the following:

Communicate the importance of the incident response plan to all parts of the organization.

Create agreements that detail the authority of the incident response team to take over business systems if necessary.

Create decision systems for determining when key systems must be removed from the network.

The most important factor in the success of an incident response plan is the support, both verbal and financial (through the budget process), of upper management.

6. Terms and definitions:

Personally identifiable information: Any piece of data that can be used alone or with other information to identify a single person

Criticality: Measure of the importance of the data

Sensitivity: Measure of how freely data can be handled

Personal health information: Medical records of individuals

7. senior leadership. Moreover, all other levels of management should fall in line with support of all efforts.

8. Possible answers include the following:

Will you be able to recover the data in case of disaster?

How long will it take to recover the data?

What is the effect of this downtime, including loss of public standing?

Data is considered critical when it is essential to the organization’s business.

9. Terms and definitions:

Patent: Granted to an individual or a company to protect an invention

Trade secret: Gives an organization a competitive edge; includes recipes, formulas, ingredient listings, and so on

Trademark: Identifies a product protected from being used by another organization

Copyright: Ensures that a work that is authored is protected from any form of reproduction or use without the consent of the holder

10. corporate confidential data. Corporate confidential data is anything that needs to be kept confidential within the organization.

CHAPTER 16 Do I Know This Already?

1. C. The steps in the incident response process are as follows:

1. Preparation
2. Detection
3. Analysis
4. Containment
5. Eradication and recovery
6. Post-incident activities

2. C. Technical staff should receive technical training on configuring and maintaining security controls, including how to recognize an attack when it occurs. In addition, technical staff should be encouraged to pursue industry certifications and higher education degrees.

3. A. The scope determines the impact and is a function of how widespread the incident is and the potential economic and intangible impacts it could have on the business.

4. C. Mean time to repair (MTTR) is the average time required to repair a single resource or function when a disaster or disruption occurs.

5. B. The segmentation process involves limiting the scope of an incident by leveraging existing segments of the network as barriers to prevent the spread to other segments. These segments could be defined at either Layer 3 or Layer 2 of the OSI reference model.

6. A. You can use port security to isolate a device at Layer 2 without removing it from the network.

7. B. Clearing includes removing data from the media so that it cannot be reconstructed using normal file recovery techniques and tools. With this method, the data is recoverable only using special forensic techniques.

8. D. Sanitization refers to removing all traces of a threat by overwriting the drive multiple times to ensure that the threat is removed. This works well for mechanical hard disk drives, but solid-state drives present a challenge in that they cannot be overwritten.

9. A. Indicators of compromise (IoCs) are behaviors and activities that precede or accompany a security incident.

10. C. The first document that should be drafted is a lessons learned report. It briefly lists and discusses what was learned about how and why the incident occurred and how to prevent it from occurring again.

Review Questions

1. preparation. Responders should be well prepared and equipped with all the tools they need to provide a robust response.

2. The steps are as follows:

1. Preparation
2. Detection
3. Analysis
4. Containment
5. Eradication and recovery
6. Post-incident activities

Incident response procedures should be clearly documented.

3. Terms and definitions:

Maximum tolerable downtime (MTD): The maximum amount of time that an organization can tolerate a single resource or function being down

Mean time to repair (MTTR): The average time required to repair a single resource or function

Mean time between failures (MTBF): The estimated amount of time a device will operate before a failure occurs

Recovery time objective (RTO): The shortest time period after a disaster or disruptive event within which a resource or function must be restored in order to avoid unacceptable consequences

4. Remediation. This step involves eliminating any residual danger or damage to the network that still might exist. For example, in the case of a virus outbreak, it could mean scanning all systems to root out any additional affected machines. These measures are designed to make a more detailed mitigation when time allows.

5. Possible answers include the following:

Value to owner

Work required to develop or obtain the asset

Costs to maintain the asset

Damage that would result if the asset were lost

Cost that competitors would pay for the asset

Penalties that would result if the asset were lost

The value of an asset should be considered with respect to the asset owner’s view.

6. Terms and definitions:

Segmentation: Limiting the scope of an incident by leveraging existing segments of the network as barriers to prevent the spread to other segments

Sanitization: Removing all traces of a threat by overwriting the drive multiple times

Clearing: Removing data from the media so that it cannot be reconstructed using normal file recovery techniques and tools

Purging: Makes the data unreadable even with advanced forensic techniques

7. call list/escalation list. First responders to an incident should have contact information for all individuals who might need to be alerted during the investigation.

8. Possible answers include the following:

Disassembly

Decompiling

Debugging

With respect to reverse engineering malware, this process refers to extracting the code from the binary executable to identify how it was programmed and what it does.

9. Terms and definitions:

Disassembly: Reading the machine code into memory and then outputting each instruction as a text string

Reverse engineering: Retracing the steps in an incident, as seen from the log

Debugging: Stepping through the code interactively

Decompiling: Process that attempts to reconstruct the high-level language source code

10. Indicators of compromise (IoCs). You should always record or generate the IoCs that you find related to the incident. This information may be used to detect the same sort of incident later, before it advances to the point of a breach.

CHAPTER 17

Do I Know This Already?

1. C. Whenever bandwidth usage is above normal and there is no known legitimate activity generating the traffic, you should suspect security issues that generate unusual amounts of traffic, such as denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks.

2. B. At the very least, illegal file sharing could be occurring, and at the worst, this peer-to-peer (P2P) communication could be the result of a botnet. Peer-to-peer botnets differ from normal botnets in their structure and operation.

3. A. Also known as ICMP sweeps, ping sweeps use ICMP to identify all live hosts by pinging all IP addresses in the known network. All devices that answer are up and running.

4. D. Locating unauthorized software cannot be done by using Task Manager.

5. B. You can sometimes locate processes that are using either CPU or memory by using Task Manager, but again, many malware programs don’t show up in Task Manager. Either Process Explorer or some other tool may give better results than Task Manager.

6. B. The System File Checker (SFC) is a utility built into Windows 10 that checks for and restores corrupt operating system files.

7. C. Any unexpected outbound traffic should be investigated, regardless of whether it was discovered as a result of network monitoring or as a result of monitoring the host or application. With regard to the application, it can mean that data is being transmitted back to the malicious individual.

8. A. Event Viewer displays the Application log, an event log dedicated to errors and issues related to applications.

9. D. Beaconing is a network-related IoC.

Review Questions

1. Beaconing. This type of traffic could be generated by compromised hosts that are attempting to communicate with (or call home) the malicious party that compromised the host.

2. Possible answers include the following:

Bandwidth consumption

Beaconing

Irregular peer-to-peer communication

Scan/sweep

Unusual traffic spike

Common protocol over non-standard port
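Since beaconing is traffic that leaves at regular intervals, a simple heuristic is to look for low variation in the gaps between connection timestamps. This sketch and its 10% tolerance are illustrative only, not a production detector.

```python
# Illustrative beaconing heuristic (not from the book): flag a set of
# connection timestamps whose inter-arrival gaps are nearly constant.

def looks_like_beacon(timestamps, tolerance=0.1):
    """True if the spread of gaps is under `tolerance` of the mean gap.
    Requires at least three timestamps (two gaps) to judge regularity."""
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps)
    if mean_gap <= 0:
        return False
    spread = max(gaps) - min(gaps)
    return spread / mean_gap <= tolerance

regular = [0, 60, 120, 181, 240]   # roughly one connection per minute
```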

3. Terms and definitions:

Beaconing: Traffic that leaves a network at regular intervals

Data exfiltration: Data loss through the network

Rogue device: Device you do not control

IoC: Behavior that indicates a possible compromise

4. Application log. Events in this log are classified as error, warning, or information, depending on the severity of the event.

5. Possible answers include the following:

Processor consumption

Drive capacity consumption

Unauthorized software

Malicious process

Unauthorized change

Unauthorized privilege

Data exfiltration

Abnormal OS process behavior

These are behaviors of a single system rather than network symptoms.

6. Terms and definitions:

Peer-to-peer botnet: Botnet in which devices that can be reached externally are compromised and run server software that turns them into command and control servers for the devices that are recruited internally that cannot communicate with the command and control server operating externally

Traditional botnet: Botnet in which all the zombies communicate directly with the command and control server, which is located outside the network

Wireless keylogger: Collects information and transmits it to the criminal via Bluetooth or Wi-Fi

Wireless intrusion prevention system (WIPS): Not only can alert you when any unknown device is in the area (APs and stations) but can take a number of actions

7. Process Explorer. Process Explorer is a Sysinternals tool that enables you to see in the Notification area the top CPU offender, without requiring you to open Task Manager.

8. Possible answers include the following:

Anomalous activity

Introduction of new accounts

Unexpected output

Unexpected outbound communication

Service interruption

Application log

In some cases, symptoms are not present on the network or in the activities of the host operating system, but they are present in the behavior displayed by a compromised application.

9. Terms and definitions:

Ping sweep: Uses ICMP to identify all live hosts by pinging all IP addresses in the known network

Port scan: Attempts to connect to every port on each device and report which ports are open, or “listening”

Vulnerability scan: Locates vulnerabilities in systems

Uncredentialed scan: Scanner lacks administrative privileges on the device it is scanning

10. uncredentialed scan. The good news is that uncredentialed scans expose less information than credentialed scans.

CHAPTER 18 Do I Know This Already?

1. A. One of the most widely used packet analyzers is Wireshark. It captures raw packets off the interface on which it is configured and allows you to examine each packet. If the data is unencrypted, you can read the data.

2. D. One of the most well-known password cracking programs is Cain and Abel. It can recover passwords by sniffing the network; crack encrypted passwords using dictionary, brute-force, and cryptanalysis attacks; record VoIP conversations; decode scrambled passwords; reveal password boxes; uncover cached passwords; and analyze routing protocols.

3. C. Cellebrite has found a niche by focusing on collecting evidence from smartphones.

4. B. Maintenance costs are lower because there is only one centralized component to maintain, and it is maintained by the vendor (not the end client).

5. D. Using forensic tools for the virtual environment does not require access to the hypervisor code. In fact, you will not have access to that code as you are a licensed user and not the owner of the code.

6. C. Legal holds often require that organizations maintain archived data for longer periods. Data on a legal hold must be properly identified, and the appropriate security controls should be put into place to ensure that the data cannot be tampered with or deleted.

7. A. One of the tasks you will be performing as a security professional is making copies of storage devices. For this you need a disk imaging tool.

8. D. SHA-3, the latest version, is actually a family of hash functions, each of which provides different functional limits.

9. D. Forensic Explorer is a data carving tool that searches for signatures. It offers carving support for more than 300 file types. It supports the following:

Cluster-based file carving

Sector-based file carving

Byte-based file carving

10. C. Cellebrite has found a niche by focusing on collecting evidence from smartphones.

Review Questions

1. tcpdump. tcpdump is a command-line tool that can capture packets on Linux and Unix platforms. A version for Windows, windump, is available as well.

2. Possible answers include the following:

Cain and Abel

John the Ripper

In the process of executing a forensic investigation, it may be necessary to crack passwords. Often files have been encrypted or password protected by malicious individuals, and you need to attempt to recover the password.

3. Terms and definitions:

Legal hold: Often requires that organizations maintain archived data for longer periods

Hashing: Process used to determine the integrity of files

Carving: Forensic technique used when only fragments of data are available and when no file system metadata is available

tcpdump: A command-line tool that can capture packets on Linux and Unix platforms
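Byte-based signature carving, as described above, can be sketched by scanning raw bytes for a known header and footer. The JPEG start-of-image (FF D8 FF) and end-of-image (FF D9) markers are standard; the surrounding "disk image" bytes here are made up.

```python
# Minimal signature-carving sketch: cut the first JPEG-looking span
# out of a raw byte blob. Real carvers handle many file types,
# fragmentation, and false-positive markers; this does not.

JPEG_HEADER = b"\xff\xd8\xff"   # standard JPEG start-of-image marker
JPEG_FOOTER = b"\xff\xd9"       # standard JPEG end-of-image marker

def carve_jpeg(blob):
    """Return the first JPEG-delimited span in blob, or None."""
    start = blob.find(JPEG_HEADER)
    if start == -1:
        return None
    end = blob.find(JPEG_FOOTER, start + len(JPEG_HEADER))
    if end == -1:
        return None
    return blob[start:end + len(JPEG_FOOTER)]

# Fabricated "raw disk" bytes with one embedded JPEG-like region.
image = b"\x00" * 16 + JPEG_HEADER + b"\xe0fakejpegdata" + JPEG_FOOTER + b"\x00" * 8
carved = carve_jpeg(image)
```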

4. dcfldd. By simply using dd with the proper parameters and the correct syntax, you can make an image of a disk, but dcfldd enables you to also generate a hash of the source disk at the same time.

5. Possible answers include the following:

Memdump: This free tool runs on Windows, Linux, and Solaris. It simply creates a bit-by-bit copy of the volatile memory on a system.

KnTTools: This memory acquisition and analysis tool used with Windows systems captures physical memory and stores it to a removable drive or sends it over the network to be archived on a separate machine.

FATKit: This popular memory forensic tool automates the process of extracting interesting data from volatile memory. FATKit helps an analyst visualize the objects it finds to help in understanding the data that the tool was able to find.

Many penetration testing tools perform an operation called a core dump or memory dump. Hackers can use memory-reading tools to analyze the entire memory content used by an application.
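The hash that dcfldd generates alongside an image supports the integrity check hashing provides: hash at acquisition time, hash the working copy later, and compare. A minimal sketch using Python's standard hashlib with a SHA-3 function (the family noted in answer 8); the "evidence" bytes are placeholders.

```python
# Sketch of the integrity check an imaging tool automates: record a
# hash when the image is made, re-hash the copy later, and compare.
import hashlib

def fingerprint(data):
    """Hex digest of the bytes, using SHA3-256 from the SHA-3 family."""
    return hashlib.sha3_256(data).hexdigest()

source = b"raw sectors read from the seized drive"   # placeholder bytes
acquired_hash = fingerprint(source)   # recorded at acquisition time

working_copy = bytes(source)          # the copy an analyst examines
unaltered = fingerprint(working_copy) == acquired_hash
```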

Many penetration testing tools perform an operation called a core dump or memory dump. Hackers can use memoryreading tools to analyze the entire memory content used by an application. 6.

Terms

Definitions

Forensic Toolkit (FTK)

A commercial toolkit that can scan a hard drive for all sorts of information

Helix

Live CD with which you can acquire evidence and make drive images

John the Ripper

Password cracker that can work in Linux or Unix as well as macOS

dd

Linux command that is used is to convert and copy files

7. smartphones. Cellebrite makes extraction devices that can be used in the field and software that does the same things.

8. Possible answers include the following:

Installation costs are low because there is no installation and configuration for the client to complete.

Maintenance costs are low because there is only one centralized component to maintain, and it is maintained by the vendor (not the end client).

Upgrades are included in a subscription.

Costs are distributed among all customers.

It does not require the client to provide onsite equipment.

However, a considerable disadvantage to the cloud-based approach is that the data is resident with the provider.

9. Terms and definitions:

Memdump: Free tool that runs on Windows, Linux, and Solaris and simply creates a bit-by-bit copy of the volatile memory on a system

KnTTools: Memory acquisition and analysis tool used with Windows systems

FATKit: Memory forensic tool that automates the process of extracting interesting data from volatile memory

Qualys: A cloud-based vulnerability scanner

10. Legal holds. Data on a legal hold must be properly identified, and the appropriate security controls should be put into place to ensure that the data cannot be tampered with or deleted.

CHAPTER 19 Do I Know This Already?

1. B. Privacy relates to rights to control the sharing and use of one’s personal information. This type of information is called personally identifiable information (PII).

2. B. A privacy impact assessment (PIA) is a risk assessment that determines risks associated with PII collection, use, storage, and transmission. A PIA should determine whether appropriate PII controls and safeguards are implemented to prevent PII disclosure or compromise.

3. A. As part of prevention of privacy policy violations, any contracted third parties that have access to PII should be assessed to ensure that the appropriate controls are in place. In addition, third-party personnel should be familiarized with organizational policies and should sign non-disclosure agreements (NDAs).

4. A. Sensitivity is a measure of how freely data can be handled. Some data requires special care and handling, especially when inappropriate handling could result in penalties, identity theft, financial loss, invasion of privacy, or unauthorized access by an individual or many individuals.

5. B. The Payment Card Industry Data Security Standard (PCI DSS) affects any organizations that handle cardholder information for the major credit card companies. The latest version is 3.2.1.

6. D. The Health Insurance Portability and Accountability Act (HIPAA), also known as the Kennedy-Kassebaum Act, affects all healthcare facilities, health insurance companies, and healthcare clearinghouses. It is enforced by the Office of Civil Rights (OCR) of the Department of Health and Human Services (HHS).

7. A. Encryption and cryptography are technologies that comprise a technical control that can be used to provide the confidentiality objective of the CIA triad.

8. B. Cryptography in the form of hashing algorithms provides a way to assess data integrity.

9. B. Data masking means altering data from its original state to protect it. Two forms of masking are encryption (storing the data in an encrypted form) and hashing (storing a hash value, generated from the data by a hashing algorithm, rather than the data itself).

Review Questions

1. value. Assigning a value to data allows an organization to determine the resources that should be used to protect the data.

2. Possible answers include the following:

Will you be able to recover the data in case of disaster?

How long will it take to recover the data?

What is the effect of this downtime, including loss of public standing?

Data is considered essential when it is critical to the organization’s business.

3. Terms and definitions:

Sensitivity: A measure of how freely data can be handled

Criticality: A measure of the importance of the data

Geofencing: The application of geographic limits to where a device can be used

Data sovereignty: The concept that data stored in digital format is subject to the laws of the country in which the data is located

4. data retention. A retention policy usually identifies the purpose of the policy, the portion of the organization affected by the policy, any exclusions to the policy, the personnel responsible for overseeing the policy, the personnel responsible for data destruction, the data types covered by the policy, and the retention schedule.

5. Possible answers include the following:

If the data subject has given consent to the processing of his or her personal data

To fulfill contractual obligations with a data subject, or for tasks at the request of a data subject who is in the process of entering into a contract

To comply with a data controller’s legal obligations

To protect the vital interests of a data subject or another individual

To perform a task in the public interest or in official authority

For the legitimate interests of a data controller or a third party, unless these interests are overridden by interests of the data subject or her or his rights according to the Charter of Fundamental Rights (especially in the case of children)

6. Terms and definitions:

Tokenization: Another form of data hiding or masking in that it replaces a value with a token that is used instead of the actual value

Digital watermarking: Involves embedding a logo or trademark in documents, pictures, or other objects

AACS: Protects Blu-ray and HD DVD content, though hackers have been able to obtain the encryption keys to this system

PCI DSS: Affects any organizations that handle cardholder information for the major credit card companies

7. Data masking means altering data from its original state to protect it. Two forms of masking are encryption and hashing.

8. Possible answers include the following:

Using substitution tables and aliases for the data

Redacting or replacing the sensitive data with a random value

Averaging or taking individual values and averaging them (adding them and then dividing by the number of individual values) or aggregating them (totaling them and using only the total value)

Encrypting the data

Hashing the data
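Three of the masking techniques above — redaction, hashing, and a substitution/alias table — can be sketched in a few lines of Python. All sample values and alias names are invented for illustration.

```python
# Illustrative masking sketches (not from the book): redaction,
# hash-based masking, and a substitution table.
import hashlib

def redact(value, keep=4):
    """Redaction: replace all but the last `keep` characters with '*'."""
    return "*" * (len(value) - keep) + value[-keep:]

def mask_hash(value):
    """Hashing: store a hash of the value instead of the value itself."""
    return hashlib.sha256(value.encode()).hexdigest()

# Substitution table mapping real identities to aliases (made up).
ALIASES = {"Alice Smith": "Patient-0001"}

card = "4111111111111111"        # well-known test card number
masked_card = redact(card)       # keeps only the last four digits
```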

9. Terms and definitions:

HIPAA (Health Insurance Portability and Accountability Act): Legislation affecting healthcare facilities

SOX (Sarbanes-Oxley Act): Affects any organization that is publicly traded in the United States

GLBA (Gramm-Leach-Bliley Act): Affects all financial institutions, including banks, loan companies, insurance companies, investment companies, and credit card providers

CFAA (Computer Fraud and Abuse Act): Affects any entities that might engage in hacking of “protected computers,” as defined in the act

10. Geofencing. Geofencing depends on the use of Global Positioning System (GPS) or radio frequency identification (RFID) technology to create a virtual geographic boundary.
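A minimal sketch of the geofencing idea: a GPS coordinate is tested against a virtual boundary, here a simple circle around a hypothetical office. The coordinates, radius, and helper names are assumptions for illustration, not from the book.

```python
# Geofence sketch: a point is "inside the fence" if its great-circle
# distance from a center point is within a chosen radius.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

FENCE_CENTER = (40.7128, -74.0060)   # hypothetical office location
FENCE_RADIUS_KM = 1.0                # illustrative boundary radius

def inside_fence(lat, lon):
    """True if the coordinate falls within the circular geofence."""
    return haversine_km(lat, lon, *FENCE_CENTER) <= FENCE_RADIUS_KM
```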

CHAPTER 20 Do I Know This Already?

1. C. The four main steps of the business impact analysis (BIA) are as follows:

1. Identify critical processes and resources.
2. Identify outage impacts and estimate downtime.
3. Identify resource requirements.
4. Identify recovery priorities.

2. B. Risk assessment (or analysis) has four main goals:

Identify assets and asset value.

Identify vulnerabilities and threats.

Calculate threat probability and business impact.

Balance threat impact with countermeasure costs.

3. B. Single loss expectancy (SLE) is the monetary impact of each threat occurrence. To determine the SLE, you must know the asset value (AV) and the exposure factor (EF). The EF is the percentage value or functionality of an asset that will be lost when a threat event occurs. The calculation for obtaining the SLE is as follows: SLE = AV × EF

4. C. The non-technical leadership audience needs the message to be put in context with their responsibilities. This audience needs the cost of cybersecurity expenditures to be tied to business performance.

5. A. Risk avoidance consists of terminating the activity that causes a risk or choosing an alternative that is not as risky.

6. B. Certification evaluates the technical system components, whereas accreditation occurs when the adequacy of a system’s overall security is accepted by management.

7. B. The first step is to obtain management support, which is critical to both the support of the program and its budget.

8. A. Compensative controls are put in place to substitute for a primary access control and mainly act to mitigate risks. By using compensative controls, you can reduce risk to a more manageable level.

9. B. The red team acts as the attacking force. It typically carries out penetration tests by following a well-established process of gathering information about the network, scanning the network for vulnerabilities, and then attempting to take advantage of the vulnerabilities.

Review Questions

1. Business Continuity Planning (BCP) committee. The BIA relies heavily on any vulnerability analysis and risk assessment that is completed.

2. The four main steps of the BIA are as follows:

1. Identify critical processes and resources.
2. Identify outage impacts and estimate downtime.
3. Identify resource requirements.
4. Identify recovery priorities.

The BIA helps the organization to understand what impact a disruptive event would have on the organization.

3. Terms and definitions:

BIA: Lists the critical and necessary business functions, their resource dependencies, and their level of criticality to the overall organization

Red team: Acts as the attacking force during testing

Blue team: Acts as the network defense team during testing

White team: Group of technicians who referee the encounter during testing

4. Quantitative risk analysis. Equations are used to determine total and residual risks. An advantage of quantitative over qualitative risk analysis is that quantitative uses less guesswork than qualitative.

5. 5000.00. The calculation for obtaining the SLE is as follows: SLE = AV × EF (20,000.00 × .25 = 5000.00)

6.

Terms

Definitions

Tabletop exercise

An informal brainstorming session that encourages participation from business leaders and other key employees

Business Continuity Planning (BCP) committee

Performs vulnerability analysis and risk assessment

Organizational governance

Process of controlling an organization’s activities, processes, and operations

7. risk assessment matrix. Subject experts grade all risks based on their likelihood and impact.

8. Possible answers include the following:
Risk avoidance: Terminating the activity that causes a risk or choosing an alternative that is not as risky

Risk transfer: Passing on the risk to a third party, such as an insurance company
Risk mitigation: Defining the acceptable risk level the organization can tolerate and reducing the risk to that level
Risk acceptance: Understanding and accepting the level of risk as well as the cost of damages that can occur

9.

MOU: Document that, while not legally binding, indicates a general agreement between the principals to do something together
SLA: Document that specifies a service to be provided by a party
BCP committee: Performs vulnerability analysis and risk assessment
BIA: Functional analysis that occurs as part of business continuity and disaster recovery

10. ALE = SLE × ARO. The annual loss expectancy (ALE) is the expected risk factor of an annual threat event. To determine the ALE, you must know the single loss expectancy (SLE) and the annualized rate of occurrence (ARO). The ARO is the estimate of how often a given threat might occur annually.
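The SLE and ALE formulas above can be sketched in Python. This is a minimal illustration; the function names and the ARO value of 2 are assumptions for the example, not values from the exam.

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF: the monetary impact of a single threat occurrence."""
    return asset_value * exposure_factor


def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO: the expected annual loss from a threat."""
    return sle * aro


# Values from review question 5: AV = 20,000.00 and EF = 25%
sle = single_loss_expectancy(20_000.00, 0.25)
print(sle)  # 5000.0

# Hypothetical ARO of 2 occurrences per year
print(annual_loss_expectancy(sle, 2))  # 10000.0
```

Working the numbers in code this way is a quick check that you have the AV, EF, and ARO terms in the right places.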

CHAPTER 21

Do I Know This Already?

1. C. The four domains are
Business architecture: Business strategy, governance, organization, and key business processes
Application architecture: Individual systems to be deployed, interactions between the application systems, and their relationships to the core business processes
Data architecture: Structure of an organization’s logical and physical data assets
Technology architecture: Hardware, software, and network infrastructure

2. D. NIST SP 800-53 Rev 4 is a security controls development framework developed by NIST, a body of the U.S. Department of Commerce.

3. D. A code of conduct policy is one intended to demonstrate a commitment to ethics in the activities of the principals. It is typically a broad statement of commitment that is supported by detailed procedures designed to prevent unethical activities.

4. A. As the name implies, these passwords consist of single words that often include a mixture of upper- and lowercase letters. The advantage of this password type is that it is easy to remember. A disadvantage of this password type is that it is easy for attackers to crack or break, resulting in compromised accounts.

5. A. Managerial controls are implemented to administer the organization’s assets and personnel and include security policies, procedures, standards, baselines, and guidelines that are established by management. These controls are commonly referred to as soft controls. Specific examples are personnel controls, data classification, data labeling, security awareness training, and supervision.

6. C. Unlike preventative controls, deterrent controls are designed to discourage but not necessarily prevent malicious activity.

7. A. An SSAE 18 audit results in a Service Organization Control (SOC) 1 report, which focuses on internal controls over financial reporting.

8. A. An SSAE 16 audit verifies the controls and processes and requires a written assertion regarding the design and operating effectiveness of the controls being reviewed.

9. A. Management or administrative controls are implemented to administer the organization’s assets and personnel and include security policies, procedures, standards, baselines, and guidelines that are established by management.

10. C. Directive controls specify acceptable practices within an organization. They are in place to formalize an organization’s security directive, mainly to its employees. The most popular directive control is an acceptable use policy (AUP), which lists proper (and often examples of improper) procedures and behaviors that personnel must follow.

Review Questions

1. NIST SP 800-53 Rev 4. The NIST SP 800-53 Rev 4 framework divides the controls into three classes: technical, operational, and management.

2. Possible answers include the following:

Access Control (AC): Technical
Awareness and Training (AT): Operational
Audit and Accountability (AU): Technical
Security Assessment and Authorization (CA): Management
Configuration Management (CM): Operational
Contingency Planning (CP): Operational
Identification and Authentication (IA): Technical
Incident Response (IR): Operational
Maintenance (MA): Operational
Media Protection (MP): Operational
Physical and Environmental Protection (PE): Operational
Planning (PL): Management
Program Management (PM): Management
Personnel Security (PS): Operational
Risk Assessment (RA): Management
System and Services Acquisition (SA): Management
System and Communications Protection (SC): Technical
System and Information Integrity (SI): Operational

3.

Code of conduct/ethics: Details standards of business conduct
Acceptable use policy: Describes what can be done by users
Work product retention: Work done for and owned by the organization
NIST SP 800-53: Divides the controls into three classes: technical, operational, and management

4. Answers can include the following:
At minimum, perform annual audits to establish a security baseline.
Determine your organization’s objectives for the audit and share them with the auditors.
Set the ground rules for the audit before the audit starts, including the dates/times of the audit.
Choose auditors who have security experience.
Involve business unit managers early in the process.
Ensure that auditors rely on experience, not just checklists.
Ensure that the auditor’s report reflects risks that your organization has identified.
Ensure that the audit is conducted properly.
Ensure that the audit covers all systems and all policies and procedures.
Examine the report when the audit is complete.

5. Possible answers include the following:

SOC 1: Reports on internal controls over financial reporting. Used by user auditors and users’ controller office.
SOC 2: Reports on security, availability, processing integrity, confidentiality, or privacy controls. Used by management, regulators, and others; shared under NDA.
SOC 3: Reports on security, availability, processing integrity, confidentiality, or privacy controls. Publicly available to anyone.

6.

Control Objectives for Information and Related Technology (COBIT): Security controls development framework that uses a process model to subdivide IT into four domains
The Open Group Architecture Framework (TOGAF): An enterprise architecture framework that helps organizations design, plan, implement, and govern an enterprise information architecture
NIST Cybersecurity Framework: Focuses exclusively on IT security
ISO/IEC 27000 Series: A security program development standard on how to develop and maintain an information security management system (ISMS)

7. Logical, or technical. Specific examples of technical controls are firewalls, IDSs, IPSs, encryption, authentication systems, protocols, auditing and monitoring, biometrics, smart cards, and passwords.

8. Possible answers include the following:
Password life: How long a password will be valid
Password history: How long before a password can be reused
Authentication period: How long a user can remain logged in
Password complexity: How the password will be structured
Password length: How long the password must be
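Password complexity and length settings like these are typically enforced in code. The sketch below is a minimal illustration; the 12-character minimum and the function name are assumptions for the example, not values from the exam objectives.

```python
import re


def check_password(password: str, min_length: int = 12) -> list:
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no digit")
    if not re.search(r"[^a-zA-Z0-9]", password):
        problems.append("no special character")
    return problems


print(check_password("Tr0ub4dor&3x!"))  # []
print(check_password("password"))
```

The second call reports every rule the weak password breaks, which is the kind of feedback a complexity policy would surface to users.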

9.

Sherwood Applied Business Security Architecture (SABSA): Enterprise security architecture framework that uses the six communication questions (What, Where, When, Why, Who, and How) that intersect with six layers (operational, component, physical, logical, conceptual, and contextual)
Information Technology Infrastructure Library (ITIL): Process management development standard developed by the Office of Management and Budget in OMB Circular A-130
Capability Maturity Model Integration (CMMI): Comprehensive set of guidelines that address all phases of the software development life cycle
National Information Assurance Certification and Accreditation Process (NIACAP): Provides a standard set of activities, general tasks, and a management structure to certify and accredit systems that maintain the information assurance and security posture of a system or site

10. code of conduct/ethics

Appendix B

CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide Exam Updates

Over time, reader feedback enables Pearson to gauge which topics give our readers the most problems when taking the exams. To assist readers with those topics, the authors create new materials clarifying and expanding on those troublesome exam topics. As mentioned in the Introduction, the additional content about the exam is contained in a PDF on this book’s companion website, at http://www.pearsonitcertification.com/title/9780136747161.

This appendix is intended to provide you with updated information if CompTIA makes minor modifications to the exam upon which this book is based. When CompTIA releases an entirely new exam, the changes are usually too extensive to provide in a simple update appendix. In those cases, you might need to consult the new edition of the book for the updated content.

This appendix attempts to fill the void that occurs with any print book. In particular, this appendix does the following:
Mentions technical items that might not have been mentioned elsewhere in the book
Covers new topics if CompTIA adds new content to the exam over time
Provides a way to get up-to-the-minute current information about content for the exam

ALWAYS GET THE LATEST AT THE BOOK’S PRODUCT PAGE

You are reading the version of this appendix that was available when your book was printed. However, given that the main purpose of this appendix is to be a living, changing document, it is important that you look for the latest version online at the book’s companion website. To do so, follow these steps:

Step 1. Browse to www.pearsonitcertification.com/title/9780136747161.
Step 2. Click the Updates tab.
Step 3. If there is a new Appendix B document on the page, download that document.

Note: The downloaded document has a version number. Compare the version of the printed Appendix B (Version 1.0) with the latest online version of this appendix and do the following:
Same version: Ignore the PDF that you downloaded from the companion website.
Website has a later version: Ignore this Appendix B in your book and read only the latest version that you downloaded from the companion website.

TECHNICAL CONTENT

The current Version 1.0 of this appendix does not contain additional technical coverage.

Glossary of Key Terms

NUMERICS

802.1X A standard that defines a framework for centralized port-based authentication.

A

acceptable use policy (AUP) A policy that is used to inform users of the actions that are allowed and those that are not allowed.

accreditation Occurs when the adequacy of a system’s overall security is accepted by management.

accuracy A description of the correctness of the data.

active defense Process of aligning your incident identification and incident response processes such that there is an element of automation built into your reaction to any specific issue.

Active Directory (AD) Microsoft implementation of SSO. See also single sign-on (SSO).

active enumeration The technique of sending packets of some sort to the network and then assessing responses.

active vulnerability scanner A type of scanner that can take action to block an attack, such as blocking a dangerous IP address.

Advanced Access Content System (AACS) Protects Blu-ray and HD DVD content. Hackers have been able to obtain the encryption keys to this system.

advanced persistent threat (APT) Threat from a highly organized attacker with significant resources that is carried out over a long period of time.

Adversary Corner of the Diamond Model that describes the intent of the attack.

adware Malware that monitors browsing habits for the purpose of ad targeting.

aggregation The process of assembling or compiling units of information at one sensitivity level and having the resultant totality of data be of a higher sensitivity level than the individual components.

air gap A device with no network connections; all access to the system must be done manually by adding and removing items with a flash drive or other external device.

Aircrack-ng A set of command-line tools for sniffing and attacking wireless networks.

analysis The step in the intelligence cycle where data is combed and analyzed to identify relevant pieces of information.

annual loss expectancy (ALE) The expected risk factor of an annual threat event. Calculated as the single loss expectancy (SLE) times the annualized rate of occurrence (ARO).

annualized rate of occurrence (ARO) The estimate of how often a given threat might occur annually.

anti-tamper technology Designed to prevent access to sensitive information and encryption keys on a device.

Application log Log that focuses on the operation of Windows applications. Events in this log are classified as error, warning, or information, depending on the severity of the event.

application programming interface (API) integration Ensures that the applications on either end of the API are synchronized, protecting the integrity of the information that passes through the API. It also enables proper updating and versioning required in many environments.

application wrapping Technique to protect mobile devices and the data they contain. Application wrappers (implemented as policies) enable administrators to set policies that allow employees with mobile devices to safely download an app, typically from an internal store.

Arachni A Ruby framework for assessing the security of a web application.

asset criticality A measure of how essential an asset is to the organization’s business.

asset tagging Process of placing physical identification numbers of some sort on all assets.

asset value (AV) Value of an asset. Multiplied by the exposure factor (EF) to calculate single loss expectancy (SLE).

asymmetric algorithms Algorithms that use both a public key and a private or secret key. The public key is known by all parties, and the private key is known only by its owner.

atomic execution A set of instructions that either execute in order and in entirety or whose changes are rolled back or prevented. Atomic operations in concurrent programming are program operations that run independently of any other processes (threads). Making an operation atomic consists of using synchronization mechanisms to make sure that the operation is seen, from any other thread, as a single, atomic operation. This increases security by preventing one thread from viewing the state of the data while the first thread is still in the middle of the operation.

Attack Complexity (AC) CVSS base metric that describes the difficulty of exploiting the vulnerability.

attack frameworks Frameworks and methodologies that include security program development standards, enterprise and security architecture development frameworks, security control development methods, corporate governance methods, and process management methods.

Attack Vector (AV) CVSS base metric that describes how the attacker would exploit the vulnerability.

attack vector A segment of the communication path that an attack uses to access a vulnerability.

attestation Process in which the software and platform components have been identified, or “measured,” using cryptographic techniques.

attribute-based access control (ABAC) Authentication system that grants or denies user requests based on arbitrary attributes of the user, arbitrary attributes of the object, and environment conditions that may be globally recognized.

Authentication Header (AH) IPsec component that provides data integrity, data origin authentication, and protection from replay attacks.

authentication period How long a user can remain logged in.

authentication server In the 802.1X framework, the centralized device that performs authentication.

authenticator In the 802.1X framework, the device through which the supplicant is attempting to access the network.

automated malware signature creation A method of identifying malware in which the AV software monitors incoming unknown files for the presence of malware and analyzes each file based on both classifiers of file behavior and classifiers of file content.

Availability (A) CVSS base metric that describes the disruption that might occur if the vulnerability is exploited.

B

backdoor/trapdoor A mechanism implemented in many devices or applications that gives the user who uses the backdoor unlimited access to the device or application.

Basel II International accord that addresses minimum capital requirements, supervisory review, and market discipline of financial institutions.

bash A scripting language that is used to work in the Linux interface.

bastion host Device exposed directly to the Internet or to any untrusted network while screening the rest of the network from exposure.

beaconing Traffic that leaves a network at regular intervals.

big data Sets of data so large or complex that they cannot be analyzed by using traditional data processing applications.

blacklisting The process of identifying a list of unacceptable e-mail addresses, Internet addresses, websites, applications, or some other identifier and blocking them as bad senders or as not allowed to send, while allowing all others. See also whitelisting.

block cipher Cipher that performs encryption by breaking the message into fixed-length units.

blue team A group of technicians that acts as the network defense team during testing.

botnet A type of malware that installs a bot with the ability to connect back to the hacker’s computer. After that, the hacker’s server controls all the bots located on these machines.

bring your own device (BYOD) policy Policy designed to allow personal devices in the network.

buffer overflow An attack that occurs when the amount of data that is submitted is larger than the buffer can handle.
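The blacklisting entry above (and its whitelisting counterpart) can be sketched in a few lines of Python. The addresses and function names here are hypothetical, purely for illustration.

```python
def filter_blacklist(senders, blacklist):
    """Blacklisting: block listed senders, allow everyone else."""
    return [s for s in senders if s not in blacklist]


def filter_whitelist(senders, whitelist):
    """Whitelisting: allow listed senders, block everyone else."""
    return [s for s in senders if s in whitelist]


senders = ["alice@example.com", "spam@bad.example", "bob@example.com"]
print(filter_blacklist(senders, {"spam@bad.example"}))
# ['alice@example.com', 'bob@example.com']
print(filter_whitelist(senders, {"alice@example.com"}))
# ['alice@example.com']
```

The two filters show the default-allow versus default-deny difference that distinguishes the two approaches.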

Building Automation and Control Networks (BACnet) protocol An application, network, and media access control (MAC) layer communications service. It can operate over a number of Layer 2 protocols, including Ethernet.

Burp Suite A suite of tools used for testing web applications.

bus encryption Protects the data traversing hardware buses.

Business Continuity Planning (BCP) committee Performs vulnerability analysis and risk assessment.

business impact analysis (BIA) Lists the critical and necessary business functions, their resource dependencies, and their level of criticality to the overall organization.

C

Cain and Abel A well-known password cracking program.

call list/escalation list A list of contact information for all individuals, such as first responders, who might need to be alerted during the investigation of an incident.

Capability Corner of the Diamond Model that describes the attacker intrusion tools and techniques.

Capability Maturity Model Integration (CMMI) A comprehensive set of guidelines that address all phases of the software development life cycle (SDLC).

carving Forensic technique used to identify a file when only fragments of data are available and no file system metadata is available.

Cellebrite A forensic tool that focuses on collecting evidence from smartphones.

certificate authority (CA) The entity in a PKI that creates and signs digital certificates, maintains the certificates, and revokes them when necessary.

certificate revocation list (CRL) A list of expired and revoked certificates.

certification Evaluation of the technical system components.

change management Formal process for managing change.

characteristic factor authentication Authentication based on something the person is.

clearing Removing data from the media so that it cannot be reconstructed using normal file recovery techniques and tools.

click-jacking An attack that crafts a transparent page or frame over a legitimate-looking page that entices the user to click something.

cloud access security broker (CASB) A software layer that operates as a gatekeeper between an organization’s on-premises network and the provider’s cloud environment.

COBIT Security controls development framework that uses a process model to subdivide IT into four domains.

code of conduct/ethics Details standards of business conduct.

cognitive password A type of password that is a piece of information that can be used to verify an individual’s identity.

collection The step in the intelligence cycle where data searching and organizing occurs.

combination password A type of password that uses a mix of dictionary words, usually two that are unrelated.

commodity malware Malware that is widely available for either purchase or free download. It is not customized or tailored to a specific attack.

Common Configuration Enumeration (CCE) SCAP component; configuration best practice statements maintained by the National Institute of Standards and Technology (NIST).

Common Platform Enumeration (CPE) SCAP component; a NIST standardized method for describing and classifying operating systems, applications, and hardware devices.

Common Vulnerabilities and Exposures (CVE) SCAP component; list of vulnerabilities in published operating systems and applications software.

Common Vulnerability Scoring System (CVSS) A system of ranking vulnerabilities that are discovered, based on predefined metrics.

Common Weakness Enumeration (CWE) SCAP component; an identification scheme for design flaws in the development of software that can lead to vulnerabilities.

Communications Assistance for Law Enforcement Act (CALEA) of 1994 Act that requires telecommunications carriers and manufacturers of telecommunications equipment to modify and design their equipment, facilities, and services to ensure that they have built-in surveillance capabilities.

community cloud A cloud deployment model in which the cloud infrastructure is shared among several organizations from a specific group with common computing needs.

compensating control A type of control that is applied to mitigate the impact or likelihood of an attack; also called a countermeasure.

complex password A type of password that includes a mixture of upper- and lowercase letters, numbers, and special characters.

Computer Fraud and Abuse Act (CFAA) Affects any entities that engage in hacking of “protected computers,” as defined in the act.

Computer Security Act of 1987 The first law to require a formal computer security plan. It was written to protect and defend the sensitive information in federal government systems. Superseded in 2002 by the Federal Information Security Management Act (FISMA).

confidence level In the context of intelligence sources, a description of the perceived integrity of any particular data.

Confidentiality (C) CVSS base metric that describes the information disclosure that may occur if the vulnerability is exploited.

configuration baseline A floor or minimum standard that is required.

configuration lockdown Prevents any changes to the configuration of a device, even by users who formerly had the right to configure the device.

containerization Server virtualization technique in which the kernel allows for multiple isolated user space instances.

contamination Intermingling or mixing of data of one sensitivity or need-to-know level with that of another.

Content Scrambling System (CSS) Uses encryption to enforce playback and region restrictions on DVDs.

continuous deployment/delivery Methods to make sure that you can release new changes to your customers quickly in a sustainable way. Continuous deployment goes one step further: every change that passes all stages of your production pipeline is released to your customers.

continuous integration Software development practice whereby the work of multiple individuals is combined a number of times a day.

control plane Network architecture plane that carries signaling traffic originating from or destined for a router. This is the information that enables routers to share information and build routing tables.

Controller Area Network (CAN bus) Designed to allow vehicle microcontrollers and devices to communicate with each other’s applications without a host computer.

copyright Legal protection that ensures that a work that is authored is protected from any form of reproduction or use without the consent of the copyright holder.

corporate-owned, personally enabled (COPE) A strategy in which an organization purchases mobile devices and users manage those devices.

corrective control A type of control put into place to reduce the effect of an attack or other undesirable event.

cracker An individual who attempts to break into secure systems without using the knowledge gained for any nefarious purposes.

credential stuffing Entering a large number of spilled credentials automatically into websites until they are potentially matched to an existing account, which the attacker can then hijack for his or her own purposes.

credentialed scan A scan performed with administrator access.

criticality A measure of the importance of the data.

cross-site request forgery (CSRF) An attack that exploits the website’s trust of the browser. The website thinks that the request came from the user’s browser and was actually made by the user.

cross-site scripting (XSS) An attack that occurs when an attacker locates a website vulnerability and injects malicious code into the web application.

D

data correlation The process of locating variables in the information that seem to be related.

data enrichment A technique that allows one process to gather information from another process or source and then customize a response using the data from the second process or source.

data exfiltration The theft of data from a device or network.

data loss prevention (DLP) Software that attempts to prevent data leakage.

data masking Altering data from its original state to protect it.

data plane Network architecture plane that carries user traffic; also known as the forwarding plane.

Data Protection API (DPAPI) API that lets you encrypt data using the user’s login credentials.

data sovereignty The concept that data stored in digital format is subject to the laws of the country in which the data is located.

dd A Linux command that is used to convert and copy files.

debugging A process that steps through the code interactively.

decompiling A process that attempts to reconstruct high-level language source code.

decomposition The process of breaking down software to discover how it works, perhaps who created it, and, in some cases, how to prevent the software from performing malicious activity.

deidentification The process of deleting or masking personal identifiers, such as personal names, from a set of data.

demilitarized zone (DMZ) A network logically separate from the intranet where resources that will be accessed from the outside world are made available to unauthenticated users.

denial-of-service (DoS) attack An attack in which attackers flood a device with enough requests to degrade the performance of the targeted device.
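The dd entry above can be illustrated with a short, hedged sketch; analysts often use dd to make bit-for-bit copies of media during forensics. The scratch paths below are purely illustrative.

```shell
# Create a 1 MiB file of zeros, then copy it block by block with dd,
# the same pattern used (with real device paths) for disk imaging.
dd if=/dev/zero of=/tmp/source.img bs=1024 count=1024
dd if=/tmp/source.img of=/tmp/copy.img bs=4096

# Verify the copy is byte-identical to the source
cmp /tmp/source.img /tmp/copy.img && echo "copies match"
```

In real imaging work the `if=` operand would point at a device such as a suspect drive, and the output would be hashed to preserve evidence integrity.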

dereference Occurs when a pointer with a value of NULL is used as though it pointed to a valid memory area.

destruction The destroying of the media on which data resides.

detective control A type of control that is in place to detect an attack while it is occurring.

deterrent control A type of control that deters or discourages an attacker.

DevSecOps A development concept that grew out of the DevOps approach to software development and emphasizes security in all phases.

DHCP snooping Used to prevent a poisoning attack on the DHCP database.

Diamond Model of Intrusion Analysis Intrusion analysis model that emphasizes the relationships and characteristics of four basic components: the adversary, capabilities, infrastructure, and victims.

digital rights management (DRM) Used to control the use of digital content.

digital signature A hash value encrypted with the sender’s private key.

digital watermarking Involves embedding a logo or trademark in documents, pictures, or other objects.

directive control A type of control that specifies acceptable practice within an organization.

directory traversal One of the ways malicious individuals are able to access parts of a directory to which they should not have access.

disassembly Reading the machine code into memory and then outputting each instruction as a text string.

dissemination The step in the intelligence cycle where information is shared with those responsible for designing security controls to address issues.

DNP3 A master/slave protocol used in building automation that uses port 19999 when using Transport Layer Security (TLS) and port 20000 when not using TLS.

DOM XSS XSS attack in which the entire tainted data flow from source to sink takes place in the browser. The source of the data is in the DOM, the sink is also in the DOM, and the data flow never leaves the browser.

domain bridging Using as a hotspot a device that has been made a member of the domain, allowing access to the organizational network to anyone using the hotspot.

domain generation algorithm (DGA) Algorithm that is used by attackers to periodically generate large numbers of domain names that can be used as rendezvous points with their command and control servers.

Domain-based Message Authentication, Reporting, and Conformance (DMARC) An e-mail authentication and reporting protocol that improves e-mail security within federal agencies.

DomainKeys Identified Mail (DKIM) Allows e-mail source verification by providing a method for validating a domain name identity that is associated with a message through cryptographic authentication.

dual-homed firewall A type of firewall with two interfaces, one pointing to the internal network and another connected to the untrusted network.

dynamic analysis Software code analysis done with the code executing.

Dynamic ARP Inspection (DAI) A security feature that intercepts all ARP requests and responses and compares each response’s MAC address and IP address information against the MAC–IP bindings contained in a trusted binding table.
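To make the DGA entry above concrete, here is a toy sketch of how an attacker-style algorithm can derive a deterministic list of rendezvous domains from a shared seed and the current date. The seed string, hash choice, and ".example" suffix are illustrative assumptions, not taken from any real malware family.

```python
import hashlib
from datetime import date


def generate_domains(seed: str, day: date, count: int = 5) -> list:
    """Toy DGA: derive deterministic domain names from a seed and the date.

    Malware and its command and control server can both run this
    independently and arrive at the same rendezvous points.
    """
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(material).hexdigest()
        domains.append(digest[:12] + ".example")
    return domains


print(generate_domains("malware-family-x", date(2020, 1, 1)))
```

Because the output changes daily but is reproducible by anyone who knows the seed, defenders who reverse the algorithm can pre-register or sinkhole the upcoming domains.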

E

Economic Espionage Act of 1996 Affects companies that have trade secrets and any individuals who plan to use encryption technology for criminal activities.

eFuse Allows for the dynamic real-time reprogramming of computer chips.

Electronic Communications Privacy Act (ECPA) of 1986 Affects law enforcement and intelligence agencies; extended government restrictions on wiretaps from telephone calls to include transmissions of electronic data by computer and prohibited access to stored electronic communications.

e-mail signature block A set of information such as name, e-mail address, company title, and credentials that usually appears at the end of an e-mail.

emanations Electromagnetic signals that are emitted by an electronic device.

embedded Functionality that is integrated into a program or a device.

embedded link A link embedded in one website that leads to another site.

embedded system A piece of software built into a larger piece of software that is in charge of performing some specific function on behalf of the larger system.

employee privacy issues and expectation of privacy Concept that organizations must give employees the proper notice of any monitoring that might be used.

Encapsulating Security Payload (ESP) IPsec component that provides all that AH does as well as data confidentiality.

EnCase Forensic A case (incident) management tool that offers built-in templates for specific types of investigations.
endpoint detection and response (EDR) A proactive endpoint security approach designed to supplement existing defenses.
enumeration The process of discovering what is in the network, along with any other pieces of information that might be helpful in a network attack or compromise.
EU Electronic Signature Directive An EU directive that defines electronic signature principles.
executable process analysis Analysis that determines what process is using/taxing the CPU.
exposure factor (EF) The percentage value or functionality of an asset that will be lost when a threat event occurs.
Extensible Access Control Markup Language (XACML) A standard for an access control policy language using XML.
Extensible Markup Language (XML) attack An attack that targets the use of XML in a website. In one example, it compromises the application that parses or reads and interprets the XML. If the XML input contains a reference to an external entity and is processed by a weakly configured XML parser, it can lead to the disclosure of confidential data, denial of service, server-side request forgery, and port scanning. This is called an XML external entity attack.
external scan A vulnerability scan performed from outside the organization’s network to assess the likelihood of an external attack.
extranet A network logically separate from the intranet where resources that will be accessed from the outside world are made available to authenticated users.

F

false negative Occurs when a scanner does not identify a vulnerability that actually exists.
false positive Occurs when a scanner identifies a vulnerability that does not exist.
FATKit A memory forensics tool that automates the process of extracting interesting data from volatile memory.
fault tolerance Provided when a backup component begins operation when the primary component fails.
Federal Information Security Management Act (FISMA) of 2002 Requires all federal agencies to develop, document, and implement an agencywide information security program.
Foreign Intelligence Surveillance Act (FISA) of 1978 The first act to give procedures for the physical and electronic surveillance and collection of “foreign intelligence information” between “foreign powers” and “agents of foreign powers”; applied only to traffic within the United States. It was amended by the USA PATRIOT Act of 2001 and the FISA Amendments Act of 2008.
Federal Privacy Act of 1974 Provides guidelines on collection, maintenance, use, and dissemination of PII about individuals that is maintained in systems of records by federal agencies.
field programmable gate array (FPGA) A type of programmable logic device (PLD) that is programmed by blowing fuse connections on the chip or using an antifuse that makes a connection when a high voltage is applied to the junction. A PLD is an integrated circuit with connections or internal logic gates that can be changed through a programming process.
FIN scan A type of scan that sets the FIN bit only.

firewall A device or software whose purpose is to inspect and control the type of traffic allowed.
flow analysis Type of analysis that focuses on ensuring that confidential and private information is isolated from other information.
forensic investigation suite A collection of tools that are commonly used in digital forensic investigations.
Forensic Toolkit (FTK) A commercial toolkit that can scan a hard drive for all sorts of information.
formal method Method of software analysis that follows prescribed procedures.
forwarding Routing e-mail through another organization’s e-mail system.
framework A methodology designed to help guide security professionals.
Function as a Service (FaaS) An extension of Platform as a Service (PaaS) that goes further and completely abstracts the virtual server from the developers. Charges are based not on server instance sizes but on consumption and executions.
fuzzing Injecting invalid or unexpected input (sometimes called faults) into an application to test how the application reacts.
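To make the fuzzing entry concrete, here is a minimal Python sketch that feeds random printable strings to a toy parser and tallies which exception types it raises. The `parse_age` function is a hypothetical target under test, not part of any real library:

```python
import random
import string

def parse_age(text):
    """Hypothetical function under test: parse an age field."""
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def fuzz(target, iterations=1000):
    """Feed random, often malformed, input to the target and
    record which exception types it raises."""
    random.seed(1)  # fixed seed so the run is reproducible
    outcomes = {}
    for _ in range(iterations):
        length = random.randint(0, 10)
        data = "".join(random.choice(string.printable) for _ in range(length))
        try:
            target(data)
            result = "ok"
        except Exception as exc:
            result = type(exc).__name__
        outcomes[result] = outcomes.get(result, 0) + 1
    return outcomes

print(fuzz(parse_age))  # counts per outcome, e.g. mostly ValueError
```

Real fuzzers (AFL, libFuzzer, and the like) add coverage feedback and input mutation, but the reaction-tallying loop above is the core idea.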

G
geofencing The application of geographic limits to where a device can be used.
geotagging The process of adding geographical identification metadata to various media.
Gramm-Leach-Bliley Act (GLBA) of 1999 Affects all financial institutions, including banks, loan companies, insurance companies, investment companies, and credit card providers.

graphical password A type of password that uses graphics as part of the authentication mechanism; also called CAPTCHA password.

H
hacker An individual who attempts to break into secure systems to obtain knowledge about the systems and possibly use that knowledge to carry out pranks or commit crimes.
hardening Removing unnecessary functions to reduce the attack surface.
hardware security module (HSM) An appliance that safeguards and manages digital keys used with strong authentication and provides crypto processing.
hashing The process of using a hashing algorithm to reduce a large document or file to a character string that can be used to verify the integrity of the file.
Health Care and Education Reconciliation Act of 2010 Affects healthcare and educational organizations. This act increased some of the security measures that must be taken to protect healthcare information.
Health Insurance Portability and Accountability Act (HIPAA) Legislation that specifies security protocols for all organizations that handle protected health information (PHI).
heap overflow A buffer overflow that occurs in the heap data area. Heap overflows are exploitable in a different manner than stack-based overflows.
Helix A live CD with which you can acquire evidence and make drive images without affecting the data on the host.
heuristics Analysis that determines the susceptibility of a system to a particular threat/risk using decision rules or weighing methods.
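The hashing entry above can be sketched with Python's standard hashlib: a file of any size is reduced to a fixed-length SHA-256 digest that can be recomputed later to verify integrity. The chunk size and temporary file contents here are arbitrary demo values:

```python
import hashlib
import tempfile

def file_digest(path):
    """Reduce a file of any size to a fixed-length SHA-256 hex
    digest, reading in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: hash a small temporary file, then re-hash to verify integrity
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as tmp:
    tmp.write(b"important document contents")
    path = tmp.name

original = file_digest(path)
assert original == file_digest(path)  # unchanged file -> same digest
print(original)
```

Any single-bit change to the file would produce a completely different digest, which is what makes hashes useful for integrity verification and forensic imaging.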

HIPAA Breach Notification Rule Requires HIPAA covered entities and their business associates to provide notification following a breach of unsecured protected health information (PHI).
honeypot A system that is configured to be attractive to hackers and to lure them into spending time attacking it while information is gathered about the attack.
host scanning A process that involves identifying the live hosts on a network or in a domain namespace.
host-based firewall A type of firewall that resides on a single host and is designed to protect that host only.
hunt teaming A proactive threat hunting tactic in which a team works together to detect, identify, and understand advanced and determined threat actors. It is an offensive approach to security, in contrast to the defensive posture that has traditionally been common for security teams.
hybrid cloud A cloud deployment model in which an organization provides and manages some resources in-house and has others provided externally via a public cloud.

I
imaging Creating a bit-level image of a disk.
impact analysis Analysis that determines the impact of an event.
impersonation Sending e-mail that appears to come from someone else.
incident command system (ICS) Designed to provide a way to enable effective and efficient domestic incident management by integrating a combination of facilities, equipment, personnel, procedures, and communications operating within a common organizational structure.

incident form A form that is used to describe the incident in detail.
incident response A formal process or set of procedures for responding to cybersecurity incidents.
incident summary report A document that summarizes the incident.
indicator management The process of collecting and analyzing indicators of compromise (IOCs).
indicator of compromise (IOC) Any activity, artifact, or log entry that is typically associated with an attack of some sort.
inference Occurs when someone has access to information at one level that allows her to infer information about another level.
Infrastructure The corner of the Diamond Model that describes the set of systems an attacker uses to launch attacks.
Infrastructure as a Service (IaaS) A cloud service model in which the vendor provides the hardware platform or data center, and the company installs and manages its own operating systems and application systems.
infrastructure as code (IaC) Manages and provisions computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
input validation The process of checking all input for issues such as proper format and proper length.
insecure object reference A process that occurs when a user has permission to use an application but is accessing information to which she should not have access.
integer overflow Occurs when math operations try to create a numeric value that is too large for the available space.
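The input validation entry above can be sketched in a few lines of Python. The 3-to-20 word-character username policy is an assumed example, not a universal rule; the point is to check format and length before the value reaches any query or command:

```python
import re

# Assumed policy: 3-20 letters, digits, or underscores
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw):
    """Reject input that fails the format/length policy rather
    than trying to 'clean it up' after the fact."""
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("username must be 3-20 word characters")
    return raw

print(validate_username("analyst_01"))        # accepted as-is
# validate_username("x'; DROP TABLE users")   # would raise ValueError
```

Whitelist-style validation like this (allow known-good, reject everything else) is generally preferred over blacklisting known-bad characters.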

integrated intelligence The consideration and analysis of intelligence data from a perspective that combines multiple data sources and attempts to make inferences based on this data integration.
Integrity (I) CVSS base metric that describes the type of data alteration that might occur.
intellectual property A tangible or intangible asset to which the owner has exclusive rights.
internal scan A vulnerability scan performed from inside the organization’s network to assess the likelihood of an insider attack.
Internet Key Exchange (IKE) An IPsec component that provides the authentication material used to create the keys exchanged by ISAKMP during peer authentication.
Internet of Things (IoT) Refers to a system of interrelated computing devices, mechanical and digital machines, and objects that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
Internet Protocol Security (IPsec) A suite of protocols used to create an encrypted connection, providing encryption, data integrity, and system-based authentication.
Internet Security Association and Key Management Protocol (ISAKMP) An IPsec component that handles the creation of a security association for the session and the exchange of keys.
intrusion detection system (IDS) A system that creates a log of every security event that occurs.
intrusion prevention system (IPS) A system that takes action when a security event occurs.

ISO/IEC 27000 Series A family of security program development standards providing guidance on how to develop and maintain an information security management system (ISMS).
ISO/IEC 27001:2013 The latest version of the 27001 standard, one of the most popular standards by which organizations obtain certification for information security. It provides guidance on ensuring that an organization’s information security management system (ISMS) is properly built, established, maintained, and continually improved.
ISO/IEC 27002:2013 The latest version of the ISO/IEC 27002 standard that provides a code of practice for information security management.
isolation/sandboxing Placing malware where it can be safely probed and analyzed.
ITIL A process management framework for IT service management, originally developed by the UK government’s Central Computer and Telecommunications Agency (CCTA).

J
jailbreaking Privilege escalation of an Apple device for the purpose of removing software restrictions imposed by Apple.
John the Ripper A password cracker that works on Unix/Linux as well as macOS.
jumpbox A server that is used to access devices that have been placed in a secure network zone such as a DMZ.

K
kernel debugger A debugger that operates at ring 0.
key escrow The process of storing keys with a third party to ensure that decryption can occur.

key stretching A cryptographic technique that makes a weak key stronger by increasing the time it takes to test each possible key.
kill chain A model that describes the stages of an intrusion.
knowledge factor authentication Authentication based on something committed to memory.
known threats Threats of which we are aware.
KnTTools A memory acquisition and analysis tool used with Windows systems.
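Key stretching can be sketched with PBKDF2 from Python's standard hashlib: the hash is iterated many times, so each guess in a brute-force attack costs correspondingly more time. The salt size and iteration count shown are assumed demo values; real deployments should follow current key-derivation guidance:

```python
import hashlib
import os

def stretch_password(password, salt=None, iterations=200_000):
    """Derive a key from a password by iterating HMAC-SHA-256 many
    times (PBKDF2), making each brute-force guess more expensive."""
    if salt is None:
        salt = os.urandom(16)  # random per-password salt
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

salt, key = stretch_password("correct horse")
_, again = stretch_password("correct horse", salt)
assert key == again  # same password and salt reproduce the same key
print(len(key), "byte derived key")
```

The random salt ensures two users with the same password get different derived keys, defeating precomputed (rainbow table) attacks.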

L
Layer 2 Tunneling Protocol (L2TP) A protocol that operates at Layer 2 of the OSI model. Like PPTP, L2TP can use various authentication mechanisms; however, L2TP does not provide any encryption. It is typically used with IPsec.
legacy systems Older systems that may be less secure than newer systems.
legal hold A legal requirement placed on an organization to maintain archived data for longer periods for legal proceedings.
lessons learned report Lists and discusses what was learned about how and why the incident occurred and how to prevent it from occurring again.
logic bomb A type of malware that executes when a particular event takes place.
LonWorks/LonTalk A peer-to-peer protocol used in building automation; uses port 1679.

M
machine learning The capability of software to gather information and draw conclusions.

maintenance hook A backdoor account created by programmers to give someone full permissions in a particular application or operating system.
management plane The network architecture plane that administers the router.
managerial (administrative type) controls A type of control implemented to administer the organization’s assets and personnel; includes security policies, procedures, standards, baselines, and guidelines established by management.
mandatory access control (MAC) An authentication system in which authorization is based on security labels.
man-in-the-middle attack An attack that intercepts legitimate traffic between two entities.
mantrap A physical access control system that consists of a series of two doors with a small room between them. The user is authenticated at the first door and then allowed into the room. At that point, additional verification occurs.
maturity models Process models that help organizations develop and assess security capabilities.
maximum tolerable downtime (MTD) The maximum amount of time that an organization can tolerate a single resource or function being down.
mean time between failures (MTBF) The estimated amount of time a device will operate before a failure occurs.
mean time to repair (MTTR) The average time required to repair a single resource or function.
measured boot A term that applies to several technologies that follow the Secure Boot standard.
Memdump A free tool that runs on Windows, Linux, and Solaris that simply creates a bit-by-bit copy of the volatile memory on a system.

memorandum of understanding (MOU) A document that, while not legally binding, indicates a general agreement between the principals to do something together.
memory dumping Analyzing the entire memory content used by an application.
MicroSD HSM A hardware security module that connects to the microSD port on a device that has such a port.
microservices A variant of the service-oriented architecture (SOA) structural style that arranges an application as a collection of loosely coupled services. The focus is on building single-function modules with well-defined interfaces and operations.
MITRE ATT&CK A knowledge base of adversary tactics and techniques based on real-world observations. It is an open system, and attack matrices are created for various industries.
mobile code Software that is transmitted across a network to be executed on a local system.
mobile device management (MDM) A system that is used to control mobile device settings, applications, and other parameters when those devices are attached to the enterprise.
Modbus A master/slave protocol used in building automation that uses port 502.
multifactor authentication (MFA) An authentication process that requires more than a single authentication factor.
multihomed firewall A type of firewall with three interfaces: one connected to the untrusted network, one connected to the internal network, and one connected to the DMZ.

N
National Information Assurance Certification and Accreditation Process (NIACAP) A standard set of activities and general tasks, along with a management structure, to certify and accredit systems that maintain the information assurance and security posture of a system or site.
near field communication (NFC) A short-range type of wireless transmission that is used in payment systems such as Apple Pay and Google Pay.
Nessus Professional A proprietary network scanner developed by Tenable Network Security.
NetFlow A technology developed by Cisco that is supported by all major vendors and can be used to collect and subsequently export IP traffic accounting information.
network access control (NAC) A service that goes beyond authentication of the user and includes examination of the state of the computer the user is introducing to the network when making a remote-access or VPN connection to the network.
next-generation firewall (NGFW) A category of devices that attempt to address the traffic inspection and application awareness shortcomings of a traditional stateful firewall, without hampering performance.
Nikto A vulnerability scanner that is dedicated to web servers.
NIST Cybersecurity Framework version 1.1 A framework that focuses exclusively on IT security.
NIST SP 800-53 Rev 4 A security controls development framework that divides the controls into three classes: technical, operational, and management.
NIST SP 800-57 Rev 5 Contains recommendations for key management and is published in three parts.
NIST SP 800-128 Provides guidance on security-focused configuration management of information systems.
Nmap A tool that can be used to scan for open ports and perform many other operations, including performing certain attacks.

Node.js A scripting language framework for writing network applications using JavaScript.
non-credentialed scan A scan performed without administrator access.
null scan A scan that sends a series of TCP packets that contain a sequence number of 0 and no set flags.
numeric password A type of password that includes only numbers.

O
oclHashcat A general-purpose computing on graphics processing units (GPGPU)-based multi-hash cracker that uses a brute-force attack.
one-time password (OTP) A type of password that is used only once to log in to the access control system.
Online Certificate Status Protocol (OCSP) An Internet protocol that obtains the revocation status of an X.509 digital certificate.
OpenID An open standard and decentralized protocol by the nonprofit OpenID Foundation that allows users to be authenticated by certain cooperating sites.
OpenIOC An open framework meant for sharing threat intelligence information in a machine-readable format.
open-source intelligence Intelligence sources that are available to all.
OpenVAS An open source scanner developed from the Nessus code base, available as a package for many Linux distributions.
operational control A type of control that is part of the organization’s day-to-day security stance.
organizational governance The process of controlling an organization’s activities, processes, and operations.

output encoding The process of changing data into another form using code; applied to output to prevent the inclusion of dangerous character types that might be inserted by malicious individuals.
overflow attack Occurs when an area of memory of some sort is full and can hold no more information.
OWASP Zed Attack Proxy (ZAP) An application that stands between the web server and the client and passes all requests and responses back and forth, while analyzing the information to test the security of the web application.
ownership factor authentication Authentication based on something in your possession.
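The output encoding entry above can be sketched with Python's standard html module: user-supplied text is encoded before being placed in an HTML page, so characters such as < and > render as text instead of markup. The comment-rendering wrapper is a hypothetical illustration:

```python
import html

def render_comment(user_input):
    """Encode user-supplied text before embedding it in HTML so a
    script tag is displayed as literal text, not executed (XSS)."""
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

print(render_comment('<script>alert("xss")</script>'))
# prints <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Output encoding must match the output context (HTML body, attribute, JavaScript, URL); `html.escape` covers only the HTML body/attribute case.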

P
packet analysis Analysis that examines an entire packet, including the payload.
Pacu An exploit framework used to assess and attack Amazon Web Services (AWS) cloud environments.
parameterized queries Queries in which the SQL statement is defined with placeholders and input values are supplied as parameters at execution time, so user input cannot change the structure of the query.
passive enumeration The technique of capturing traffic and making educated assumptions from the traffic.
passive vulnerability scanner A type of scanner that cannot take action to block an attack, such as blocking a dangerous IP address.
passphrase password A type of password that uses a long phrase. Because of the password’s length, it is easier to remember but much harder to attack.
password complexity How the password will be structured.
password history How long before a password can be reused.
password length How long the password must be.
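A minimal sketch of a parameterized query with Python's built-in sqlite3 module (the table and rows are hypothetical demo data): the ? placeholder keeps the input as a value, so an injection-style string simply matches no rows instead of altering the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'analyst')")

def find_user(name):
    # The ? placeholder binds the input as data only: a value such
    # as "alice' OR '1'='1" cannot change the query's structure.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))              # [('alice', 'admin')]
print(find_user("alice' OR '1'='1"))   # [] -- injection attempt fails
```

Contrast this with string concatenation ("... WHERE name = '" + name + "'"), where the same malicious input would rewrite the WHERE clause and return every row.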

password life How long a password will be valid.
password spraying A technique used to identify the passwords of domain users. Rather than targeting a single account as in a brute-force attack, it targets or “sprays” multiple accounts with the same password attempt.
patching Applying updates that fix security or functional issues.
patent A right granted to an individual or a company to protect the rights to an invention.
Payment Card Industry Data Security Standard (PCI DSS) A standard that affects any organizations that handle cardholder information for the major credit card companies.
peer-to-peer botnet A botnet in which devices that can be reached externally are compromised and run server software that turns them into command and control servers for the devices that are recruited internally and cannot communicate with the command and control server operating externally.
Perl A scripting language found on all Linux servers that helps in text manipulation tasks.
permissions Access rights granted or denied at the file, folder, or other object level.
persistent XSS An XSS attack in which the hacker stores the user input on the target server, such as in a database, a message forum, a visitor log, or a comment field, and then a victim is able to retrieve the stored data from the web application without that data being made safe to render in the browser. Also called a stored or Type I attack.
personal health information (PHI) The medical records of individuals; also referred to as protected health information.
Personal Information Protection and Electronic Documents Act (PIPEDA) Affects how private-sector organizations collect, use, and disclose personal information in the course of commercial business in Canada.
personally identifiable information (PII) Any piece of data that can be used alone or with other information to identify a single person.
phishing A social engineering attack in which attackers try to learn personal information, including credit card information and financial data.
physical control A type of control that is implemented to protect an organization’s facilities and personnel.
ping sweep A scan that uses ICMP to identify all live hosts by pinging all IP addresses in the known network.
piping The process of sending the output of one function to another function as its input.
Platform as a Service (PaaS) A cloud service model in which the vendor provides the hardware platform or data center and the software running on the platform, including the operating systems and infrastructure software.
Point-to-Point Tunneling Protocol (PPTP) A Microsoft protocol based on PPP that uses built-in Microsoft Point-to-Point Encryption and can use a number of authentication methods, including CHAP, MS-CHAP, and EAP-TLS.
policy decision point (PDP) An entity in XACML that retrieves all applicable policies and compares the request with the policies.
policy enforcement point (PEP) An entity in XACML that protects the resource that the subject (a user or an application) is attempting to access.
port scan A scan that attempts to connect to every port on each device and report which ports are open, or “listening.”
port security Allows you to keep a port enabled for legitimate devices while preventing its use by illegitimate devices.
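The port scan entry above can be sketched as a simple TCP connect scan with Python's socket module. To keep the example self-contained it scans a throwaway listener on localhost rather than a real host; the timeout value is an assumed default:

```python
import socket

def scan_ports(host, ports):
    """Attempt a full TCP connection to each port; ports that accept
    the connection are open ('listening')."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)  # don't hang on filtered ports
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demo: stand up a listener we control, then scan for it
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
server.listen(1)
listening_port = server.getsockname()[1]

found = scan_ports("127.0.0.1", [listening_port])
print(found)
server.close()
```

Tools such as Nmap use the same connect technique (among faster SYN-based ones) across entire port ranges and host lists; only scan systems you are authorized to test.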

preventative control A type of control that prevents an attack from occurring.
privacy Relates to rights to control the sharing and use of one’s personal information.
private cloud A cloud deployment model in which a private organization implements a cloud in its internal enterprise, and that cloud is used by the organization’s employees and partners.
privilege escalation The process of exploiting a bug or weakness in an operating system to allow a user to receive privileges to which she is not entitled.
Privileges Required (PR) A CVSS base metric that describes the authentication an attacker would need to get through to exploit the vulnerability.
Process Explorer A Sysinternals tool that enables you to look at the graph that appears in Task Manager and identify what caused spikes in the past, which is not possible with Task Manager alone.
processor security extensions A set of security-related instruction codes that are built into some modern central processing units (CPUs).
programmable logic controllers (PLCs) Industrial control system (ICS) components that connect to the sensors and convert sensor data to digital data; they do not include telemetry hardware.
proprietary systems Solutions that have been developed by the organization and that do not follow standards.
proprietary/closed-source intelligence Intelligence sources that are available to only a select audience.
protocol analysis Analysis that examines information in the header of a packet.
Prowler A tool that creates reports listing gaps found between the best practices of AWS as stated in the CIS Amazon Web Services Foundations Benchmark 1.1.
proximity reader A door control that reads a proximity card from a short distance and is used to control access to a sensitive room.
proxy Any device or application that acts as an intermediary for requests from clients seeking resources.
public cloud A cloud deployment model in which a service provider makes resources available to the public over the Internet.
public key infrastructure (PKI) A collection of systems, software, and communication protocols that distribute, manage, and control public key cryptography.
purging A data destruction technique that makes the data unreadable even with advanced forensic techniques.
push notification services Services that allow unsolicited messages to be sent by an application to a mobile device even when the application is not open on the device.
Python A scripting language that supports procedure-oriented programming and object-oriented programming.

Q
qualitative risk analysis Risk analysis that does not assign monetary and numeric values to all facets of the risk analysis process.
Qualys A cloud-based vulnerability scanner.
quantitative risk analysis Risk analysis that assigns monetary and numeric values to all facets of the risk analysis process, including asset value, threat frequency, vulnerability severity, impact, and safeguard costs.
query writing Search functions that help to locate the relevant information in log data.

R
race condition An attack in which the hacker inserts himself between instructions, introduces changes, and alters the order of execution of the instructions, thereby altering the outcome.
radio frequency identification (RFID) An object-tracking technology that uses radio frequency chips and readers to manage inventory.
ransomware A type of malware that prevents or limits users from accessing their systems. It is called ransomware because it forces its victims to pay a ransom through certain online payment methods.
real user monitoring (RUM) A monitoring method that captures and analyzes every transaction of every application or website user.
real-time operating system (RTOS) A system designed to process data as it comes in, typically without buffer delays.
Reaver Both a package of tools called Reaver and a tool within the package called Reaver that is used to attack Wi-Fi Protected Setup (WPS).
recoverability The ability of a function or system to be recovered in the event of a disaster or disruptive event.
recovery point objective (RPO) The point in time to which the disrupted resource or function must be returned.
recovery time objective (RTO) The shortest time period after a disaster or disruptive event within which a resource or function must be restored in order to avoid unacceptable consequences.
red team A group of technicians who act as the attacking force during testing.
reflective XSS An XSS attack in which a web application immediately returns user input in an error message or search result, without that data being made safe to render in the browser, and without permanently storing the user-provided data.
registration authority (RA) The entity in a PKI that verifies the requestor’s identity and registers the requestor.
relevancy A description of the applicability of the data to a particular threat.
remote code execution A category of attack types distinguished by the ability of the hacker to get the local system (user system) to execute code that resides on another machine, which could be located anywhere in the world.
remote terminal units (RTUs) Industrial control system (ICS) components that connect to the sensors and convert sensor data to digital data, including telemetry hardware.
remote wipe Instructions sent remotely to a mobile device that erase all the data, typically used when a device is lost or stolen.
Representational State Transfer (REST) A client/server model for interacting with content on remote systems, typically using HTTP.
Responder A tool that can be used for answering NBT and LLMNR name requests.
responsive control A type of control that is implemented after an event; also called a recovery control.
reverse engineering The process of taking something apart to discover how it works and perhaps to replicate it; retracing the steps in an incident, as seen from the logs.
RFID See radio frequency identification (RFID).
rights Manage who is allowed to perform certain operations on an entire computer or within a domain, rather than on a particular object within a computer.

risk acceptance Understanding and accepting the level of risk as well as the cost of damages that can occur.
risk assessment A tool used in risk management to identify vulnerabilities and threats, assess the impact of those vulnerabilities and threats, and determine which controls to implement.
risk assessment matrix A table used to assess risks qualitatively.
risk avoidance Terminating an activity that causes a risk or choosing an alternative that is not as risky.
risk management A formal process that rates identified vulnerabilities by the likelihood of their compromise and the impact of said compromise.
risk mitigation Defining the acceptable risk level the organization can tolerate and reducing the risk to that level.
risk transfer Passing on the risk to a third party, such as an insurance company.
rogue access point An unauthorized AP connected to the organization’s wireless network that the organization does not control and manage.
rogue device Device present in the environment that you do not control.
rogue endpoint An endpoint device that is not under your control as administrator.
role-based access control (RBAC) An authentication system in which users are organized by job role into security groups, which are then granted the rights and permissions required to perform that job.
rooting or jailbreaking Attaining root privileges on a smartphone.

rootkit A set of tools that a hacker can use on a computer after she has managed to gain access and elevate her privileges to administrator.
Roots of Trust (RoTs) The foundation of assurance of the trustworthiness of a mobile device.
Ruby A scripting language that is great for web development.
runtime data integrity check The process that ensures the integrity of the peripheral memory contents during runtime execution.
runtime debugging The process of using a programming tool to not only identify syntactic problems in code but also discover weaknesses that can lead to memory leaks and buffer overflows.

S

SABSA An enterprise security architecture framework that uses the six communication questions (What, Where, When, Why, Who, and How), which intersect with six layers (operational, component, physical, logical, conceptual, and contextual).
sandboxing Placing a device or software in an environment separate from the balance of the network.
sanitization The process of removing all traces of a threat by overwriting the drive multiple times.
Sarbanes-Oxley Act (SOX) Also known as the Public Company Accounting Reform and Investor Protection Act of 2002; affects any organization that is publicly traded in the United States. It controls the accounting methods and financial reporting for these organizations and stipulates penalties, and even jail time, for executive officers.
scope The areas to be included in a scan; determines the impact and is a function of how widespread the incident is.
ScoutSuite An open-source multi-cloud security auditing tool that collects configuration data from a cloud environment through the provider's APIs so that its security posture can be assessed.
screened host firewall A firewall that sits between the final router and the internal network.
screened subnet An architecture in which two firewalls are used, and traffic must be inspected at both firewalls before it can enter the internal network.
scripting Using scripting languages to automate a process.
Secure Boot An authentication method that requires all boot loader components (e.g., OS kernel, drivers) to attest to their identity (digital signature); the attestation is compared to a trusted list.
secure enclave A part of an operating system that cannot be compromised even when the operating system kernel is compromised, because the enclave has its own CPU and is separated from the rest of the system.
Secure European System for Applications in a Multivendor Environment (SESAME) A project that extended Kerberos's functionality to fix its weaknesses. SESAME uses both symmetric and asymmetric cryptography to protect interchanged data and uses a trusted authentication server at each host.
secure processing A concept that uses a variety of technologies to prevent the processing of sensitive information or, alternately, to prevent any insecure actions on the part of the CPU or processor.
Secure Shell (SSH) An application protocol that is used to remotely log in to another computer using a secure tunnel.
Secure Sockets Layer/Transport Layer Security (SSL/TLS) A transport layer protocol that provides encryption, server and client authentication, and message integrity.
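The sanitization entry above describes multi-pass overwriting. As a rough illustration only, here is how a single file's contents could be overwritten with random data several times; wiping an actual drive requires dedicated tools operating at the block-device level, and this sketch is not a substitute for them:

```python
import os

def overwrite_file(path, passes=3):
    """Overwrite a file's contents in place with random bytes, several times.

    Illustrates the idea of multi-pass sanitization on a single file.
    Real drive sanitization works below the filesystem, so treat this
    purely as a teaching sketch.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # random data, one full pass
            f.flush()
            os.fsync(f.fileno())       # push the pass to stable storage
```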

secured memory Part of a partition designated as security sensitive.
Security Assertion Markup Language (SAML) A security attestation model built on XML and SOAP-based services that allows for the exchange of authentication and authorization data between systems and supports federated identity management.
Security Content Automation Protocol (SCAP) A standard that the security automation community uses to enumerate software flaws and configuration issues.
security engineering The process of architecting security features into the design of a system or set of systems.
security information and event management (SIEM) A type of system that provides an automated solution for analyzing security events and data and deciding where attention needs to be given.
security regression testing A subset of regression testing that validates that changes have not reduced the security of the application or opened new weaknesses.
segmentation Limiting the scope of an incident by leveraging existing segments of the network as barriers to prevent the spread to other segments.
self-encrypting drives Drives that automatically encrypt their contents without user intervention.
Sender Policy Framework (SPF) An email validation system that uses DNS to determine whether an email was sent by a host sanctioned by that domain's administrator.
sensitive personal information (SPI) Information that does not identify an individual but is related to an individual and communicates information that is private or could potentially harm an individual should it be made public.
sensitivity A measure of how freely data can be handled.
service-level agreement (SLA) A document that specifies a service to be provided by a party, the costs of the service, and the expectations of performance.
Service Provisioning Markup Language (SPML) An XML-based framework, developed by the Organization for the Advancement of Structured Information Standards (OASIS), for exchanging user, resource, and service provisioning information between systems.
service-oriented architecture (SOA) An architecture that provides web-based communication functionality without requiring redundant code to be written for each application.
session hijacking An attack that attempts to place the hacker in the middle of an active conversation between two computers for the purpose of taking over the session of one of the two systems, thus receiving all data sent to that system.
Shibboleth An open source project that provides single sign-on (SSO) capabilities and allows sites to make informed authorization decisions for individual access of protected online resources in a privacy-preserving manner.
Simple Certificate Enrollment Protocol (SCEP) A protocol for provisioning certificates to network devices, including mobile devices.
Simple Object Access Protocol (SOAP) A protocol specification for exchanging structured information in the implementation of web services in computer networks.
single loss expectancy (SLE) The monetary impact of each threat occurrence. Calculated as the asset value (AV) times the exposure factor (EF).
single sign-on (SSO) An environment in which a user enters login credentials once and can access all resources in the network.
sinkhole A router designed to accept and analyze attack traffic; it can be used to draw traffic away from a target, to monitor worm traffic, or to monitor other malicious traffic.
SOC 1, Type 1 report A Service Organization Control report that focuses on the auditors' opinion of the accuracy and completeness of the data center management's design of controls.
SOC 1, Type 2 report A Service Organization Control report that includes the Type 1 content plus an audit of the effectiveness of controls.
Software as a Service (SaaS) A cloud service model in which the vendor provides the entire solution, including the operating system, the infrastructure software, and the application.
software defined networking (SDN) The decoupling of the control plane and data plane in networking by locating the logic of routers and switches in a central controller and leaving simple data forwarding in the physical devices.
software development life cycle (SDLC) A predictable framework of procedures designed to identify all requirements with regard to functionality, cost, reliability, and delivery schedule and to ensure that each is met in the final solution.
spyware/adware Software that tracks your Internet usage in an attempt to tailor ads and junk email to your interests.
SSL/TLS A Secure Sockets Layer/Transport Layer Security encryption option for creating VPNs. It works at the application layer of the OSI model and is used mainly to protect HTTP traffic and web servers.
standard word password A type of password that consists of single words and often includes a mixture of upper- and lowercase letters.
static analysis Software analysis that is conducted without the software running.
static code analysis Code analysis that is conducted without the code executing.
static password A type of password that provides a minimum level of security because the password never changes.
Sticky MAC A feature that allows a switch to learn the MAC addresses of the devices currently connected to a port and convert them to secure MAC addresses (the only MAC addresses allowed to send on the port).
strcpy A C standard library function (also used in C++) that copies the string pointed to by the source into the array pointed to by the destination, including the terminating null character (and stopping at that point). The function has a reputation for issues: if the destination is not long enough to contain the string, an overrun occurs.
stream-based cipher A type of cipher that performs encryption on a bit-by-bit basis and uses keystream generators.
stress testing A type of testing that determines the workload that an application can withstand.
string search A search technique used to look within a log file or data stream and locate any instances of a given string.
Structured Query Language (SQL) injection An attack that inserts, or "injects," a SQL query as the input data from the client to the application.
Structured Threat Information eXpression (STIX) An XML-based programming language that can be used to communicate cybersecurity data among those using the language.
supervisory control and data acquisition (SCADA) A system operating with coded signals over communication channels so as to provide control of remote equipment.
supplicant In 802.1X, the user or device requesting access to the network.
symmetric algorithm A type of algorithm that uses a single private or secret key that must remain secret between the two parties. Each pair of communicating parties requires its own shared key.
SYN flood An attack in which the attacker floods the target with SYN packets and never completes the handshakes, overwhelming the target with half-open connections whose SYN/ACK replies go unanswered.
synthetic transaction monitoring A type of proactive monitoring that uses external agents to run scripted transactions against an application.
Sysinternals A suite of more than 70 Windows tools that can be used for both troubleshooting and security purposes.
Syslog A protocol that can be used to collect logs from devices and store them in a central location called a Syslog server.
system assessment A process whereby systems are fully vetted for potential issues from both a functionality and a security standpoint.
system hardening A process that ensures that all systems have been hardened to the extent possible while still providing functionality.
system isolation Isolating systems through the control of communications with the device.
System-on-Chip (SoC) An integrated circuit (also known as a "chip") that integrates all components of a computer or other electronic system.
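The SQL injection entry above can be demonstrated, together with its standard defense (a parameterized query), using Python's built-in sqlite3 module; the table and rows are hypothetical:

```python
import sqlite3

# Hypothetical users table for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "analyst")])

# Attacker-supplied input attempting a classic injection.
user_input = "x' OR '1'='1"

# Vulnerable pattern: building the query by string concatenation lets
# the input rewrite the query itself, so every row comes back.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe pattern: a parameterized query treats the input as pure data,
# so no row matches the literal name "x' OR '1'='1".
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
```

The concatenated query returns both rows because the injected `OR '1'='1'` clause is always true; the parameterized query returns none.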

T

tabletop exercise An informal brainstorming session that encourages participation from business leaders and other key employees.

tcpdump A command-line tool that can capture packets on Linux and Unix platforms.
technical control A type of control, usually a software or hardware component, that is used to restrict access.
telemetry system An industrial control system (ICS) component that connects RTUs and PLCs to control centers and the enterprise.
The Open Group Architecture Framework (TOGAF) An enterprise architecture framework that helps organizations design, plan, implement, and govern an enterprise information architecture.
threat actor An attacker who takes advantage of a security loophole.
threat feed A constantly updating stream of indicators or artifacts derived from a source outside the organization.
threat intelligence The process of gathering threat information.
threat model A conceptual design that attempts to provide a framework on which to implement security efforts.
threat modeling methodology A formal process that enables organizations to identify threats and potential attacks and implement the appropriate mitigations against these threats and attacks.
timeliness A description of how recent the data is.
time-of-check/time-of-use An attack that exploits the gap between the moment a system checks a condition (such as access rights) and the moment it uses the result of that check, a race condition that allows the attacker to change the resource in between.
tokenization A form of data hiding or masking that replaces a value with a token that is used instead of the actual value.
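Tokenization as defined above can be sketched with a simple in-memory token vault; a real tokenization system would protect and persist the vault far more carefully, and the vault structure here is purely illustrative:

```python
import secrets

# Minimal token "vault" mapping tokens back to original values.
# In a real system this mapping lives in a hardened, access-controlled store.
_vault = {}

def tokenize(value):
    """Replace a sensitive value with a random token and record the mapping."""
    token = secrets.token_hex(8)  # 16 hex characters of randomness
    _vault[token] = value
    return token

def detokenize(token):
    """Recover the original value from the vault."""
    return _vault[token]
```

Downstream systems handle only the meaningless token; only the vault can map it back to the real value.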

total attack surface Comprises all of the points at which vulnerabilities exist. It is critical that the organization have a clear understanding of the total attack surface.
trade secret Intellectual property protection that ensures that proprietary technical or business information remains confidential. A trade secret gives an organization a competitive edge. Trade secrets include recipes, formulas, ingredient listings, and so on.
trademark Intellectual property protection that ensures that a symbol, a sound, or an expression that identifies a product or an organization is protected from being used by another organization.
traditional botnet A type of botnet in which all the zombies communicate directly with the command and control server, which is located outside the network.
trend analysis Analysis that focuses on the long-term direction in the increase or decrease of a particular type of traffic or of a particular behavior in the network.
Trojan horse A program or rogue application that appears to or is purported to do one thing but actually does another when executed.
true negative Occurs when a scanner correctly determines that a vulnerability does not exist.
true positive Occurs when a scanner correctly identifies a vulnerability.
Trusted Automated eXchange of Indicator Information (TAXII) An application protocol for exchanging cyber threat information (CTI) over HTTPS.
trusted execution A collection of features that are used to verify the integrity of the system and implement security policies, which together can be used to enhance the trust level of the complete system.
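The true positive and true negative entries above, together with their false counterparts, amount to a small confusion-matrix classification of scanner findings; this helper is illustrative, not from the book:

```python
def classify_finding(scanner_reported, vulnerability_exists):
    """Classify a scanner result against ground truth (confusion-matrix terms)."""
    if scanner_reported and vulnerability_exists:
        return "true positive"    # correctly identified a vulnerability
    if scanner_reported and not vulnerability_exists:
        return "false positive"   # flagged something that isn't there
    if not scanner_reported and vulnerability_exists:
        return "false negative"   # missed a real vulnerability
    return "true negative"        # correctly reported nothing
```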

Trusted Foundry program A program that can help you exercise care in ensuring the authenticity and integrity of the components of hardware purchased from a vendor.
Trusted Platform Module (TPM) A security chip installed on a computer's motherboard that is responsible for protecting symmetric and asymmetric keys, hashes, and digital certificates.
Type 1 hypervisor Virtualization software that is installed directly on hardware, which is why it is commonly called a bare metal hypervisor. A guest operating system runs on a level above the hypervisor. Examples include Citrix XenServer, Microsoft Hyper-V, and VMware vSphere.
Type 2 hypervisor A hypervisor installed over an existing operating system. Examples include VMware Workstation and Oracle VM VirtualBox.

U

U.S. Digital Millennium Copyright Act Imposes criminal penalties on those who make available technologies whose primary purpose is to circumvent content protection technologies.
uncredentialed scan A scan in which the scanner lacks administrative privileges on the device it is scanning.
Unified Extensible Firmware Interface (UEFI) An open standard interface layer between the firmware and the operating system that requires firmware updates to be digitally signed.
United States Federal Sentencing Guidelines of 1991 Provides guidelines to prevent sentencing disparities that existed across the United States.
unknown threats Threats of which we are not aware.
USA PATRIOT Act Affects law enforcement and intelligence agencies in the United States. Its purpose is to enhance the investigatory tools that law enforcement can use, including email communications, telephone records, Internet communications, medical records, and financial records.
USB On-The-Go (USB OTG) A specification first used in late 2001 that allows USB devices, such as tablets or smartphones, to act as either a USB host or a USB device.
user acceptance testing Testing designed to ensure that security features do not make an application unusable from the user perspective.
user and entity behavior analytics (UEBA) A type of cybersecurity analysis that focuses on normal user activities and detects anomalous behavior when there are deviations from the norm.
usermode debugger A debugger that has access to only the usermode space of the operating system.
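The UEBA entry above describes baselining normal activity and flagging deviations. A minimal sketch of that idea using a z-score over daily login counts; the sample data and the 3-sigma threshold are invented for illustration:

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's activity count if it deviates from the user's baseline
    by more than `threshold` standard deviations (a simple z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today != mu          # a flat baseline: any change is a deviation
    return abs(today - mu) / sigma > threshold

# A user who normally logs in 4-6 times a day...
baseline = [5, 4, 6, 5, 5, 4, 6]
# ...suddenly logging in 40 times would be flagged as anomalous.
```

Production UEBA products model many more signals (hosts touched, data volumes, time of day), but they rest on the same baseline-and-deviation logic.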

V

Victim The corner of the Diamond Model that describes a single victim or multiple victims.
virtual desktop infrastructure (VDI) An infrastructure that hosts desktop operating systems within a virtual environment in a centralized server.
virtual local-area network (VLAN) A logical subdivision of a switch that segregates ports from one another as if they were in different LANs.
virtual private cloud (VPC) A cloud model in which a public cloud provider isolates a specific portion of its public cloud infrastructure to be provisioned for private use.
virtual private network (VPN) A connection that allows external devices to access an internal network by creating an encrypted tunnel over the Internet.

virtual SAN A software-defined storage method that allows pooling of storage capabilities and instant, automatic provisioning of virtual machine storage.
virtual TPM (vTPM) A software object that performs the functions of a TPM chip.
virus A self-replicating program that infects software.
VM escape An attack that occurs when a guest OS escapes from its VM encapsulation to interact directly with the hypervisor.
vulnerability feed An RSS feed dedicated to the sharing of information about the latest vulnerabilities.
vulnerability management The process of identification and mitigation of vulnerabilities.
vulnerability scan A type of scan that locates vulnerabilities in systems.

W

web application firewall (WAF) A firewall that applies rule sets to an HTTP conversation. These rule sets cover common attack types to which such sessions are susceptible, including cross-site scripting and SQL injection.
web vulnerability scanner A type of scanner used to assess the security of web applications.
white team A group of technicians that referees the encounter between the red team and the blue team during testing.
whitelisting The process of identifying and allowing as good senders a list of acceptable e-mail addresses, Internet addresses, websites, applications, or some other identifier.
wireless intrusion prevention system (WIPS) A system that not only can alert you when any unknown device (AP or station) is in the area but also can take a number of actions.

wireless key logger A key logger that collects information and transmits it to the criminal via Bluetooth or Wi-Fi.
Wireshark One of the most widely used network packet sniffers.
work product retention Work done for and owned by the organization.
work recovery time (WRT) The difference between the maximum tolerable downtime (MTD) and the recovery time objective (RTO); that is, the time remaining after the RTO before the MTD is reached.
workflow orchestration Sequencing of events based on certain parameters by using scripting and scripting tools.
worm A type of malware that can spread without the assistance of the user.
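The work recovery time relationship is simple arithmetic, WRT = MTD - RTO, and can be written directly; the hour values in the example are made up for illustration:

```python
def work_recovery_time(mtd_hours, rto_hours):
    """WRT is the time remaining after the RTO before the MTD is reached."""
    if rto_hours > mtd_hours:
        raise ValueError("RTO cannot exceed the maximum tolerable downtime")
    return mtd_hours - rto_hours

# e.g. an MTD of 24 hours with an RTO of 4 hours leaves 20 hours of WRT
# to restore data, verify systems, and resume normal business processing.
```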

X–Z

XMAS scan A type of scan that sets the FIN, PSH, and URG flags.
ZAP An interception proxy produced by the Open Web Application Security Project (OWASP).
zero-day threat A threat that has no known solution yet.
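An XMAS scan sets the FIN, PSH, and URG bits of the TCP flags field. Using the standard TCP flag bit values, the combined flag byte such a probe carries can be computed:

```python
# Standard TCP flag bit values (low byte of the TCP flags field).
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

# An XMAS scan "lights up" FIN, PSH, and URG simultaneously.
XMAS_FLAGS = FIN | PSH | URG   # 0x29

def is_xmas_probe(tcp_flags):
    """Return True if a packet's TCP flags match the XMAS pattern exactly."""
    return tcp_flags == (FIN | PSH | URG)
```

This flag combination never occurs in a legitimate handshake, which is why an IDS can flag it outright.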

Index NUMBERS 3DES, 235 802.1X, 389–391, 653

A A (Availability) metric, 27, 656 AACS (Advanced Access Content System), 520, 653 ABAC (attribute-based access control), 143, 225–227, 655 AC (Attack Complexity) metric, 26, 481 acceptable use policy (AUP), 563–564, 572, 653 acceptance of risk, 47, 538, 677 access control lists (ACLs), 12, 47, 182, 458, 510 access control provisioning life cycle, 569 access management. See identity and access management access points, rogue, 336, 475, 678 accounts introduction of, 334, 480 maintenance of, 260 management policy for, 568–569 privileged, 211 accreditation, 270, 653, 681, 683, 685 accuracy, 282, 653 ACK flag, 76

ACLs (access control lists), 12, 47, 182, 458, 510 action factor authentication, 212 Active Cyber Defense Cycle, 246–247 active defense, 246–247, 653 Active Directory (AD), 217–218, 653 active enumeration, 82, 653 active reader/active tag (ARAT), 180 active reader/passive tag (ARPT), 180 active scans, 43–44 active vulnerability scanners (AVSs), 43, 653 ActiveX, 323, 337 AD (Active Directory), 217–218, 653 Adaptive Wireless IPS, 475 addresses, MAC (media access control), 155 limiting, 394 sticky MAC, 394, 682 AddressSanitizer, 332, 493 ADEPT (Adobe Digital Experience Protection Technology), 521 administrative controls, 508, 570. See also individual controls Adobe Digital Experience Protection Technology (ADEPT), 521 Advanced Access Content System (AACS), 520, 653 advanced persistent threats (APTs), 11, 653 adversary capability, 29–30 adware, 325, 681 AEG (automatic exploit generation), 427 AES 128/256-bit encryption, 99, 235 AGCC (Aviation Government Coordinating Council), 15 agent-based scans, 52 agent-based SIEM collection, 362 agentless SIEM collection, 362 aggregation, 340, 654

AH (Authentication Header), 197, 655 AI (artificial intelligence), 426–427 AIK (attestation identity key), 300 air gap, 185, 654 Aircrack-ng, 83, 654 AirDefense, 475 AirMagnet Enterprise, 475 airodump-ng command, 83 AirTight WIPS, 475 Akana, 392 ALE (annual loss expectancy), 535, 654 algorithms asymmetric, 236, 655 DGA (domain generation algorithm), 343, 662 Diffie-Hellman, 198, 236 DSA (Digital Security Algorithm), 246 MD (Message Digest), 239–240 SHA (Secure Hash Algorithm), 240, 499 symmetric, 233–236, 682 block ciphers, 235–236, 656 stream-based ciphers, 234–235, 682 Alibaba Cloud, 87 AlienVault, 365 Amazon Kindle, 521 Amazon Payments, 102 Amazon Web Services (AWS), 87 Android Device Manager, 257 fragmentation, 101 Lost Android app, 257

annual loss expectancy (ALE), 535, 654 annualized rate of occurrence (ARO), 535, 654 Anomali ThreatStream, 426 anomalous behavior/anomaly analysis, 24–25, 334–335, 480 anomaly-based IDSs, 57 anti-malware, 322, 328 anti-tamper technology, 308, 654 Apache Log Viewer, 394 APIs (application programming interfaces) in cloud environments, 131–132 integration of, 424, 654 Apktool, 328 Apple Apple Pay, 101 Configurator, 98 Find My iPhone, 257 Application log, 481, 654 application programming interfaces. See APIs (application programming interfaces) application-based IDSs, 58 application-level proxies, 60, 385 application-related IOCs (indicators of compromise), 480–481 anomalous activity, 480 Application log, 481, 654 introduction of new accounts, 480 service interruption, 481 unexpected outbound communication, 481 unexpected output, 480 applications. See also software assurance behavior, 333–339

anomalous behavior, 334–335 known-good behavior, 333–334 logs, 481 streaming, 208 system, 98 unsigned, 98 vetting process, 258–259 wrapping, 257, 654 APs (access points), 336, 475 APTs (advanced persistent threats), 11, 653 Arachni, 70–496, 654 architecture. See network architecture ArcSight, 364 ARMIS security firm, 105 ARO (annualized rate of occurrence), 535, 654 ARP spoofing, 154 ARPT (active reader/passive tag), 180 artificial intelligence (AI), 426–427 assessments, 683. See also scans/sweeps compliance, 575 definition of, 573 regulatory, 573–574 risk, 532–534 definition of, 677 goals of, 532–533 metrics, 533 qualitative risk analysis, 534, 676 quantitative risk analysis, 534, 676 risk assessment matrix, 537–538

software, 72–76, 272–275 code review, 273–274, 275 dynamic analysis, 74, 286 fuzzing, 75–76 reverse engineering, 75 SDLC (software development life cycle), 72–76 security regression, 273 security testing, 274–275 static analysis, 73–74, 286 stress testing, 272–273 user acceptance testing, 272 asset management, 178–180 asset tagging, 178, 654 critical assets, 42–43, 411–412, 456, 531, 654 data classification policy, 411 sensitivity and criticality, 411 device-tracking technologies, 178–179 high value assets, 441 object-tracking and object-containment technologies, 179– 180 asset value (AV), 534, 654 asymmetric algorithms, 236, 655 AT&T Cybersecurity, 365 atomic execution, 260, 307, 655 Attack Complexity (AC) metric, 26, 653 attack frameworks definition of, 21, 655 Diamond Model of Intrusion Analysis, 22–23, 661 kill chain, 23, 669

MITRE ATT&CK, 21–22, 670 attack surface area, reduction of, 409–410 configuration lockdown, 410, 659 system hardening, 410 attack vector (AV), 26, 31–32, 412–413, 653, 655 attacks. See also threat classification; threat intelligence backdoors/trapdoors, 338, 656 buffer overflow, 337 credential stuffing, 152–154, 660 DDoS (distributed denial-of-service) attacks, 337, 472 directory traversal, 151–152, 661 DoS (denial-of-service), 183, 337, 472, 661 dumpster diving, 336 emanations, 337 file system file integrity monitoring, 340–341 terminology for, 339–340 identity theft, 336 impersonation, 154, 666 malware. See malware man-in-the-middle, 154–155, 205, 669 mobile code, 337 overflow, 147–150 buffer, 147–149, 656 definition of, 672 heap, 150, 665 integer, 149–150, 667 password spraying, 152, 673

phishing/pharming, 335, 369–370, 674 privilege escalation, 152 remote code execution, 150, 677 rogue access points, 336, 678 rogue endpoints, 336 rootkit, 159–160, 678 servers, 337–338 services, 338–339 session hijacking, 158, 681 shoulder surfing, 336 social engineering, 335–336 SQL injection, 145–146, 682 SYN flood, 490 time-of-check/time-of-use, 260, 684 virtualization, 203–206 VLAN-based, 156–158 XML (Extensible Markup Language), 143–144, 663 XSS (cross-site scripting), 160–162 definition of, 660 DOM (document object model), 162, 662 example of, 160–161 persistent, 161, 673 reflective, 161, 677 attestation AIK (attestation identity key), 300 definition of, 655 measured boot and, 310–311, 670 attribute-based access control (ABAC), 143, 225–227, 655

audits audit reduction tools, 231 compliance, 575 definition of, 573 regulatory, 573–574 AUP (acceptable use policy), 563–564, 572, 653 authentication, 277–285 authentication period, 566, 655 biometric considerations, 282–284 certificate-based, 284–285 context-based, 277–279 IEEE 802.1X, 281–282 MFA (multifactor authentication), 211–214 authentication factors, 212 characteristic factors, 212, 214, 657 definition of, 670 identification versus authentication, 211–212 knowledge factors, 212, 213, 669 ownership factors, 212, 213, 672 network authentication protocols, 279–280 vulnerabilities in, 164 Authentication Header (AH), 197, 655 authentication servers, 281, 655 802.1X, 389 RADIUS (Remote Authentication Dial-in User Service), 389– 391 TACACS+ (Terminal Access Controller Access Control System Plus), 389–391 authenticators, 281, 389, 655

authenticity, hardware, 544 authorization, 233 automated malware signature creation, 424, 655 automated static analysis engine, 328 automatic exploit generation (AEG), 427 automation, 104. See also IoT (Internet of Things) AI (artificial intelligence), 426–427 API integration, 424, 654 automated malware signature creation, 424, 655 data enrichment, 425, 660 machine learning, 426–427, 669 scripting, 423 standards and protocols continuous deployment/delivery, 428, 659 continuous integration, 428, 659 SCAP (Security Content Automation Protocol), 44, 49, 426–427 threat feed, 426, 683 workflow orchestration, 422–423, 687 automation systems building, 109 threats to, 113 AV (asset value), 534, 654 AV (attack vector), 26, 31–32, 412–413, 653, 655 availability, 27, 510, 656 Aviation Government Coordinating Council (AGCC), 15 aviation sector, data sharing in, 15 avoidance of risk, 47, 538, 678 AVSs (active vulnerability scanners), 43, 653 AWS (Amazon Web Services), 87

AWStats, 394 AXELOS, 561 Azure, 87

B backdoors, 338, 656 BACnet (Building Automation and Control Networks), 111, 117, 656 bandwidth consumption, 472 BandwidthD, 472 Barnes and Nobles Nook, 521 BAS (building automation systems), 109 Base metric group (CVSS), 26–27 Basel II, 513, 656 baselines, 45–46, 333, 659 bash, 423, 656 bastion hosts, 61, 188–189, 656 BCP (Business Continuity Planning) committees, 531, 657 bcrypt, 134 beaconing, 473, 656 behavior. See system behavior behavioral analysis, 24–25 benchmarks, 333 BIA (business impact analysis), 530–532 critical processes and resources, 531 definition of, 657 outage impact and downtime, 531 recovery priorities, 531–532 resource requirements, 531 big data, 135–136, 656 binary files, changes to, 500

Binary Guard True Bare Metal, 393 binding, 299 biometric technologies, 282–284 BIOS, flashing, 309 BitBlaze Malware Analysis Service, 393 BitLocker/BitLocker to Go, 300 BitMeter OS, 472 black hats, 406 black-box testing, 274–275 blacklisting, 275, 381, 656 blind signatures, 245 block ciphers, 235–236, 656 Blowfish, 235 blue teams, 542, 656 Bluetooth hacking gear, 475 boot sector viruses, 324 booting, secure, 265, 303, 310–311 botnets, 325, 473–474, 656 bridging, domain, 103–104, 662 bring your own device (BYOD) policies, 97–98, 656 British Standard 7799 (BS7799), 556 broken authentication, 164 buffer overflow, 147–149, 337, 656 Building Automation and Control Networks (BACnet), 111, 117, 656 building automation systems (BAS), 109 Burp Suite, 69, 656 buses CAN (Controller Area Network), 112, 659 encryption, 311, 656 business classifications, 412

Business Continuity Planning (BCP) committees, 531, 657 business impact analysis. See BIA (business impact analysis) business process interruption, 62, 539 BYOD (bring your own device) policies, 97–98, 656

C C (Confidentiality) metric, 27, 659 CA (certificate authority), 243, 258, 285, 371, 657 /CACHESIZE=X switch (SFC), 341 Cain and Abel, 491, 657 calculation of risk, 534–535 calculators, CVSS (Common Vulnerability Scoring System), 29 CALEA (Communications Assistance for Law Enforcement Act), 512, 658 call lists, 454, 657 CAM (content-addressable memory), 155 CAN (Controller Area Network) bus, 112, 659 CAP (Cyber Intelligence Analytics Platform) v2.0, 6 Capability Maturity Model Integration (CMMI), 561, 657 CAPTCHA passwords, 154, 565 Carbon Black CB Response, 387 cars, smart, 104. See also IoT (Internet of Things) carving, 500, 657 CASB (cloud access security broker), 229, 657 cat command, 367 categories definition of, 570 managerial, 570 operational, 571 technical, 571 cause-and-effect rules, 363

CCE (Common Configuration Enumeration), 427, 658 CCTV (closed-circuit television), 107–108 Cellebrite, 494, 657 Center for Internet Security (CIS), 413 central security breach response, 265–266 centralized VDI model, 207 CER (crossover error rate), 283 certificate authority (CA), 657 certificate management, 242–246 CA (certificate authority), 243, 258, 285, 371, 657 certificate-based authentication, 284–285 CRLs (certificate revocation lists), 244, 657 cross-certification, 245 digital signatures, 245–246, 661 OSCP (Online Certificate Status Protocol), 244, 672 PKI (public key infrastructure), 198, 245, 284–285 RA (registration authority), 243, 677 Verisign, 244 X.509 certificates, 243–244 certification, system/software, 270, 539, 657 certification exam preparation. See exam preparation process CFAA (Computer Fraud and Abuse Act), 511, 658 chain of custody, 498 Challenge Handshake Authentication Protocol (CHAP), 279– 281 change management, 201–208, 464, 657 Channel services, 8–9 CHAP (Challenge Handshake Authentication Protocol), 279– 281 characteristic factor authentication, 214, 657

checksums, 237 CIA (confidentiality, integrity, and availability), 42, 411, 510 ciphers block, 235–236, 656 stream-based, 234–235, 682 circuit-level proxies, 60, 385 CIS (Center for Internet Security), 413 CISA (Cybersecurity and Infrastructure Security Agency), 15 Cisco Adaptive Wireless IPS, 475 Cisco Check Point, 353–355 Cisco Meraki, 98 Cisco Systems Manager, 98 Cisco Talos IP, 24 Citrix, 203, 311 classifications, threat. See threat classification clearing data, 461, 657 click-jacking, 262, 657 client-based application virtualization, 208 client/server platforms, 263 closed-circuit television (CCTV), 107–108 closed-source intelligence, 6 cloud access security broker (CASB), 229, 657 cloud computing API security, 131–132 big data, 135–136, 656 cloud-based scanning, 495–496 community cloud, 126, 658 deployment models, 126 FaaS (Function as a Service), 128–129, 665 hybrid cloud, 126, 666

IaC (Infrastructure as Code), 130 key management, 132–134 key escrow, 133 key stretching, 134 principles of, 132–133 logging and monitoring, 136 mitigations, 177–178 on-premises versus, 177 private cloud, 126, 675 public cloud, 126, 675 service models, 127–128 storage threats, 134–135 VPC (virtual private cloud), 195, 686 cloud infrastructure assessment tools, 86–88 Pacu, 87–88, 673 Prowler, 87, 675 ScoutSuite, 87, 679 CMaaS (Continuous Monitoring as a Service), 414 CMI (copyright management information), 444 CMMI (Capability Maturity Model Integration), 561, 657 COBIT (Control Objectives for Information and Related Technologies), 553, 657 code of conduct/ethics, 563, 658 code reuse, 166 code review, 273–274, 275, 286–287 coding, secure, 275–285 authentication, 277–285 authentication period, 566, 655 biometric considerations, 282–284

certificate-based, 284–285 context-based, 277–279 definition of, 233, 655 IEEE 802.1X, 281–282 MFA (multifactor authentication), 211–214 network authentication protocols, 279–280 vulnerabilities in, 164 data protection, 285 input validation, 275–276, 382 output encoding, 276, 672 parameterized queries, 285, 673 session management, 276–277 cognitive passwords, 565, 658 collection, 8, 13 combination passwords, 564 Combine threat feed, 426 commands aircrack-ng, 83 airodump-ng, 83 cat, 367 dcfldd, 492–493 dd, 492–493, 660 grep, 366 hping, 80–82 hping3, 80–82 less, 367 nmap, 76–79 port security mac-address, 394

reaver, 84–85 SFC, 340–341 strcpy, 168, 682 switchport mode access, 157 switchport mode trunk, 157 switchport port security, 394 wash, 85–86 commercial business classifications, 411 commodity malware, 14, 658 Common Configuration Enumeration (CCE), 427, 658 Common Platform Enumeration (CPE), 427, 658 Common Vulnerabilities and Exposures (CVE), 165, 427, 658 Common Vulnerability Scoring System (CVSS), 44, 412 Common Weakness Enumeration (CWE), 44, 427, 658 communication plans, 435–436, 536–537. See also response coordination Communications Assistance for Law Enforcement Act (CALEA), 512, 658 community cloud, 126, 658 Comodo Automated Analysis System and Valkyrie, 393 companion viruses, 324 compartmented security mode (MAC), 228 compensating controls, 47, 658 complex passwords, 564, 658 compliance audits/assessments, 575 components, vulnerabilities in, 165–166 compromise, indicators of. See IOCs (indicators of compromise) Computer Fraud and Abuse Act (CFAA), 511, 658 Computer Security Act, 512, 658 concentrators, VPN, 196 conditional access, 257

conduct, code of, 563, 658 confidence levels, 7, 659 confidentiality, 27, 42, 233, 411, 412, 510, 659 configurations, 377 802.1X, 389–391, 653 baselines, 45–46, 659 blacklisting, 381 development/rule writing, 392 DLP (data loss prevention), 386–387, 660 EDR (endpoint detection and response), 387, 663 firewalls, 59–62, 383 architecture of, 61–62 comparison of, 385 definition of, 383, 664 host-based, 384–385, 666 NGFWs (next-generation firewalls), 383–384, 671 types of, 59–61, 383–385 input validation, 382 IPS rules, 386 lockdown of, 410, 659 malware signatures, 391–392 NAC (network access control), 387–389, 671 permissions, 381, 673 port security, 394, 674 enabling, 394 MAC addresses, limiting, 394 sticky MAC, 394, 682 profiles and payloads for, 256

sandboxing, 392–394 sinkholing, 391, 681 vulnerabilities in, 167–168 whitelisting, 381 containerization, 208–209, 256, 659 containment, 458–459 isolation, 459, 668, 683 segmentation, 458–459 contamination, 340, 659 Content Scrambling System (CSS), 520, 659 content-addressable memory (CAM), 155 context-based authentication, 277–279 continuous deployment/delivery, 428, 659 continuous improvement, 413–414 continuous integration, 428, 659 continuous monitoring, 413–414, 569–570 Continuous Monitoring as a Service (CMaaS), 413–414 control categories, 570, 571. See also specific controls administrative, 508 corrective, 572, 659 detective, 572, 661 deterrent, 572, 661 directive, 572, 661 managerial, 570, 669 operational, 571, 672 physical, 572, 674 preventative, 572, 674 responsive, 677 technical, 571, 683

control configuration, 377 802.1X, 389–391, 653 blacklisting, 381 development/rule writing, 392 DLP (data loss prevention), 386–387, 660 EDR (endpoint detection and response), 387, 663 firewalls, 59–62, 383 architecture of, 61–62 comparison of, 385 definition of, 383, 664 host-based, 384–385, 666 NGFWs (next-generation firewalls), 383–384, 671 types of, 59–61, 383–385 input validation, 382 IPS rules, 386 malware signatures, 391–392 NAC (network access control), 387–389, 671 permissions, 381, 673 port security, 394, 674 enabling, 394 MAC addresses, limiting, 394 sticky MAC, 394, 682 sandboxing, 392–394 sinkholing, 391, 681 whitelisting, 381 control flow graphs, 73 Control Objectives for Information and Related Technologies (COBIT), 553, 657

control plane, 193, 659 controlled security mode (MAC), 229 Controller Area Network (CAN) bus, 112 COPE (corporate-owned, personally enabled) policy, 256, 659 copyright management information (CMI), 444 copyrights, 444, 659 core dump, 493–494 corporate information, 444–445 corporate-owned, personally enabled (COPE) policy, 256, 659 corrective controls, 572, 659 correlation, 458, 660 CPE (Common Platform Enumeration), 427, 658 crackers, 405, 660 CRCs (cyclic redundancy checks), 237 CREATE TABLE statement, 145 credential stuffing, 152–154, 660 credentialed scans, 51, 660 credit card readers, 102 critical infrastructure sector, data sharing in, 15 criticality, 411, 439–445 analysis of, 457 corporate information, 444–445 critical assets, 411–412, 456, 531 commercial business classifications, 411 data classification policy, 411 distribution of critical assets, 412 military and government classifications, 412 sensitivity and criticality, 411 definition of, 660 financial information, 441–442

high value assets, 441 intellectual property, 442–444 copyright, 444, 659 definition of, 667 patents, 442–443, 673 security for, 444 trade secrets, 443, 684 trademarks, 443, 684 PHI (protected health information), 55, 436, 440–441, 674 PII (personally identifiable information), 55, 436, 439–440, 674 SPI (sensitive personal information), 441, 680 CRLs (certificate revocation lists), 244, 657 cross-certification, 219, 245 crossover error rate (CER), 283 cross-site request forgery (CSRF), 261–262, 660 cross-site scripting. See XSS (cross-site scripting) cryptography. See encryption cryptoperiod, 660 CS&C (Office of Cybersecurity and Communications), 8 CSRF (cross-site request forgery), 261–262, 660 CSS (Content Scrambling System), 520, 659 CTI (cyber threat information), 8 CVE (Common Vulnerabilities and Exposures), 165, 427, 658 CVSS (Common Vulnerability Scoring System), 44, 412 calculators, 29 metric groups, 25–29 CWE (Common Weakness Enumeration), 44, 427, 658 Cyber Intelligence Analytics Platform (CAP) v2.0, 6 cyber threat information (CTI), 8

Cybereason Total Enterprise Protection, 387 Cybersecurity and Infrastructure Security Agency (CISA), 15 cyclic redundancy checks (CRCs), 237 CYFIRMA, 6

D DAI (Dynamic ARP Inspection), 154, 662 Dalvik Executable (.dex/.odex) format, 328 dashboard, SIEM, 363–365 data analysis availability, 510 data acquisition, 501 e-mail analysis, 367–372 digital signatures, 371 DKIM (DomainKeys Identified Mail), 368, 662 DMARC (Domain-based Message Authentication, Reporting, and Conformance), 369, 662 e-mail signature blocks, 372, 662 e-mail spoofing, 368 embedded links, 372, 663 forwarding, 370 impersonation, 372 malicious payloads, 368 phishing/pharming, 335, 369–370 spam, 370 SPF (Sender Policy Framework), 369, 680 endpoint, 321–341 definition of, 321 malware, 323–329

memory, 329–332 NIST SP 800–128, 322–323 system and application behavior, 333–339 UEBA (user and entity behavior analytics), 24, 341 heuristics, 320 impact analysis, 361 definition of, 361, 666 immediate versus total impact, 361 impact modeling, 32 organization versus localized impact, 361 log review, 345–360 event logs, 346–350 firewall logs, 353–355 IDSs (intrusion detection systems), 357–360 IPSs (intrusion prevention systems), 357–360 Kiwi Syslog Server, 352 proxy servers, 356–357 syslog, 350–352 WAF (web application firewall), 355–356 network, 342–345 DGA (domain generation algorithm), 343, 662 DNS (domain name system) analysis, 342–343 flow analysis, 345, 664 NetFlow analysis, 342–346 packet analysis, 342–343, 673 protocol analysis, 343, 675 URL (uniform resource locator) analysis, 342 query writing, 366–367

piping, 367, 674 scripts, 366, 679 Sigma, 366 string searches, 366, 682 reverse engineering, 75, 327–329, 457 SIEM (security information and event management) system, 48, 166, 361–365, 426, 458 agent-based collection, 362 agentless collection, 362 dashboard, 363–365 known-bad Internet Protocol, 363 rule writing, 362–363 trend analysis, 320, 684 data classification, 439, 508, 510 commercial business, 411 distribution of critical assets, 412 military and government, 412 policy, 411 security level classification, 455 sensitivity and criticality, 411 data correlation, 458, 660 data criticality. See criticality data encryption key (DEK), 308 data enrichment, 425, 660 data exfiltration, 479, 660 data exposure, 165 data flow analysis, 73 data haven, 514 data integrity, 233, 298, 456, 510

data loss prevention (DLP), 386–387, 660 data masking, 516–517, 660 data minimization, 515 data mining warehouses, 340 data ownership policy, 567 data plane, 193, 660 data privacy access controls, 521 definition of, 505–508 non-technical controls, 508–516 PIA (privacy impact assessment), 508 security versus, 505–508 technical controls, 516–521 data protection, 285 Data Protection API (DPAPI), 131, 660 Data Protection Directive (EU), 514, 663 data remnants, 204 data retention policy, 509, 567–568 data sensitivity, 411, 439 data sovereignty, 514–515, 660 data storage nonremovable storage, 99 removable storage, 99 uncontrolled storage, 99 vulnerabilities with, 99–100 data types, 53, 509–510 dcfldd command, 492–493 dd command, 492–493, 660 DDoS (distributed denial-of-service) attacks, 337, 472 debugging, 332, 457, 493–494, 660, 678

decompiling, 457, 661 decomposition, 328, 661 dedicated security mode (MAC), 228 deep packet inspection, 60 Deepviz Malware Analyzer, 393 default configurations, vulnerabilities in, 167–168 degrading functionality, 62, 539 deidentification, 517, 661 DEK (data encryption key), 308 Deleaker, 332, 494 delivery, continuous, 428, 659 demilitarized zone (DMZ), 61, 181, 661 Deming’s Plan-Do-Check-Act cycle, 413–414 denial-of-service (DoS) attacks, 183, 337, 472, 661 Department of Homeland Security (DHS), 8 deployment cloud deployment models, 126 continuous, 428, 659 diagrams of, 186–192 dereferencing, 163, 661 design, software, 267–268 destruction of data, 461, 661 detection and analysis, 34, 454–458 data correlation, 458, 660 data integrity, 456 downtime and recovery time, 455–456 economic impact, 456–457 improvement of, 413–414 reverse engineering, 457 scope, 455

security level classification, 455 system process criticality, 457 detective controls, 572, 661 deterrent controls, 571, 572, 661 Detux Sandbox, 393 development/rule writing, 392 Device Manager (Android), 257 devices, mobile. See mobile devices DevOps, 270–272 DevSecOps, 270–272, 661 dex2jar, 328 DGA (domain generation algorithm), 343, 662 DHCP (Dynamic Host Configuration Protocol) snooping, 154, 661 DHS (Department of Homeland Security), 8 diagrams, network, 186–192 Diamond Model of Intrusion Analysis, 22–23, 661 Diffie-Hellman algorithm, 198, 236 digital certificates, 284–285 digital forensics carving, 500, 657 cloud-based scanning, 495–496 data acquisition, 501 endpoint, 490–493 FTK (Forensic Toolkit), 491, 664 Helix3, 491, 666 imaging utilities, 492–493 password-cracking utilities, 491–492 hashing, 499–500 legal holds, 497, 669

memory, 493–494 mobile, 494 network, 488–490 tcpdump, 490, 683 Wireshark, 488–490 procedures, 497–499 EnCase Forensic, 498 forensic investigation suites, 498–499, 664 Sysinternals, 498 virtualization, 497 Digital Millennium Copyright Act (DMCA), 517, 685 digital rights management. See DRM (digital rights management) Digital Signature Algorithm (DSA), 246, 371 Digital Signature Standard (DSS), 246, 371 digital signatures, 245–246, 371, 661 digital watermarking, 521, 661 directive controls, 571, 572, 661 directory traversal, 151–152, 661 disassemblers/disassembly, 328, 457, 661 disclosure of information, 435 discovery scans, 54 disks. See hard drives disposal, secure, 460–461 distributed denial-of-service (DDoS) attacks, 337, 472 Distributed Network Protocol 3 (DNP3), 117, 662 distribution of critical assets, 412

DKIM (DomainKeys Identified Mail), 368, 662 DLP (data loss prevention), 386–387, 516, 660 DMARC (Domain-based Message Authentication, Reporting, and Conformance), 369, 662 DMCA (Digital Millennium Copyright Act), 517, 685 DMZ (demilitarized zone), 61, 181, 661 DNP3 (Distributed Network Protocol 3), 117, 662 DNS (Domain Name System) analysis, 342–343 DNSSEC (Domain Name System Security Extensions), 302 document DRM (digital rights management), 520 document object model (DOM) XSS, 162, 662 documentation, 305, 453–454, 543 documented compensating controls, 541–542 DOM (document object model) XSS, 162, 662 domain bridging, 103–104, 662 domain generation algorithm (DGA), 343, 662 Domain Name System (DNS) analysis, 342–343 Domain Name System Security Extensions (DNSSEC), 302 Domain Reputation Center, 24 Domain-based Message Authentication, Reporting, and Conformance (DMARC), 369, 662 DomainKeys Identified Mail (DKIM), 368, 662 DoS (denial-of-service) attacks, 183, 337, 472, 661 double tagging, 157 downtime, 455–456, 531 DPAPI (Data Protection API), 131, 660 drive capacity consumption, 477 drive-by compromise, 22 DRM (digital rights management), 517–521 definition of, 661

document, 520 e-book, 521 movie, 520 music, 520 video game, 520 watermarking, 521, 661 drones, 113 Dropbox, 99 DSA (Digital Signature Algorithm), 236, 246, 371 DShield, 247 DSS (Digital Signature Standard), 246, 371 DTP (Dynamic Trunking Protocol), 156, 475 dual-homed firewalls, 61, 189–190, 662 dual-key cryptography, 236 dumping, memory, 332, 670 dumpster diving, 336 dynamic analysis, 74, 286, 662 Dynamic ARP Inspection (DAI), 154, 662 Dynamic Host Configuration Protocol (DHCP) snooping, 154, 661 dynamic packet filtering, 60 dynamic passwords, 565 Dynamic Trunking Protocol (DTP), 156, 475

E EAP (Extensible Authentication Protocol), 279–281, 389–391 Early Launch Anti-Malware driver, 310 e-book DRM (digital rights management), 521 ECC (Elliptic Curve Cryptography), 236 ECDSA (Elliptic Curve DSA), 246, 371 Economic Espionage Act, 513, 662 economic impact analysis, 456–457

ECPA (Electronic Communications Privacy Act), 512, 662 edb-debugger, 329 EDR (endpoint detection and response), 387, 663 education. See training/education EEPROM (electrically erasable PROM), 266 EF (exposure factor), 534, 663 eFuse, 303, 662 EK (endorsement key), 300 El Gamal, 236 electrically erasable programmable read-only memory (EEPROM), 309 electrically erasable PROM (EEPROM), 266 Electronic Communications Privacy Act (ECPA), 512, 662 Electronic Security Directive (EU), 514, 663 Elliptic Curve DSA (ECDSA), 246, 371 Elliptic Curve Cryptography (ECC), 236 e-mail analysis, 367–372 digital signatures, 371 DKIM (DomainKeys Identified Mail), 368, 662 DMARC (Domain-based Message Authentication, Reporting, and Conformance), 369, 662 e-mail review, 74, 274 e-mail signature blocks, 372, 662 e-mail spoofing, 368 embedded links, 372, 663 forwarding, 370, 664 impersonation, 372 malicious payloads, 368 phishing/pharming, 335, 369–370 spam, 370

SPF (Sender Policy Framework), 369, 680 emanations, 337, 663 embedded links, 372, 663 embedded systems, 105–265, 663 employee privacy issues, 513, 663 Encapsulating Security Payload (ESP), 197, 663 EnCase Forensic, 498, 663 encoding, 276, 672 encryption, 232–242, 510 AES 128/256-bit, 99 asymmetric algorithms, 236 bus, 311, 656 certificate management, 242–246 CA (certificate authority), 243, 258, 285, 371, 657 CRLs (certificate revocation lists), 244, 657 cross-certification, 245 digital signatures, 245–246, 661 OCSP (Online Certificate Status Protocol), 244, 672 PKI (public key infrastructure), 198, 245, 284–285 RA (registration authority), 243, 677 Verisign, 244 X.509 certificates, 243–244 cryptoperiod, 660 data privacy and, 516 dual-key cryptography, 236 hashing, 238–240, 665 MD (Message Digest) Algorithm, 239–240 message digests, 238

one-way, 238–239 SHA (Secure Hash Algorithm), 240 hybrid, 236–237 key management, 132–134 key escrow, 133 key stretching, 134 principles of, 132–133 security services provided by, 232–233 self-encrypting drives, 308 SHA (Secure Hash Algorithm), 499 symmetric algorithms, 233–236 block ciphers, 235–236, 656 stream-based ciphers, 234–235, 682 tools for, 499 transport, 240–242 endorsement key (EK), 300 endpoint detection and response (EDR), 663 endpoint security, 321–341 definition of, 321 digital forensics, 490–493 FTK (Forensic Toolkit), 491, 664 Helix3, 491, 666 imaging utilities, 492–493 password-cracking utilities, 491–492 DLP (data loss prevention), 386 EDR (endpoint detection and response), 387, 663 malware, 323–329 automated malware signature creation, 424, 655

botnets, 325, 473–474, 656 commodity malware, 14, 658 logic bombs, 325, 669 ransomware, 326, 676 reverse engineering, 75, 327–329, 457 rootkits, 326 signatures, 391–392 spyware/adware, 325 Trojan horses, 325, 684 viruses, 115, 323–324, 686 worms, 324, 687 memory, 329–332 dumping, 332 protection of, 329–330 runtime data integrity check, 330, 678 runtime debugging, 332, 660, 678 secured, 330 NIST SP 800–128, 322–323 rogue endpoints, 336 system and application behavior, 333–339 anomalous behavior, 334–335 exploit techniques, 335–339 known-good behavior, 333–334 UEBA (user and entity behavior analytics), 24, 341 ENISA (European Union Agency for Network and Information Security), 15 enrollment time, 282 enumeration, 44, 76–82, 427

active versus passive, 82, 653, 673 definition of, 76 host scanning, 79, 666 hping, 80–82 Nmap, 76–79, 671 Responder, 82, 677 environmental threats, 10 EPROM (erasable programmable read-only memory), 266, 309 eradication, 459–462 capability and service restoration, 462 log verification, 462 patching, 461 permissions restoration, 461 reconstruction/reimaging, 460 resource reconstitution, 462 sanitization, 460, 679 secure disposal, 460–461 erasable programmable read-only memory (EPROM), 266 error handling input validation errors, 149 vulnerabilities in, 163 escalation lists, 454, 657 escape, VM, 203 escrow, key, 133 ESP (Encapsulating Security Payload), 197, 663 /etc/passwd file, 567 /etc/shadow file, 567 ethics, code of, 563, 658 EU (European Union)

Data Protection Directive, 514, 663 Electronic Security Directive, 514, 663 ENISA (European Union Agency for Network and Information Security), 15 GDPR (General Data Protection Regulation), 425 privacy laws in, 514 event logs, 346–350 evidence retention, 463 exam preparation process, 579 exam information, 579–580 exam updates, 651–652 online testing, 580 tips and guidelines for, 580–581 tools for chapter-ending review tools, 582–583 final review/study, 583 memory tables, 582 Pearson Test Prep practice test software, 582 executable process analysis, 407–408, 663 exfiltration of data, 479, 660 exploit techniques, 335–339 file system, 339–341 rogue access points, 336, 678 rogue endpoints, 336 servers, 337–338 services, 338–339 social engineering, 335–336 exposure factor (EF), 534, 663

Extensible Access Control Markup Language (XACML), 143–144, 220, 663 Extensible Authentication Protocol (EAP), 279–281, 389–391 Extensible Markup Language (XML) attacks, 143–144, 663 external scans, 53, 663 external stakeholders, 437 external threat actors, 29–30 extranets, 181, 663

F FaaS (Function as a Service), 128–129, 200, 665 facility access control, 107–109 false acceptance rate (FAR), 283 false negatives, 45, 664 false positives, 44, 664 false rejection rate (FRR), 283 FATKit, 332, 493, 664 fault tolerance, 532, 664 FBI (Federal Bureau of Investigation), threat actor categories, 12–13 feature extraction, 282 Federal Information Security Management Act (FISMA), 513, 664 Federal Intelligence Surveillance Act (FISA), 512, 664 Federal Privacy Act, 512, 664 federation, 219–224 models for, 219–220 OpenID, 222–223, 672 SAML (Security Assertion Markup Language), 221–222, 287, 680 Shibboleth, 224, 681

SPML (Service Provisioning Markup Language), 220 XACML (Extensible Access Control Markup Language), 220 feedback, 14 feeds, vulnerability, 49 FEMA ICS (Incident Command System), 114 FPGA (field programmable gate array), 105–106 field programmable gate array (FPGA), 664 file infectors, 324 file systems changes or anomalies in, 479–480 exploit techniques for, 339–340 Hadoop Distributed File System, 136 monitoring, 340–341 file/data analysis tools, 393 FIN flag, 76 FIN scans, 78, 664 financial information, 441–442 financial sector, data sharing in, 15 Financial Services Information Sharing and Analysis Center (FS-ISAC), 15, 166 Financial Services Modernization Act, 15 Find My iPhone, 257 fingerprinting, 327 FireEye, 9, 387 firewalls, 59–62 architecture of, 61–62 comparison of, 385 definition of, 383, 664 logs, 353–355 Cisco Check Point, 353–355

WAF (web application firewall), 355–356 Windows Defender, 353 multihomed, 671 personal, 322 types of, 59–61, 383–385 application-level proxies, 60, 385 bastion hosts, 188–189 circuit-level proxies, 60, 385 dual-homed, 189–190 host-based, 384–385, 666 kernel proxy firewalls, 385 multihomed, 190–191, 671 NGFWs (next-generation firewalls), 383–384, 671 packet-filtering firewalls, 59, 385 screened host, 192, 679 WAF (web application firewall), 686 firmware, 266, 308–309 FISA (Federal Intelligence Surveillance Act), 512, 664 FISMA (Federal Information Security Management Act), 513, 664 Flash memory, 309 flashing the BIOS, 309 flow analysis, 345, 664 Fluke Networks AirMagnet Enterprise, 475 Forensic Explorer, 500 forensic investigation suites, 498–499, 664 Forensic Toolkit (FTK), 491, 664 formal code review, 73, 286–287 formal review, 273 forwarding e-mail, 370, 664

FPGA (field programmable gate array), 664 frameworks, 552–562 definition of, 665 prescriptive, 555–562 ISO 27000 Series, 556–559 ITIL, 561, 668 maturity models, 561–562, 670 NIST Cybersecurity Framework version 1.1, 555–556 SABSA, 559–560, 679 risk-based, 552–554 COBIT, 553, 657 NIST SP 800–55 Rev 1, 552–553 TOGAF (The Open Group Architecture Framework), 554 FreeMeter Bandwidth Monitor, 472 FRR (false rejection rate), 283 FS-ISAC (Financial Services Information Sharing and Analysis Center), 15, 166 FTK (Forensic Toolkit), 491, 664 Function as a Service (FaaS), 128–129, 200, 665 functions, vulnerabilities in, 168. See also commands fuzzing, 75–76, 665

G GDPR (General Data Protection Regulation), 425, 514 general-purpose computing on graphics processing units (GPGPU), 86 generation-based fuzzing, 75 geofencing, 180, 521, 665 geographic access requirements, 521 geolocation, 179

geotagging, 100–101, 179, 665 GLBA (Gramm-Leach-Bliley Act), 15, 55, 511, 665 glossary, 653–687 Google Cloud Platform, 87 Google Pay, 101, 102 governance, organizational, 62, 672 government agencies classifications in, 412 data sharing among, 15 GPG (GNU Privacy Guard), 134 GPGPU (general-purpose computing on graphics processing units), 86 GPS (Global Positioning System), 179, 521 GPT (GUID partition table), 303 Gramm-Leach-Bliley Act (GLBA), 15, 55, 511, 665 graphical passwords, 565, 665 gray hats, 406 gray-box testing, 274–275 Greenbone console, 71 grep command, 366 Group Policy, 45, 184, 381, 570 GUID partition table (GPT), 303 Guidance Software EnCase Endpoint Security, 387

H hacking, 405, 665 hacking gear, 475 hacktivists, 12 Hadoop, 136 hard drives digital forensics for, 491–492

disk space consumption, 477 self-encrypting, 308 hardening, 46–47, 410, 665, 683 hardware assurance anti-tamper technology, 308, 654 bus encryption, 311, 656 eFuse, 303, 662 RoTs (Roots of Trust), 298–299 HSM (hardware security module), 302, 665 microSD HSM (hardware security module), 302–303, 670 TPM (Trusted Platform Module), 299–300, 684 VTPM (virtual Trusted Platform Module), 300–301 secure processing atomic execution, 307 definition of, 305, 679 processor security extensions, 307, 675 secure enclave, 307, 679 TE (Trusted Execution), 305 self-encrypting drives, 308 trusted firmware updates, 308–309 attestation, 300, 310–311, 655 IMA (Integrity Measurement Architecture), 311 measured boot, 310–311, 670 measured launch, 311 Trusted Foundry program, 304–305, 544 UEFI (Unified Extensible Firmware Interface), 303–304, 685 hardware security module (HSM), 302, 665 hardware source authenticity, 544

hardware/embedded device analysis, 264–265 Hash-based Message Authentication Code (HMAC), 131 hashing, 238–240, 327, 499–500, 665 MD (Message Digest) Algorithm, 239–240 message digests, 238 one-way, 238–239 SHA (Secure Hash Algorithm), 240 Health and Human Services, Department of, 55, 511 Health Care and Education Reconciliation Act, 513, 665 Health Information Sharing and Analysis Center (H-ISAC), 15 Health Insurance Portability and Accountability Act (HIPAA), 15, 55, 436, 511, 666 healthcare sector, data sharing in, 15 heap overflow, 150, 665 heating, ventilation, and air conditioning (HVAC) systems, 111 Helix3, 491, 666 heuristics, 25, 320, 666 HHS (Health and Human Services), Department of, 55, 511 HIDS (host-based IDS), 58 high value assets, 441 HIPAA (Health Insurance Portability and Accountability Act), 15, 55, 436, 511, 666 HIPS (host-based IPS), 360 H-ISAC (Health Information Sharing and Analysis Center), 15 HMAC (Hash-based Message Authentication Code), 131 honeypots, 230, 666 horizontal privilege escalation, 152 host scanning, 79, 666 host-based firewalls, 384–385, 666 host-based IDS, 58 host-based IPS, 360

hosted VDI model, 207 hostile threat actors, 30 host-related IOCs (indicators of compromise), 477–480 abnormal OS process behavior, 479 data exfiltration, 479, 660 drive capacity consumption, 477 file system changes or anomalies, 479–480 malicious processes, 478 memory consumption, 477 processor consumption, 477 unauthorized changes, 479 unauthorized privileges, 479 unauthorized scheduled tasks, 480 unauthorized software, 477–478 HP Mobility Security IDS/IPS, 475 RFProtect, 475 hping, 80–82 hping3, 80–82 HSM (hardware security module), 302, 665 HTMLEncode, 261 HTTP (Hypertext Transfer Protocol), 241–242 HTTPS (HTTP Secure), 241–242 hub and spoke model, 9 human resources, response coordination by, 437 human threat actors, 9 Hunt Project, 158 hunt teaming, 247, 406, 666 HVAC controllers, 111 hybrid cloud, 126, 666

hybrid encryption, 236–237 Hypertext Transfer Protocol (HTTP), 241–242 Hyper-V, 203 hypervisors, 202–203 hypotheses, 404–405 HyTrust, 311

I I (Integrity) metric, 28, 667 IaaS (Infrastructure as a Service), 127, 667 IaC (Infrastructure as Code), 130, 667 ICMP (Internet Control Message Protocol) sweeps, 476 ICSs (incident command systems), 666 ICSs (industrial control systems), 107–117 IDEA, 235 identity and access management, 209–229 ABAC (attribute-based access control), 143, 225–227 access controls, 521, 569 ACLs (access control lists), 12, 47, 182, 458, 510 AD (Active Directory), 217–218, 653 federation, 219–224 models for, 219–220 OpenID, 222–223, 672 SAML (Security Assertion Markup Language), 221–222, 287, 680 Shibboleth, 224, 681 SPML (Service Provisioning Markup Language), 220 XACML (Extensible Access Control Markup Language), 220 MAC (mandatory access control), 228–229

manual review, 229 MFA (multifactor authentication), 211–214 authentication factors, 212 characteristic factors, 212, 214, 657 definition of, 670 identification versus authentication, 211–212 knowledge factors, 212, 213, 669 ownership factors, 212, 213, 672 privilege management, 211 RBAC (role-based access control), 224–225, 678 relationship identification, 210–211 resource identification, 210 rogue access points, 336, 475 SESAME, 219, 679 SSO (single sign-on), 214–217 advantages and disadvantages of, 214–215 definition of, 681 Kerberos, 215–217 user identification, 210 identity theft, 336 Identity Theft Enforcement and Restitution Act, 511 ID-FF (Liberty Identity Federation Framework), 221 IDSs (intrusion detection systems), 10, 322 definition of, 668 HIPS (host-based IPS), 360 log review, 357–360 Snort, 359 Zeek, 360

IDSs (intrusion detection systems), 57–58 IEC (International Electrotechnical Commission), 556–559 IEEE (Institute of Electrical and Electronics Engineers), 75, 281–282 IIC (Integrated Intelligence Center), 413 IKEv2 (Internet Key Exchange), 198, 667 IMA (Integrity Measurement Architecture), 310–311 imaging utilities, 393, 492–493, 498, 666 impact analysis, 361 definition of, 666 immediate versus total impact, 361 impact modeling, 32 organization versus localized impact, 361 Impact metric group (CVSS), 27–28 impersonation, 154, 372, 666 improvement, continuous, 413–414 Incident Command System (ICS), 114 incident command systems (ICSs), 666 incident forms, 454, 666 incident response process communication plans, 435–436 containment, 458–459 isolation, 459, 668, 683 segmentation, 458–459 definition of, 666 detection and analysis, 454–458 data correlation, 458, 660 data integrity, 456 downtime and recovery time, 455–456 economic impact, 456–457

reverse engineering, 457 scope, 455 security level classification, 455 system process criticality, 457 eradication and recovery, 459–462 capability and service restoration, 462 log verification, 462 patching, 461 permissions restoration, 461 reconstruction/reimaging, 460 resource reconstitution, 462 sanitization, 460, 679 secure disposal, 460–461 factors contributing to data criticality, 439–445 corporate information, 444–445 financial information, 441–442 high value assets, 441 intellectual property, 442–444, 667 PHI (protected health information), 55, 436, 440–441, 674 PII (personally identifiable information), 55, 436, 439–440, 674 SPI (sensitive personal information), 441, 680 overview of, 33 post-incident activities, 463–465 change control process, 464 evidence retention, 463 incident response plan updates, 464

incident summary reports, 464–465, 666 IOCs (indicators of compromise), 465 lessons learned reports, 463 monitoring, 465 preparation, 452–454 documentation of procedures, 453–454 testing, 453 training, 452–453 response coordination, 436–438 human resources, 437 internal versus external, 437 law enforcement, 437–438 legal, 436–437 public relations, 437 regulatory bodies, 438 senior leadership, 438 incident summary reports, 464–465, 666 indicator management, 666 indicators of compromise. See IOCs (indicators of compromise) inductance-enabled mobile payment, 102 industrial control systems (ICSs), 107–117 inference, 339, 667 information security continuous monitoring (ISCM), 232 information security management system. See ISMS (information security management system) information sharing and analysis communities, 15 Infrastructure as a Service (IaaS), 127, 667 Infrastructure as Code (IaC), 130, 667 infrastructure management, 242–246

access. See identity and access management active defense, 246–247, 653 asset management, 178–180 asset tagging, 178 critical assets, 42–43, 411–412, 456, 531 device-tracking technologies, 178–179 high value assets, 441 object-tracking and object-containment technologies, 179–180 CASB (cloud access security broker), 229, 657 certificate management, 242–246 CA (certificate authority), 243, 258, 285, 371 CAs (certificate authorities), 258, 285, 371 certificate-based authentication, 284–285 CRLs (certificate revocation lists), 244 cross-certification, 245 digital signatures, 245–246 OCSP (Online Certificate Status Protocol), 244 PKI (public key infrastructure), 198, 245, 284–285 RA (registration authority), 243 Verisign, 244 X.509 certificates, 243–244 change management, 201–208, 464 cloud. See cloud computing containerization, 208–209, 256 encryption. See encryption honeypots, 230, 666 logging. See log review

network architecture, 185–200 physical, 186–192 SDN (software-defined networking), 193–194, 681 serverless, 200 VPC (virtual private cloud), 195 VPNs (virtual private networks), 196–199 segmentation, 180–185, 458–459 definition of, 680 jumpboxes, 183–184, 668 physical, 180–181 scans, 56 system isolation, 184–185 virtual, 182–183 virtualization advantages and disadvantages of, 201–202 application streaming, 208 attacks and vulnerabilities, 203–206 digital forensics for, 497 hypervisors, 202–203 management interface, 205 terminal services, 208 VDI (virtual desktop infrastructure), 207 virtual networks, 205 VMs (virtual machines), 201–204, 497 infrastructure vulnerability scanner, 71, 496 inhibitors to remediation, 62–63 initialization vectors (IVs), 236 injection, SQL, 145–146, 682

input validation, 149, 275–276, 382, 667 insecure components, 165–166 insecure object reference, 163, 667 insider threats definition of, 12 intentional, 13 unintentional, 13 Institute of Electrical and Electronics Engineers (IEEE), 75, 281–282 integer overflow, 149–150, 667 integrated circuit cards (ICCs), 213 integrated intelligence, 413, 667 Integrated Intelligence Center (IIC), 413 Integrity (I) metric, 28 integrity, data, 233, 298, 456, 510, 667 Integrity Measurement Architecture (IMA), 310–311 Intel Software Guard Extensions (Intel SGX), 131, 307 Intel Trusted Execution Technology (Intel TXT), 305, 311 intellectual property, 442–444 copyright, 444, 659 definition of, 667 patents, 442–443, 673 security for, 444 trade secrets, 443, 684 trademarks, 443, 684 intelligence. See threat intelligence intelligence cycle, 13–14 intentional insider threats, 13 internal scans, 53, 667 internal stakeholders, 437

internal threat actors, 29–30 International Electrotechnical Commission (IEC), 556–559 International Organization for Standardization. See ISO (International Organization for Standardization) Internet Control Message Protocol (ICMP) sweeps, 476 Internet Key Exchange (IKEv2), 198, 667 Internet of Things (IoT), 103–104, 131, 668 Internet Security Association and Key Management Protocol (ISAKMP), 197, 668 intranets, 181 intrusion detection systems. See IDSs (intrusion detection systems) intrusion prevention systems. See IPSs (intrusion prevention systems) IOCs (indicators of compromise), 465 application-related, 480–481 anomalous activity, 480 Application log, 481, 654 introduction of new accounts, 480 service interruption, 481 unexpected outbound communication, 481 unexpected output, 480 definition of, 7, 25, 469, 667 host-related, 477–480 abnormal OS process behavior, 479 data exfiltration, 479 drive capacity consumption, 477 file system changes or anomalies, 479–480 malicious processes, 478 memory consumption, 477

processor consumption, 477 unauthorized changes, 479 unauthorized privileges, 479 unauthorized scheduled tasks, 480 unauthorized software, 477–478 indicator management, 7–9 OpenIOC (Open Indicators of Compromise), 9, 672 STIX (Structured Threat Information eXpression), 8, 682 TAXII (Trusted Automated eXchange of Indicator Information), 8–9, 684 network-related, 472–476 bandwidth consumption, 472 beaconing, 473, 656 common protocol over non-standard port, 476 peer-to-peer (P2P) communication, 473–474 rogue devices on network, 475, 678 scans/sweeps, 476 traffic spikes, 476 IoT (Internet of Things), 103–104, 131, 668 IP (Internet Protocol) IPsec, 197–199, 242 known-bad IP, 363 video systems, 109–111 iPhone, Find My iPhone, 257 IPSs (intrusion prevention systems), 57–58, 322 definition of, 668 log review, 357–360 rules, 386 Sourcefire, 358

IriusRisk, 406 ISAKMP (Internet Security Association and Key Management Protocol), 197, 668 ISCM (information security continuous monitoring), 232 ISMS (information security management system), 539 ISO (International Organization for Standardization), 556–559, 668 ISO/IEC 27001 standard, 539–541, 562, 668 ISO/IEC 27002 standard, 541 isolation, 459, 668, 683 ITIL framework, 561, 668 IVs (initialization vectors), 236

J Jad Debugger, 329 jailbreaking, 100, 678 Java vulnerabilities, 323, 337 JavaScript Object Notation (JSON), 131, 288 JavaScript vulnerabilities, 323, 337 Javasnoop, 329 John the Ripper, 491, 668 JSON (JavaScript Object Notation), 131, 288 Juggernaut, 158 jumpboxes, 183–184, 668

K Kaspersky, 102 KDC (key distribution center), 215–217 Kennedy-Kassebaum Act. See HIPAA (Health Insurance Portability and Accountability Act) kernel debugger, 457, 668

kernel proxy firewalls, 61, 385 key distribution center (KDC), 215–217 key management, 132–134, 233, 371. See also IKEv2 (Internet Key Exchange) DEK (data encryption key), 308 Kerberos, 215–217 key escrow, 133, 668 key stretching, 134, 669 PKI (public key infrastructure), 198, 236, 245, 284–285, 371, 675 principles of, 132–133 session keys, 234 storage, 300 wireless key loggers, 475, 687 keywords, sticky, 394 kill chain, 23, 669 Kindle, 521 Kiwi Syslog Server, 352 Knapsack, 236 knowledge factor authentication, 212, 213, 669 known threats, 10, 669 known-bad Internet Protocol, 363 known-good behavior, 333–334 KnTTools, 332, 493, 669

L L2TP (Layer 2 Tunneling Protocol), 197, 669 languages, scripting, 423 LANs (local-area networks), 181 launch, measured, 311

law enforcement, response coordination by, 437–438 Layer 2 Tunneling Protocol (L2TP), 197, 669 LDAP (Lightweight Directory Access Protocol), 217 leadership, response coordination by, 438 least privilege, principle of, 338 legacy systems, 62, 669 legal department, response coordination by, 436–437 legal holds, 497, 669 less command, 367 lessons learned reports, 463–464, 669 lexical analysis, 73 Liberty Identity Federation Framework (ID-FF), 221 lightweight code review, 74, 273 Lightweight Directory Access Protocol (LDAP), 217 Link-Local Multicast Name Resolution (LLMNR), 82 links, embedded, 372, 663 Linux dd, 393 Linux passwords, 567 live migration, 205 LLMNR (Link-Local Multicast Name Resolution), 82 local-area networks (LANs), 181 location factor authentication, 212 lockdown, configuration, 410, 659 log review, 230–232, 345–360 Application log, 481, 654 audit reduction tools, 231 cloud computing, 136 event logs, 346–350 firewall logs, 353–355 Check Point, 353–355

Windows Defender, 353 IDSs (intrusion detection systems), 357–360 IPSs (intrusion prevention systems), 357–360 Kiwi Syslog Server, 352 log analyzers, 394 log management, 230–231 log verification, 48, 462 log viewers, 499 logging vulnerabilities, 166 Measured Boot, 311 NIST SP 800–137, 232 proxy servers, 356–357 syslog, 350–352 WAF (web application firewall), 355–356 logic bombs, 325, 669 logical controls, 571 logical deployment diagrams, 186–192 LonWorks/LonTalk, 117, 669 Lost Android, 257

M MAC (mandatory access control), 228–229 MAC (media access control) addresses limiting, 394 sticky MAC, 394, 682 definition of, 669 overflow, 155 MAC (message authentication code), 239

McAfee, 111–112 machine learning, 426–427, 669 macro viruses, 324 magnitude, 535 maintenance, software, 269 maintenance accounts, 260 maintenance hooks, 260, 669 malware, 323–329 automated malware signature creation, 424, 655 botnets, 325, 473–474, 656 commodity malware, 14, 658 logic bombs, 325, 669 ransomware, 326, 676 reverse engineering, 75, 327–329, 457 definition of, 327, 677 isolation/sandboxing, 327, 668 software/malware, 327–328 tools for, 328–329 rootkits, 326 signatures, 391–392 spyware/adware, 325, 681 Trojan horses, 325, 684 viruses, 115, 323–324, 686 worms, 324, 687 MAM (mobile application management), 97 managed service accounts, 339 management interface, 205 management plane, 193, 669 managerial controls, 570, 669

mandatory access control (MAC), 228–229 Mandiant, 9 man-in-the-middle attacks, 154–155, 205, 669 mantraps, 108, 670 manual review, 229 many-to-one rules, 363 mapping vulnerabilities, 44 masking data, 516–517, 660 master boot record (MBR), 303 matrix, risk assessment, 537–538 maturity models, 561–562 CMMI (Capability Maturity Model Integration), 561, 657 definition of, 670 ISO/IEC 27001, 562 maximum tolerable downtime (MTD), 455, 670 MBR (master boot record), 303 McAfee, 102 MD (Message Digest) algorithm, 239–240, 499 MDM (mobile device management), 97, 670 mean time between failures (MTBF), 455, 670 mean time to repair (MTTR), 455, 670 measured boot, 310–311, 670 measurement, RTM (Root of Trust for Measurement), 298 Memdump, 332, 493, 670 memorandum of understanding (MOU), 62, 538, 670 memory, 329–332 consumption of, 409, 477 digital forensics for, 493–494 dumping, 332, 670 EEPROM (electrically erasable PROM), 266

EPROM (erasable programmable read-only memory), 266, 309 Flash, 309 overflows, 335 protection of, 329–330 RAM (random-access memory), 329 ROM (read-only memory), 309, 329 runtime data integrity check, 330, 678 runtime debugging, 332, 660, 678 secured, 330, 680 memory cards, 213 memory tables GPT (GUID partition table), 303 how to use, 582 Meraki, 98 MESCM (Microsoft Endpoint Configuration Manager), 570 message authentication code (MAC), 239 Message Digest (MD) algorithm, 239–240, 499 message digests, 238 messaging, text, 103 Metasploit, 87 metrics, risk assessment, 533 MFA (multifactor authentication), 211–214 authentication factors, 212 characteristic factors, 212, 214, 657 definition of, 670 identification versus authentication, 211–212 knowledge factors, 212, 213, 669 ownership factors, 212, 213, 672

microSD HSM (hardware security module), 302–303, 670 microservices, 288–289, 670 Microsoft Application Virtualization, 208 Azure, 87 BitLocker/BitLocker to Go, 300 Hyper-V, 203 Measured Boot, 311 MESCM (Microsoft Endpoint Configuration Manager), 570 SCAP (Security Content Automation Protocol), 74, 286 Sysinternals Autoruns, 393 migration, VMs (virtual machines), 204, 205 military classifications, 412 minimization of data, 515 mitigation. See remediation/mitigation MITRE ATT&CK, 21–22, 670 MMS (Multimedia Messaging Service), 103 mobile devices device-tracking technologies, 178–179 digital forensics for, 494, 499 mobile code, 323, 337 platforms for, 256–266 application, content, and data management, 257 application wrapping, 257, 654 configuration profiles and payloads, 256 containerization, 256 COPE (corporate-owned, personally enabled) policy, 256, 659 NIST SP 800–163 Rev 1, 258–259

POCE (personally owned, corporate-enabled) policy, 256 remote wiping, 257, 677 SCEP (Simple Certificate Enrollment Protocol), 258, 681 threats and vulnerabilities, 97–103 Android fragmentation, 101 BYOD (bring your own device) policies, 97–98, 656 device loss/theft, 100 geotagging, 100–101 malware, 102 MAM (mobile application management), 97 MDM (mobile device management), 97, 670 payment technologies, 101–102 push notification services, 100, 675 rooting/jailbreaking, 100, 678 SMS/MMS messaging, 103 storage concerns, 99–100 system apps, 98 unauthorized domain bridging, 103–104 unsigned apps, 98 USB (universal serial bus), 102 mobile hacking gear, 475 Mobile Wallet, 102 Mobility Security IDS/IPS, 475 Modbus, 117, 118, 670 models maturity CMMI (Capability Maturity Model Integration), 561, 657 ISO/IEC 27001, 562

threat, 29–32, 406–407 adversary capability, 29–30 attack vectors, 31–32, 412–413 impact, 32 probability, 32 total attack surface, 31, 684 Modicon, 118 Mojo Networks AirTight WIPS, 475 monitoring, 230–232, 465. See also log review cloud computing, 136 continuous, 414, 569–570 file systems, 339–340 vulnerabilities in, 166 MOUs (memorandum of understanding), 62, 538, 670 movie DRM (digital rights management), 520 MPLS (Multiprotocol Label Switching), 196 MSAB XRY, 494 MS-CHAP v1, 279–281 MS-CHAP v2, 279–281 MTBF (mean time between failures), 455, 670 MTD (maximum tolerable downtime), 455, 670 MTTR (mean time to repair), 455, 670 multifactor authentication. See MFA (multifactor authentication)

multihomed firewalls, 190–191, 671 multilevel security mode (MAC), 229 Multimedia Messaging Service (MMS), 103 multipartite viruses, 324 Multiprotocol Label Switching (MPLS), 196 music DRM (digital rights management), 520 mutation fuzzing, 75

N NAC (network access control), 387–389, 671. See also identity and access management National Institute of Standards and Technology. See NIST (National Institute of Standards and Technology) nation-state threat actors, 12 natural threats, 10 NBT-NS (NetBIOS Name Service), 82 NDAs (nondisclosure agreements), 228, 436, 443, 508, 516 near field communication (NFC), 101, 671 Nessus Network Monitor, 43 Nessus Professional, 43, 71, 671 NetBIOS Name Service (NBT-NS), 82 NetFlow, 24, 342–346, 671 NetScanTools Pro, 43 network access control (NAC), 387–389, 671. See also identity and access management network architecture, 185–200 firewalls. See firewalls physical, 186–192 SDN (software-defined networking), 193–194, 681 segmentation physical, 180–181

virtual, 182–183 serverless, 200 VPC (virtual private cloud), 195 VPNs (virtual private networks), 196–199 definition of, 195 IPsec, 197–199 remote-access, 196 site-to-site, 196 SSL/TLS, 199, 681 VPN concentrators, 196 network authentication protocols, 279–280 network interface cards (NICs), 58 network security analysis, 342–345 DGA (domain generation algorithm), 343, 662 digital forensics, 488–490 tcpdump, 490, 683 Wireshark, 488–490 DNS (domain name system) analysis, 342–343 flow analysis, 345, 664 intelligent networks, 427 IOCs (indicators of compromise), 472–476 bandwidth consumption, 472 beaconing, 473, 656 common protocol over non-standard port, 476 definition of, 667 peer-to-peer (P2P) communication, 473–474 rogue devices on network, 475, 678 scans/sweeps, 476

traffic spikes, 476 NetFlow analysis, 342–346 network capture tools, 394 network data loss prevention (DLP), 386 NVT (network vulnerability tests), 71 packet analysis, 342–343, 673 protocol analysis, 343, 675 URL (uniform resource locator) analysis, 342 network-based IDSs (NIDSs), 58 never execute (XN) bit, 307 next-generation firewalls (NGFWs), 383–384, 671 NFC (near field communication), 101, 671 NGFWs (next-generation firewalls), 383–384, 671 NICs (network interface cards), 58 NIDSs (network-based IDSs), 58 Nikto, 70, 671 NIST (National Institute of Standards and Technology), 427, 552–553 NIST 800–57, 671 NIST 800–128, 671 NIST Cybersecurity Framework version 1.1, 555–556, 671 NIST SP 800–53, 31, 671 NIST SP 800–128, 322–323 NIST SP 800–137, 232 NIST SP 800–163 Rev 1, 258–259 Nmap, 76–79, 671 Node.js, 423, 671 no-execute (NX) bit, 307 nondisclosure agreements (NDAs), 228, 436, 443, 508 nonessential resources, 456

non-hostile threat actors, 30 nonremovable storage, 99 non-repudiation, 233 Nook, 521 NOP (no-operation) slide, 147–149 normal resources, 456 note-taking, 581 notifications, push, 100, 675 null scans, 77, 671 numeric passwords, 565, 672 NVT (network vulnerability tests), 71 NX (no-execute) bit, 307

O Oakley, 198 OASIS (Organization for the Advancement of Structured Information Standards), 8, 220 objects definition of, 210 references to, 163, 667 tracking and containment technologies, 179–180 oclHashcat, 86, 672 OCR (Office for Civil Rights), 55, 511 OEM (original equipment manufacturer) documentation, 305, 543 /OFFBOOTDIR switch (SFC), 341 /OFFWINDIR switch (SFC), 341 Office for Civil Rights (OCR), 55, 511 Office of Cybersecurity and Communications (CS&C), 8 Off-the-Record (OTR) Messaging, 435 OllyDbg, 329

Omnipeek, 394 one-time passwords (OTPs), 565, 672 one-to-many rules, 363 one-way hashes, 238–239 Online Certificate Status Protocol (OCSP), 244, 672 online testing, 580 The Open Group Architecture Framework (TOGAF), 554, 683 Open Indicators of Compromise (OpenIOC), 9, 672 open message format, 236 Open Source Security Information Management (OSSIM), 365 Open Web Application Security Project (OWASP), 69, 136, 406 OpenID, 222–223, 672 OpenIOC (Open Indicators of Compromise), 9, 672 open-source intelligence (OSINT), 6, 672 OpenVAS, 43, 50, 71–72, 672 operational controls, 571, 672 operational threats, 10 Oracle Cloud Infrastructure, 87 Oracle VM VirtualBox, 203 orchestration, workflow, 422–423 Organization for the Advancement of Structured Information Standards (OASIS), 8, 220 organizational governance, 62, 539, 672 organized crime threat actors, 12, 405 original equipment manufacturer (OEM) documentation, 305, 543 OS (operating system) digital forensics for, 499 process behavior, 479 OCSP (Online Certificate Status Protocol), 244, 672 OSINT (open-source intelligence), 6, 672

OSSIM (Open Source Security Information Management), 365 OTPs (one-time passwords), 565, 672 OTR (Off-the-Record) Messaging, 435 outage impact, 531 outbound communication, unexpected, 481 output encoding, 276, 672 unexpected, 480 OutputDebugString Checker, 494 overflow attacks, 147–150, 335 buffer, 147–149, 656 definition of, 672 heap, 150, 665 integer, 149–150, 667 over-the-shoulder review, 74, 274 OWASP (Open Web Application Security Project), 69, 136, 406 ownership factor authentication, 212, 213, 672 ownership policy, 508

P P2P (peer-to-peer) communication, 9, 473–474 PaaS (Platform as a Service), 127, 674 packet analysis, 342–343, 673 packet-filtering firewalls, 59, 385 Pacu, 87–88, 673 pair programming, 74, 273 Palo Alto Networks AutoFocus threat feed, 426 PAP (Password Authentication Protocol), 279–281 parameterized queries, 285, 673 parasitic viruses, 324

parity bits, 237 passive enumeration, 82, 673 passive scans, 43–44 passive vulnerability scanners (PVSs), 43, 673 passphrase passwords, 565, 673 Password Authentication Protocol (PAP), 279–281 Password-Based Key Derivation Function 2 (PBKDF2), 134 passwords authentication period for, 566, 655 CAPTCHA, 154, 565 complexity of, 566, 658, 673 history, 566, 673 length of, 566, 673 life of, 566, 673 password-cracking utilities, 491–492, 499 policies for, 564–567 spraying, 152, 673 patching, 46, 48, 461, 673 patents, 442–443, 673 PATRIOT Act, 438, 513 pattern matching, 57 payloads, 256, 368 payment, mobile, 101–102 Payment Card Industry Data Security Standard (PCI DSS), 55–56, 441, 673 PayPal, 102 PBKDF2 (Password-Based Key Derivation Function 2), 134 PCI DSS (Payment Card Industry Data Security Standard), 55–56, 510, 673 PCR (platform configuration register) hash, 300

PDPs (policy decision points), 144, 674 Pearson Test Prep practice test software, 582 peer-to-peer (P2P) communication, 9, 473–474 peer-to-peer botnets, 474, 673 PEframe, 393 PEnE (Policy Enforcement Engine), 298 PEPs (policy enforcement points), 144, 674 peripheral-enabled payments, 102 Perl, 423, 673 permissions definition of, 381, 673 restoration of, 461 verification of, 48 persistent XSS (cross-site scripting), 161, 673 personal firewalls, 322 personal health information (PHI), 510 Personal Information Protection and Electronic Documents Act (PIPEDA), 512, 674 personally identifiable information (PII), 55, 436, 439–440, 508, 509, 674 personally owned, corporate-enabled (POCE) policy, 256 PeStudio, 393 PGP (Pretty Good Privacy), 134, 300 PHI (protected health information), 55, 436, 440–441, 510, 674 phishing/pharming, 335, 369–370, 674 physical access control, 106–109 devices, 107 facilities, 107–109 systems, 106–107 physical controls, 674 physical network architecture, 186–192

physical segmentation, 180–181 physical threats, 10 PIA (privacy impact assessment), 508 PII (personally identifiable information), 55, 436, 439–440, 508, 509, 674 ping sweeps, 79, 476, 674 PIPEDA (Personal Information Protection and Electronic Documents Act), 512, 674 piping, 367, 674 PKI (public key infrastructure), 198, 236, 245, 284–285, 371, 675 planning software development, 267 plans, communication, 435–436, 536–537. See also response coordination Platform as a Service (PaaS), 127, 674 platform configuration register (PCR) hash, 300 platforms, 256–266 client/server, 263 embedded systems, 105–265 firmware, 266 mobile, 256–266 application, content, and data management, 257 application wrapping, 257, 654 configuration profiles and payloads, 256 containerization, 256 COPE (corporate-owned, personally enabled) policy, 256, 659 NIST SP 800–163 Rev 1, 258–259 POCE (personally owned, corporate-enabled) policy, 256 remote wiping, 257, 677

SCEP (Simple Certificate Enrollment Protocol), 258, 681 SoC (system-on-chip), 105, 265 central security breach response, 265–266 secure booting, 265 web application, 260–262 click-jacking, 262, 657 CSRF (cross-site request forgery), 261–262, 660 maintenance hooks, 260 time-of-check/time-of-use attacks, 260, 684 PLCs (programmable logic controllers), 115, 675 PLD (programmable logic device), 105 POCE (personally owned, corporate-enabled) policy, 256 PoE (Power over Ethernet), 109 Point-to-Point Tunneling Protocol (PPTP), 197, 674 policies account management, 568–569 AUP (acceptable use policy), 563–564, 653 BYOD (bring your own device), 97–98, 656 code of conduct/ethics, 563, 658 continuous monitoring, 569–570 data classification, 411 data ownership, 508, 567 data retention, 509, 567–568 definition of, 562 Group Policy, 184 mobile, 256 password, 564–567 work product retention, 570, 687 policy decision points (PDPs), 144, 674

Policy Enforcement Engine (PEnE), 298 policy enforcement points (PEPs), 144, 674 polymorphic viruses, 324 port security mac-address command, 394 ports non-standard, common protocols over, 476 scans of, 476, 674 security, 394, 674 enabling, 394 MAC addresses, limiting, 394 sticky MAC, 394, 682 post-incident activities, 463–465 change control process, 464 evidence retention, 463 incident response plan updates, 464 incident summary reports, 464–465, 666 IOCs (indicators of compromise), 465 lessons learned reports, 463–464 monitoring, 465 Power over Ethernet (PoE), 109 PowerShell, 423 PPTP (Point-to-Point Tunneling Protocol), 197, 674 Pr (Privileges Required) metric, 27 precise methods, 386 premises-based scanning, 495–496 preparation, exam. See exam preparation process preparation, in incident response process, 452–454 documentation of procedures, 453–454 testing, 453

training, 452–453 prescriptive frameworks, 555–562 ISO 27000 Series, 556–559 ITIL, 561, 668 maturity models, 561–562, 670 CMMI (Capability Maturity Model Integration), 561, 657 definition of, 670 ISO/IEC 27001, 562 NIST Cybersecurity Framework version 1.1, 555–556 SABSA, 559–560, 679 preshared secret, 258 Pretty Good Privacy (PGP), 134, 300 preventative controls, 572, 674 Principles on Privacy (EU), 514 prioritization of risk, 537–539 engineering tradeoffs, 538–539 ISO/IEC 27001 standard, 539–541 ISO/IEC 27002 standard, 541 risk assessment matrix, 537–538 security controls, 538 privacy. See data privacy privacy impact assessment (PIA), 508 private cloud, 126, 675 private VLANs (PVLANs), 458 PrivateCore, 311 privilege management, 211 privilege elevation, 205 privilege escalation, 152 privileged accounts, 211

unauthorized privilege, 479 Privileges Required (Pr) metric, 27 proactive threat indicators (PTIs), 328 probability, 32, 535 procedures, 562. See also policies digital forensics, 497–499 EnCase Forensic, 498 forensic investigation suites, 498–499, 664 Sysinternals, 498 documentation, 453–454 process behavior, abnormalities in, 479 Process Explorer, 408, 675 process isolation, 459 processing. See secure processing processor consumption, 477 processor security extensions, 307, 675 profiles, mobile, 256 programmable logic controllers (PLCs), 115, 675 programmable logic device (PLD), 105 proprietary systems, 63, 675 proprietary/closed-source intelligence, 6, 675 protected health information (PHI), 55, 436, 440–441, 674 protocol analysis, 343, 675 protocol anomaly-based IDSs, 58 Prowler, 87, 675 proximity readers, 108, 675 proxy firewalls, 60 proxy server logs, 356–357 PRTG Network Monitor, 472 PSH flag, 76 PTIs (proactive threat indicators), 328

public cloud, 126, 675 Public Company Accounting Reform and Investor Protection Act. See SOX (Sarbanes-Oxley Act) public key infrastructure (PKI), 198, 236, 245, 284–285, 371, 675 public relations, response coordination by, 437 /PURGECACHE switch (SFC), 341 purging data, 461, 675 purpose limitation, 515 push notification services, 100, 675 PVLANs (private VLANs), 458 PVSs (passive vulnerability scanners), 43, 673 Python, 423, 676

Q QRadar, 364 qualitative risk analysis, 534, 676 Qualys, 496, 676 quantitative risk analysis, 534, 676 queries, 366–367 parameterized, 285, 673 writing, 676 piping, 367, 674 scripts, 366, 679 Sigma, 366 string searches, 366, 682

R RA (registration authority), 243, 677 race conditions, 164, 260, 676 radio frequency identification (RFID), 180, 521, 676

RADIUS (Remote Authentication Dial-in User Service), 281–282, 389–391 RAM (random-access memory), 329 ransomware, 326, 676 RBAC (role-based access control), 224–225, 678 RC4, 235 RC5, 235 RC6, 235 read-only memory (ROM), 309, 329 real user monitoring (RUM), 69, 74, 286, 676 real-time operating systems (RTOSs), 105, 676 Reaver, 84–86, 676 reconstruction/reimaging, 460 recoverability, 532, 676 recovery, 459–462 capability and service restoration, 462 log verification, 462 patching, 461 permissions restoration, 461 priorities, identification of, 531–532 reconstruction/reimaging, 460 resource reconstitution, 462 sanitization, 460, 679 secure disposal, 460–461 time requirements, 455–456 recovery point objective (RPO), 455, 676 recovery time objective (RTO), 455, 677 red teams, 542, 677 reflective XSS (cross-site scripting), 161, 677 registration authority (RA), 243, 677

Registry/configuration tools, 393 regulatory audits/assessments, 573–574 regulatory bodies, response coordination by, 438 relationships, identification of, 210–211 release, software, 269 remediation/mitigation, 45, 459–462, 538 capability and service restoration, 462 cloud computing, 177–178 compensating controls, 47, 658 configuration baseline, 45–46, 659 hardening, 46–47, 665, 683 inhibitors to, 62–63 log verification, 462 patching, 46, 48, 461, 673 permissions restoration, 461 reconstruction/reimaging, 460 resource reconstitution, 462 risk acceptance, 47, 677 sanitization, 460, 679 secure disposal, 460–461 verification of, 47 Remote Authentication Dial-in User Service (RADIUS), 281– 282, 389–391 remote code execution, 150, 677 remote terminal units (RTUs), 115, 677 remote virtual desktops model, 207 remote wiping, 257, 677 remote-access VPNs (virtual private networks), 196 removable storage, 99 reports

incident summary, 464–465, 666 lessons learned, 463–464, 669 reporting requirements, 436 RTR (Root of Trust for Reporting), 298 SOC (Service Organization Control) reports, 574, 681 REpresentational State Transfer (REST), 131, 677 reputational scores, 24 requirements gathering, 267 requirements stage, intelligence life cycle, 13 research, threat, 23–29. See also IOCs (indicators of compromise) behavioral analysis, 24–25 reputational scores, 24 resources critical, 531 function criticality levels, 456 identification of, 210 reconstitution of, 462 requirements for, 531 Responder, 82, 677 response coordination, 436–438 human resources, 437 internal versus external, 437 law enforcement, 437–438 legal, 436–437 public relations, 437 regulatory bodies, 438 senior leadership, 438 responsive controls, 677

REST (REpresentational State Transfer), 131, 288, 677 restoration of capabilities and services, 462 of permissions, 461 of resources, 462 retention standards, 510 reverse engineering, 75, 327–329, 457 definition of, 327, 677 isolation/sandboxing, 327, 668 software/malware, 327–328 tools for, 328–329 /REVERT switch (SFC), 341 RFID (radio frequency identification), 180, 676 RFProtect, 475 risk, 29. See also threat intelligence acceptance of, 47, 538, 677 assessment of, 532–534 definition of, 677 goals of, 532–533 metrics, 533 qualitative risk analysis, 534, 676 quantitative risk analysis, 534, 676 risk assessment matrix, 537–538 avoidance of, 47, 538, 678 BIA (business impact analysis), 530–532 critical processes and resources, 531 definition of, 657 outage impact and downtime, 531 recovery priorities, 531–532

resource requirements, 531 calculation of, 534–535 cloud computing, 177 communication of risk factors, 536–537 documented compensating controls, 541–542 mitigation of. See remediation/mitigation overview of, 33 prioritization of, 537–539 engineering tradeoffs, 538–539 ISO/IEC 27001 standard, 539–541 ISO/IEC 27002 standard, 541 risk assessment matrix, 537–538 security controls, 538 of scans/sweeps, 49–62 supply chain assessment, 543–544 hardware source authenticity, 544 vendor due diligence, 543 systems assessment, 539–541 training and exercises, 542–543 transfer of, 47, 538, 678 risk-based frameworks, 552–554 COBIT, 553, 657 NIST SP 800–55 Rev 1, 552–553 TOGAF (The Open Group Architecture Framework), 554 rogue access points, 336, 678 rogue devices, 475, 678 rogue endpoints, 336 role-based access control (RBAC), 224–225, 678 ROM (read-only memory), 309, 329

rooting, 100, 678 rootkits, 159–160, 326, 678 RoTs (Roots of Trust), 298–299 definition of, 678 HSM (hardware security module), 302 microSD HSM (hardware security module), 302–303 TPM (Trusted Platform Module), 299–300 VTPM (virtual Trusted Platform Module), 300–301 RPO (recovery point objective), 455, 676 RSA, 236, 371, 387 RST flag, 76 RTI (Root of Trust for Integrity), 298 RTM (Root of Trust for Measurement), 298 RTO (recovery time objective), 455, 677 RTOSs (real-time operating systems), 105, 676 RTR (Root of Trust for Reporting), 298 RTS (Root of Trust for Storage), 298 RTUs (remote terminal units), 115, 677 RTV (Root of Trust for Verification), 298 Ruby, 423, 678 rules configuration of, 386 rule-based IDSs, 58 SIEM (security information and event management) system, 362–363 writing, 392 RUM (real user monitoring), 69, 74, 286, 676 runtime data integrity check, 330, 678 runtime debugging, 332, 493–494, 660, 678

S S (Scope) metric, 27 SaaS (Software as a Service), 21, 71, 127, 495, 681 SABSA framework, 559–560, 679 safe harbor, 514 Safe Harbor Privacy Principles, 514 Safe Mode, 477 SafeBack Version 2.0, 393 safeguards, 47 SAML (Security Assertion Markup Language), 221–222, 287, 680 Samsung eFuse, 303 sandbox tools, 393 Sandboxie, 392 sandboxing, 327, 392–394, 668 sanitization, 460, 679 Sarbanes-Oxley Act (SOX), 55, 511, 679 SAS (Statement on Auditing Standards), 573 SCADA (Supervisory Control and Data Acquisition), 114–117 /SCANBOOT switch (SFC), 341 /SCANFILE switch (SFC), 341 /SCANNOW switch (SFC), 341 /SCANONCE switch (SFC), 341 scans/sweeps, 49–62, 476 active versus passive, 43–44 cloud-based, 495–496 credentialed versus non-credentialed, 51, 660, 671, 685 criteria for, 53–62 data types, 53 regulatory requirements, 55–56 segmentation, 56 sensitivity levels, 54 technical constraints, 53

segmentation, 56 sensitivity levels, 54 technical constraints, 53 workflow, 53 firewalls, 59–62 architecture of, 61–62, 188–192 types of, 59–61 HIDSs (host-based IDSs), 58 host scanning, 79 hping, 80–82 IDSs (intrusion prevention systems), 57–58 infrastructure vulnerability, 71–496 internal versus external, 53, 663, 667 NIDSs (network-based IDSs), 58 Nmap, 76–79 null, 77, 671 ping, 79, 674 port, 476, 674 regulatory requirements, 55–56 risks associated with, 49–62 scope, 49–50 scope of, 679 server-based versus agent-based, 52 verification of, 48 vulnerability feeds, 49, 686 web application, 69–70 Arachni, 70–496, 654 Burp Suite, 69, 656

Nessus Professional, 71 Nikto, 70, 671 OpenVAS, 71–72 OWASP Zed Attack Proxy (ZAP), 69 Qualys, 496, 676 types of, 69 SCAP (Security Content Automation Protocol), 44, 49, 426–427, 680, 682 SCEP (Simple Certificate Enrollment Protocol), 258, 681 scheduled tasks, 480 Schneider Electric, 118 scientific method, 404–405 SCOM (System Center Operations Manager), 69, 74, 286 scope of incidents, 455 of scans, 49–50, 679 Scope (S) metric, 27 ScoutSuite, 87, 679 SCP (Secure Copy Protocol), 199 screened host firewalls, 192, 679 screened subnets, 62, 679 screensavers, 276 script viruses, 324 scripting, 366, 423, 679 scrypt, 134 SCT (Security Compliance Toolkit), 570 SD Elements, 407 SDLC (software development life cycle), 72–73, 267–270, 681 SDN (software-defined networking), 193–194, 681 SDS (software-defined storage), 194 sealing, 299

searches, string, 366, 682 secret data, 412 Secure Boot, 303, 310–311, 679 Secure Copy Protocol (SCP), 199 secure enclave, 307, 679 Secure European System for Applications in a Multivendor Environment (SESAME), 219, 679 Secure Hash Algorithm (SHA), 240, 499 Secure HTTP (S-HTTP), 241–242 secure message format, 236 secure processing atomic execution, 307 definition of, 305, 679 processor security extensions, 307, 675 secure enclave, 307, 679 TE (Trusted Execution), 305 Secure Shell (SSH), 183, 242, 679 Secure Sockets Layer (SSL)/Transport Layer Security (TLS), 199, 241, 681 Secure View 4, 494 secured memory, 330, 680 securiCAD, 407 Security Assertion Markup Language (SAML), 221–222, 287, 680 security awareness training, 452–453 Security Compliance Toolkit (SCT), 570 Security Content Automation Protocol (SCAP), 44, 49, 426–427, 680, 682 security controls, 538–539 security engineering, 33, 680

security information and event management system. See SIEM (security information and event management) system security level classification, 455 security parameter index (SPI), 198 security regression testing, 273, 680 SecurStar DriveCrypt, 300 segmentation, 180–185, 458–459 definition of, 680 jumpboxes, 183–184, 668 physical, 180–181 scans, 56 system isolation, 184–185 virtual, 182–183 self-encrypting drives, 308 Sender Policy Framework (SPF), 369, 680 senior leadership, response coordination by, 438 sensitive personal information (SPI), 441, 680 sensitivity of data, 165, 411, 412, 439, 680 sensors, 111, 115 server-based application virtualization, 208 server-based scans, 52 serverless architecture, 128–129 servers authentication, 281, 655 802.1X, 389 RADIUS (Remote Authentication Dial-in User Service), 389–391 TACACS+ (Terminal Access Controller Access Control System Plus), 389–391 exploit techniques, 337–338

proxy, 356–357 service interruption, 481 Service Organization Control (SOC) reports, 574 Service Provisioning Markup Language (SPML), 220, 680 service-level agreements (SLAs), 62, 515, 539, 680 service-oriented architecture (SOA), 287, 680 services cloud service models, 127–128 exploit techniques, 338–339 push notification, 100 restoration of, 462 SESAME, 219, 679 session hijacking, 158, 681 session keys, 234 session management, 276–277 SFC (System File Checker), 340–341, 479 SFC command, 340–341 SGX (Software Guard Extensions), 131 SHA (Secure Hash Algorithm), 240, 371, 499 “sheep dip” computers, 393 Shibboleth, 224, 681 Short Message Service (SMS), 103 shoulder surfing, 336 S-HTTP (Secure HTTP), 241–242 side-channel attacks, 106 SIEM (security information and event management) system, 48, 166, 361–365, 426, 458 agent-based collection, 362 agentless collection, 362 dashboard, 363–365

definition of, 680 known-bad Internet Protocol, 363 rule writing, 362–363 Sigma, 366 signatures digital, 245–246, 371, 661 malware, 391–392 signature blocks, 372 signature-based IDSs, 57 Silent Runners.vbs, 393 Simple Certificate Enrollment Protocol (SCEP), 258, 681 Simple Object Access Protocol (SOAP), 131, 220, 287, 681 single event rules, 363 single loss expectancy (SLE), 534, 681 single sign-on (SSO), 214–217 advantages and disadvantages of, 214–215 definition of, 681 Kerberos, 215–217 sinkholing, 391, 681 site accreditation. See accreditation site-to-site VPNs (virtual private networks), 196 Skylake, 131 SLA (service-level agreement), 62, 515, 539, 680 SLE (single loss expectancy), 534, 681 smart cards, 213 smart cities, 104. See also IoT (Internet of Things) smart homes, 104. See also IoT (Internet of Things) SMS (Short Message Service), 103 snooping, DHCP, 154, 661 Snort, 359

SOA (service-oriented architecture), 287, 680 SOAP (Simple Object Access Protocol), 131, 220, 287, 681 SOC (Service Organization Control) reports, 574, 681 SoC (system-on-chip), 105, 265 central security breach response, 265–266 definition of, 683 secure booting, 265 SOCKS firewall, 60 Software as a Service (SaaS), 21, 71, 127, 495, 681 software assessment methods, 72–76, 272–275 code review, 273–274, 275, 286–287 dynamic analysis, 74, 286, 662 fuzzing, 75–76, 665 reverse engineering, 75 SDLC (software development life cycle), 72–76 security regression, 273, 680 security testing, 274–275 static analysis, 73–74, 286, 682 stress testing, 272–273 user acceptance testing, 272, 685 software assurance. See also software assessment methods DevOps, 270–272 DevSecOps, 270–272 dynamic analysis, 286 microservices, 288–289, 670 platforms, 256–266 client/server, 263 embedded systems, 105–265 firmware, 266

mobile, 256–266 SoC (system-on-chip), 105, 265 web application, 260–262 REST (REpresentational State Transfer), 288 SAML (Security Assertion Markup Language), 287 SDLC (software development life cycle), 267–270 secure coding, 275–285 authentication, 277–285 data protection, 285 input validation, 275–276, 382 output encoding, 276 parameterized queries, 285 session management, 276–277 SOA (service-oriented architecture), 287, 680 SOAP (Simple Object Access Protocol), 287 unauthorized software, 477–478 software development life cycle (SDLC), 72–73, 267–270, 681 Software Guard Extensions (SGX), 131 Software Verify, OutputDebugString Checker, 494 software-defined networking (SDN), 193–194, 681 software-defined storage (SDS), 194 softwareverify, 332 Sophos SafeGuard, 300 Sourcefire, 358 source/subscriber model, 9 sovereignty, 514–515, 660 SOX (Sarbanes-Oxley Act), 55, 511, 679 spam, 370 spear phishing, 22, 369

SPF (Sender Policy Framework), 369, 680 SPI (security parameter index), 198 SPI (sensitive personal information), 441, 680 Splunk, 364 SPML (Service Provisioning Markup Language), 220, 680 spoofing ARP, 154 e-mail, 368 switch, 156–158 sprawl, VM, 204 spyware, 325, 681 SQL (Structured Query Language) injection, 145–146, 682 SRK (storage root key), 300 SSAE (Statement on Standards for Attestation Engagements), 573 SSH (Secure Shell), 183, 242, 679 SSL (Secure Sockets Layer)/TLS (Transport Layer Security), 199, 241, 681 SSO (single sign-on), 214–217 advantages and disadvantages of, 214–215 definition of, 681 Kerberos, 215–217 stakeholders, communication with communication plans, 435–436 response coordination, 436–438 standard word passwords, 564, 682 state sponsors, 12, 405 stateful firewalls, 59 stateful matching, 57 Statement on Auditing Standards (SAS), 573

Statement on Standards for Attestation Engagements (SSAE), 573 static analysis, 73–74, 286, 682 static passwords, 564, 682 statistical anomaly-based IDSs, 58 stealth viruses, 324 steganography, 510 step-up authentication, 277 sticky keyword, 394 sticky MAC, 394, 682 STIX (Structured Threat Information eXpression), 8, 682 storage. See also cloud computing nonremovable, 99 removable, 99 RTS (Root of Trust for Storage), 298 SDS (software-defined storage), 194 uncontrolled, 99 vulnerabilities with, 99–100 storage keys, 300 storage root key (SRK), 300 strcpy function, 168, 682 stream-based ciphers, 234–235, 682 stress testing, 272–273, 682 stretching, key, 134 string searches, 366, 682 Structured Query Language (SQL) injection, 145–146, 682 Structured Threat Information eXpression (STIX), 8, 682 study trackers, 580 Stuxnet virus, 115 subnets, screened, 62, 679 sudo command, 81

Supervisory Control and Data Acquisition (SCADA), 114–117 supplicants, 281, 389, 682 supply chain assessment, 543–544 hardware source authenticity, 544 vendor due diligence, 543 Susteen Secure View 4, 494 swatch, 166 sweeps. See scans/sweeps switches rogue, 475 spoofing, 156–158 switchport mode access command, 157 switchport mode trunk command, 157 switchport port security command, 394 switchport port security maximum 2 command, 394 switchport port security violation restrict command, 394 Symantec Endpoint Protection, 387 symmetric algorithms, 233–236, 682 block ciphers, 235–236, 656 stream-based ciphers, 234–235, 682 SYN flag, 76 SYN flood, 80, 490, 682 synthetic transaction monitoring, 69, 74, 286, 682 Sysinternals, 408, 498, 683 syslog, 350–352 Syslog Server (Kiwi), 352 system apps, 98 system behavior, 333–339 anomalous behavior, 334–335 exploit techniques, 335–339

file system, 339–340 rogue access points, 336, 678 rogue endpoints, 336 servers, 337–338 services, 338–339 social engineering, 335–336 known-good behavior, 333–334 System Center Operations Manager (SCOM), 69, 74, 286 System File Checker (SFC), 340–341, 479 system hardening, 410 system high security mode (MAC), 228 system isolation, 184–185 system lockdown, 410 system process criticality, 457 system-on-chip. See SoC (system-on-chip) systems assessment, 539–541 Systems Manager, 98

T tables, memory GPT (GUID partition table), 303 how to use, 582 tabletop exercises, 543, 683 TACACS+ (Terminal Access Controller Access Control System Plus), 281–282, 389–391 tagging assets, 178, 654 taint analysis, 73 Task Manager, 407, 478 tasks, unauthorized, 480

TAXII (Trusted Automated eXchange of Indicator Information), 8–9, 684 tcpdump, 490, 683 TE (Trusted Execution), 305 teams, hunt, 247, 666 technical controls, 516–521, 571, 683 technical threats, 10 telemetry system, 115, 683 TEMPEST, 337 Tenable PVS, 43 Terminal Access Controller Access Control System Plus (TACACS+), 281–282, 389–391 terminal services, 208 terrorist group threat actors, 12, 405 test data method, 269 test preparation. See exam preparation process testing, 274–275, 453 security regression, 273, 680 stress, 272–273, 682 test data method, 269 user acceptance, 272, 685 text messaging, 103 TGT (ticket-granting ticket), 218 threat actors categories of, 9–10, 12–13 definition of, 12, 683 hostile versus non-hostile, 30 identification of, 405–406 internal versus external, 29–30 threat classification, 9–11

APTs (advanced persistent threats), 11, 653 known threats, 10, 669 unknown threats, 10, 685 zero-day vulnerabilities, 10–11, 687 threat feed, 426, 683 threat hunting. See also threat actors attack surface area, reduction of, 409–410 configuration lockdown, 410, 659 system hardening, 410 attack vectors, 412–413 critical assets, bundling, 411–412 commercial business classifications, 411 data classification policy, 411 distribution of critical assets, 412 military and government classifications, 412 sensitivity and criticality, 411 detection capabilities, improvement of, 413–414 hypotheses, 404–405 integrated intelligence, 413, 667 tactics for, 406–409 executable process analysis, 407–408, 663 hunt teaming, 406 memory consumption, 409 threat models, 406–407 threat intelligence. See also attacks; vulnerability management attack frameworks definition of, 21, 655 Diamond Model of Intrusion Analysis, 22–23, 661

kill chain, 23, 669 MITRE ATT&CK, 21–22, 670 definition of, 683 intelligence sources, 6–7, 683 accuracy of, 7 confidence levels for, 7, 659 intelligent networks, 427 OSINT (open-source intelligence), 6, 672 proprietary/closed-source intelligence, 6, 675 relevance of, 7 timeliness of, 7, 684 sharing, 33–34 threat modeling, 29–32, 683 adversary capability, 29–30 attack vectors, 31–32, 412–413 impact, 32 probability, 32 total attack surface, 31, 684 threat research, 23–29. See also IOCs (indicators of compromise) behavioral analysis, 24–25 CVSS (Common Vulnerability Scoring System), 25–29, 44, 412 reputational scores, 24 Threat Modeling Tool, 406 ThreatConnect, 426 ThreatModeler, 406 ThreatQuotient, 426 throughput rate, 282

ticket-granting ticket (TGT), 218 timeliness, 7, 684 time-of-check/time-of-use attacks, 260, 684 TLP (Traffic Light Protocol), 25 TLS (Transport Layer Security), 117, 199, 241, 681 TOGAF (The Open Group Architecture Framework), 554, 683 token devices, 213 tokenization, 517, 684 tool-assisted review, 74, 274 top secret data, 412 total attack surface, 31, 684 TPM (Trusted Platform Module), 299–300, 684 tracking rules, 363 trade secrets, 443, 684 trademarks, 443, 684 traditional botnets, 473, 684 traffic spikes in, 476 traffic anomaly-based IDSs, 58 Traffic Light Protocol (TLP), 25 training/education, 452–453, 542–543 transfer of risk, 47, 538, 678 transitive rules, 363 transport encryption, 240–242 Transport Layer Security (TLS), 117, 199, 241, 681 trapdoors, 338, 656 trend analysis, 320, 684 Trend Micro Maximum Security, 300 trending rules, 363 Tripwire, 340 Trojan horses, 325, 684

true negatives, 44, 684 true positives, 44, 684 Trusted Automated eXchange of Indicator Information (TAXII), 8–9, 684 Trusted Execution (TE), 305 trusted firmware updates, 308–309 attestation, 300, 310–311, 655 IMA (Integrity Measurement Architecture), 311 measured boot, 310–311, 670 measured launch, 311 Trusted Foundry program, 304–305, 544 Trusted Platform Module (TPM), 299–300, 684 trusted relationships, 22 trusted third-party federation model, 219 Twofish, 235 Type 1 hypervisors, 203, 684 Type 2 hypervisors, 203, 684

U UAT (user acceptance testing), 272 UEBA (user and entity behavior analytics), 24, 341, 685 UEFI (Unified Extensible Firmware Interface), 303–304, 685 UI (User Interaction) metric, 27 unauthorized access, 183 unauthorized changes, 479 unauthorized privilege, 479 unauthorized scheduled tasks, 480 unauthorized software, 477–478 unclassified data, 412 uncontrolled storage, 99 uncredentialed scans, 476, 685

Unicode, 276 Unified Extensible Firmware Interface (UEFI), 303–304, 685 unified threat management (UTM), 383 uniform resource locators. See URLs (uniform resource locators) unintentional insider threats, 13 United States Federal Sentencing Guidelines, 512, 685 Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism. See USA PATRIOT Act universal serial bus (USB), 102 unknown threats, 10, 685 unsigned apps, 98 updates exam, 651–652 trusted firmware, 308–309 URG flag, 76 urgent resources, 456 URLEncode, 261 URLs (uniform resource locators) analysis of, 342 encoding of, 276 U.S. Government Configuration Baseline (USGCB), 323 USA PATRIOT Act, 438, 513, 685 USB (universal serial bus), 102 USB OTG (USB On-The-Go), 99–100, 685 user acceptance testing, 272, 685 user and entity behavior analytics (UEBA), 24, 341, 685 user identification, 210 User Interaction (UI) metric, 27 usermode debugger, 457, 685

V Valgrind, 329 VDI (virtual desktop infrastructure), 207, 686 vectors, attack, 31–32, 412–413 vehicles, 111–113 CAN (Controller Area Network) bus, 112, 659 drones, 113 vendor due diligence, 543 verification testing, 269 /VERIFYFILE switch (SFC), 341 /VERIFYONLY switch (SFC), 341 Verisign, 244 vertical privilege escalation, 152 Vetting the Security of Mobile Applications, 258–259 video game DRM (digital rights management), 520 video systems, IP, 109–111 virtual SAN, 686 virtual TPM, 686 virtualization advantages and disadvantages of, 201–202 application streaming, 208 attacks and vulnerabilities, 203–206 digital forensics for, 497 hypervisors, 202–203 management interface, 205 terminal services, 208 VDI (virtual desktop infrastructure), 207, 686 virtual private networks. See VPNs (virtual private networks) virtual SAN, 686

virtual segmentation, 182–183 virtual TPM, 686 VLANs (virtual LANs), 156–158, 182–183, 458 VMs (virtual machines) attacks and vulnerabilities, 201–204 digital forensics for, 497 VPC (virtual private cloud), 195, 686 VPNs (virtual private networks), 196–199 definition of, 195, 686 IPsec, 197–199 remote-access, 196 site-to-site, 196 SSL/TLS, 199, 681 VPN concentrators, 196 VSAN (virtual storage area network), 194 VTPM (virtual Trusted Platform Module), 300–301 viruses, 115, 323–324, 686 VLANs (virtual LANs), 182–183, 458 advantages and disadvantages of, 156 VLAN-based attacks, 156–158 VMs (virtual machines) attacks and vulnerabilities, 201–204 digital forensics for, 497 VMware, 311 VMware vSphere, 203 VMware Workstation, 203 volatile memory, 329 VPC (virtual private cloud), 195, 686 VPNs (virtual private networks), 196–199

definition of, 195, 686 IPsec, 197–199 remote-access, 196 site-to-site, 196 SSL/TLS, 199, 681 VPN concentrators, 196 VSAN (virtual storage area network), 194 vSphere, 203 VTPM (virtual Trusted Platform Module), 300–301 vulnerability assessment output. See also vulnerability management cloud infrastructure assessment tools, 86–88 Pacu, 87–88, 673 Prowler, 87, 675 ScoutSuite, 87, 679 enumeration, 76–82 active versus passive, 82, 653, 673 definition of, 76 host scanning, 79, 666 hping, 80–82 Nmap, 76–79, 671 Responder, 82, 677 infrastructure vulnerability scanners, 71–496 software assessment tools, 72–76 dynamic analysis, 74, 286, 662 fuzzing, 75–76, 665 reverse engineering, 75 SDLC (software development life cycle), 72–73 static analysis, 73–74, 286, 682

web application scanners, 69–70 wireless assessment tools, 82–86 vulnerability feeds, 49, 686 vulnerability management. See also data analysis definition of, 686 firewalls. See firewalls identification, 41–44 active versus passive scanning, 43–44 assessment goals, 41–42 asset criticality, 42–43, 654 mapping and enumeration, 44 overview of, 33 remediation/mitigation, 45 compensating controls, 47, 658 configuration baseline, 45–46, 659 hardening, 46–47, 665, 683 inhibitors to, 62–63 patching, 46, 48, 673 risk acceptance, 47, 677 verification of, 47 scans/sweeps, 49–62, 476 cloud-based, 495–496 credentialed versus non-credentialed, 51, 660 criteria for, 53–62 internal versus external, 53, 663, 667

risks associated with, 49–62 scope, 49–50 server-based versus agent-based, 52 verification of, 48 vulnerability feeds, 49, 686 for specialized technology automation systems, 109 embedded systems, 105–264, 663 FPGA (field programmable gate array), 105–106 HVAC controllers, 111 ICS (industrial control system), 114 IoT (Internet of Things), 103–104, 668 IP video systems, 109–111 mobile devices, 97–103 Modbus, 117, 118, 670 physical access control, 106–109 RTOSs (real-time operating systems), 105, 676 SCADA (Supervisory Control and Data Acquisition), 114–117 sensors, 111 SoC (system-on-chip), 105, 265–266, 683 vehicles and drones, 111–113 workflow and process automation systems, 113 validation, 44–48 virtualization, 203–206 vulnerability assessment output cloud infrastructure assessment tools, 86–88

enumeration, 76–82 infrastructure vulnerability scanner, 71–496 software assessment tools, 72–76 web application scanners, 69–70 wireless assessment tools, 82–86 vulnerability types, 163–168 broken authentication, 164–165 code reuse, 166 dereferencing, 163, 661 improper error handling, 163 insecure components, 165–166 insecure functions, 168 insecure object reference, 163, 667 insufficient logging and monitoring, 166 race conditions, 164, 676 sensitive data exposure, 165 weak or default configurations, 167–168 zero-day vulnerability, 269 vulnerability mitigation. See remediation/mitigation

W WAF (web application firewall), 355–356, 686 war game exercises, 542–543 wash command, 85–86 watermarking, 521, 661 web application firewall (WAF), 355–356, 686 web application platforms, 260–262 click-jacking, 262, 657 CSRF (cross-site request forgery), 261–262, 660

maintenance hooks, 260 time-of-check/time-of-use attacks, 260, 684 web application scanners, 69–70 Arachni, 70–496, 654 Burp Suite, 69, 656 Nessus Professional, 71 Nikto, 70, 671 OpenVAS, 71–72 OWASP Zed Attack Proxy (ZAP), 69 Qualys, 496, 676 types of, 69 web vulnerability scanners, 69, 686 whaling, 370 white hats, 406 white teams, 543, 686 white-box testing, 274–275 whitelisting, 275, 381, 687 Wi-Fi hacking gear, 475 Wi-Fi Protected Access (WPA), 134 Windows computers DPAPI (Data Protection API), 131, 660 Group Policy, 45, 184, 381, 570 least privilege, principle of, 338 Measured Boot, 311 Process Explorer, 408, 675 Secure Boot, 310–311 SFC (System File Checker), 340 Task Manager, 407 Windows Server managed service accounts, 339

Windows Defender, 353 Windows PowerShell, 423 Winload (Windows Boot Loader), 310 wiping, remote, 257, 677 WIPO (World Intellectual Property Organization), 444 WIPS (wireless intrusion prevention system), 336, 475, 687 wireless assessment tools, 82–86 Aircrack-ng, 83, 654 oclHashcat, 86, 672 Reaver, 84–86, 676 wireless intrusion prevention system (WIPS), 336, 475, 687 wireless key loggers, 475, 687 Wireshark, 394, 488–490, 687 work product retention policy, 570, 687 work recovery time (WRT), 455, 687 workflow automation systems for, 113 orchestration of, 422–423, 687 scans and, 53 World Intellectual Property Organization (WIPO), 444 worms, 324, 687 WPA (Wi-Fi Protected Access), 134 WRT (work recovery time), 455, 687

X X.509 certificates, 243–244 XACML (Extensible Access Control Markup Language), 143– 144, 220, 663 XenServer, 203 XMAS scans, 78–79, 687

XML (Extensible Markup Language) attacks, 143–144, 663 XN (never execute) bit, 307 XRY, 494 XSS (cross-site scripting), 160–162 definition of, 660 DOM (document object model), 162, 662 example of, 160–161 persistent, 161, 673 reflective, 161, 677

Y-Z ZAP, 687 Zebra Technologies AirDefense, 475 Zed Attack Proxy (ZAP), 69 Zeek, 360 Zero Knowledge Proof, 236 zero-day vulnerability, 10–11, 269, 320, 687 zero-knowledge testing, 274–275 zombies, 325

To receive your 10% off Exam Voucher, register your product at www.pearsonitcertification.com/register and follow the instructions.

Appendix C

Memory Tables

CHAPTER 3

Table 3-2 Server-Based vs. Agent-Based Scanning

Agent based: Technology ____________; Characteristics ____________
Server based: Technology ____________; Characteristics ____________

CHAPTER 8

Table 8-2 Advantages and Disadvantages of SSL/TLS

Advantages: ____________
Disadvantages: ____________

Table 8-3 Examples of Logging Configuration Settings

Columns: Low-Impact System; Moderate-Impact System; High-Impact System

Log retention duration: Low-Impact 1–2 weeks; Moderate-Impact ____________; High-Impact ____________
Log rotation: Low-Impact ____________; Moderate-Impact every 6–24 hours, or every 2–5 MB; High-Impact ____________
Log data transfer frequency (to SIEM): Low-Impact ____________; Moderate-Impact ____________; High-Impact at least every 5 minutes
Local log data analysis: ____________
File integrity check for rotated logs? ____________
Encrypt rotated logs? ____________
Encrypt log data transfers to SIEM? ____________

Table 8-4 Symmetric Algorithm Strengths and Weaknesses

Strengths: ____________
Weaknesses: ____________

Table 8-5 Symmetric Algorithms Key Facts

Columns: Block or Stream Cipher?; Key Size; Number of Rounds; Block Size

3DES: ____________
AES: ____________
IDEA: ____________
Blowfish: ____________
Twofish: ____________
RC4: ____________
RC5: ____________
RC6: ____________

Table 8-6 Asymmetric Algorithm Strengths and Weaknesses

Strengths: ____________
Weaknesses: ____________

Table 8-7 Applying Cryptography

Data at rest: Crypto Type ____________; Examples ____________; Application ____________
Data in transit: Crypto Type ____________; Examples ____________; Application ____________

CHAPTER 9

Table 9-2 Comparing Black-Box, Gray-Box, and White-Box Testing

Black Box: Internal workings of the application are not known. Also called ____________. Performed by ____________. ____________ time-consuming.
Gray Box: ____________. Also called translucent testing, as the tester has partial knowledge. Performed by ____________. ____________ time-consuming.
White Box: ____________. Also called ____________. Performed by testers and developers. ____________ time-consuming.

Table 9-3 Authentication Protocols

Columns: Advantages; Disadvantages; Guidelines/Notes

PAP: ____________
CHAP: ____________
MS-CHAP v1: ____________
MS-CHAP v2: ____________
EAP-MD5 CHAP: ____________
EAP-TLS: ____________
EAP-TTLS: ____________

Table 9-4 RADIUS and TACACS+

Transport Protocol: RADIUS ____________; TACACS+ ____________
Confidentiality: RADIUS ____________; TACACS+ ____________
Authentication and Authorization: RADIUS ____________; TACACS+ ____________
Supported Layer 3 Protocols: RADIUS does not support any of the following: ____________; TACACS+ ____________
Devices: RADIUS does not support ____________; TACACS+ ____________
Traffic: RADIUS ____________; TACACS+ ____________

CHAPTER 11

Table 11-3 SFC Switches

____________: Sets the Windows File Protection cache size, in megabytes
____________: Purges the Windows File Protection cache and scans all protected system files immediately
____________: Reverts SFC to its default operation
____________: Scans a file that you specify and fixes problems if they are found
____________: Immediately scans all protected system files
____________: Scans all protected system files once
____________: Scans all protected system files every time the computer is rebooted
____________: Scans protected system files and does not make any repairs or changes
____________: Verifies the integrity of the file specified but does not make any repairs or changes
____________: Does a repair of an offline boot directory
____________: Does a repair of an offline Windows directory

CHAPTER 12

Table 12-2 Advantages and Disadvantages of NGFWs

Advantages: ____________
Disadvantages: ____________

Table 12-3 Pros and Cons of Firewall Types

Type ____________: Advantages: best performance. Disadvantages: cannot prevent IP spoofing, attacks that are specific to an application, attacks that depend on packet fragmentation, or attacks that take advantage of the TCP handshake.
Type ____________: Advantages: secure addresses from exposure; support a multiprotocol environment; allow for comprehensive logging. Disadvantages: slight impact on performance; may require a client on the computer (SOCKS proxy); no application layer security.
Type ____________: Advantages: understand the details of the communication process at Layer 7 for the application. Disadvantages: big impact on performance.
Type ____________: Advantages: inspect the packet at every layer of the OSI model; don't impact performance as do application layer proxies.

Table 12-4 RADIUS vs. TACACS+

Transport Protocol: RADIUS uses _______, which may result in faster response; TACACS+ uses _______, which offers more information for troubleshooting
Confidentiality: RADIUS encrypts ____________; TACACS+ encrypts ____________
Authentication and Authorization: RADIUS ____________; TACACS+ ____________
Supported Layer 3 Protocols: RADIUS ____________; TACACS+ ____________
Devices: RADIUS ____________ securing the available commands on routers and switches; TACACS+ ____________ securing the available commands on routers and switches
Traffic: RADIUS creates ____________ traffic; TACACS+ creates ____________ traffic

CHAPTER 15

Table 15-2 Control Objectives of PCI DSS

Build and Maintain a Secure Network and Systems: PCI DSS requirements ____________
Protect Cardholder Data: ____________
Maintain a Vulnerability Management Program: ____________
Implement Strong Access Control Measures: ____________
Regularly Monitor and Test Networks: ____________
Maintain an Information Security Policy: ____________

CHAPTER 21

Table 21-3 SABSA Framework Matrix

Columns (viewpoints): Assets (What); Motivation (Why); Process (How); People (Who); Location (Where); Time (When)
Rows (layers) include Business and Conceptual; fill in the remaining cells.

Cells provided as starting points: time dependencies; security domain model; business information model; security rules, practices, and procedures; application and user management and support.

Table 21-5 SOC Report Comparison Chart

SOC 1: What it reports on ____________; Who uses it ____________
SOC 2: What it reports on ____________; Who uses it ____________
SOC 3: What it reports on ____________; Who uses it ____________
SOC for Cybersecurity: What it reports on: an organization's efforts to prevent, monitor, and effectively handle any cybersecurity threats; Who uses it ____________
SOC Consulting & Readiness: What it reports on: the controls it currently has in place, while also preparing it for the actual execution of a SOC report; Who uses it: management and practitioners

Appendix D

Memory Tables Answer Key

CHAPTER 3

Table 3-2 Server-Based vs. Agent-Based Scanning

Agent based (pull technology):
Can get information from disconnected machines or machines in the DMZ
Ideal for remote locations that have limited bandwidth
Less dependent on network connectivity
Based on policies defined on the central console

Server based (push technology):
Good for networks with plentiful bandwidth
Dependent on network connectivity
Central authority does all the scanning and deployment

CHAPTER 8

Table 8-2 Advantages and Disadvantages of SSL/TLS

Advantages:
Data is encrypted.
SSL/TLS is supported on all browsers.
Users can easily identify its use (via https://).

Disadvantages:
Encryption and decryption require heavy resource usage.
Critical troubleshooting components (URL path, SQL queries, passed parameters) are encrypted.
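As a small illustration of the points in Table 8-2, the sketch below uses Python's standard-library ssl module (an illustrative choice, not something the table prescribes) to show that a default TLS client context already enforces the certificate and hostname checks that make the encryption trustworthy:

```python
import ssl

# A default client-side context enables the protections Table 8-2 credits to
# SSL/TLS: encryption plus certificate and hostname verification.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is enforced
print(ctx.check_hostname)                    # server hostname is checked
```

Wrapping a socket with this context (via `ctx.wrap_socket`) is what produces the encrypted channel the table describes.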

Table 8-3 Examples of Logging Configuration Settings

Log retention duration: Low-Impact 1–2 weeks; Moderate-Impact 1–3 months; High-Impact 3–12 months
Log rotation: Low-Impact optional (if performed, at least every week or every 25 MB); Moderate-Impact every 6–24 hours, or every 2–5 MB; High-Impact every 15–60 minutes, or every 0.5–1.0 MB
Log data transfer frequency (to SIEM): Low-Impact every 3–24 hours; Moderate-Impact every 15–60 minutes; High-Impact at least every 5 minutes
Local log data analysis: Low-Impact every 1–7 days; Moderate-Impact every 12–24 hours; High-Impact at least 6 times a day
File integrity check for rotated logs? Low-Impact optional; Moderate-Impact yes; High-Impact yes
Encrypt rotated logs? Low-Impact optional; Moderate-Impact optional; High-Impact yes
Encrypt log data transfers to SIEM? Low-Impact optional; Moderate-Impact yes; High-Impact yes
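The rotation thresholds in Table 8-3 map directly onto common logging APIs. A minimal sketch, assuming Python's standard logging module, of a size-based rotation policy in the moderate-impact range (the file path and backup count are illustrative choices, not values from the table):

```python
import logging
import logging.handlers
import os
import tempfile

# Rotate at 5 MB, the top of the moderate-impact 2-5 MB guideline in Table 8-3.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")
handler = logging.handlers.RotatingFileHandler(
    log_path,
    maxBytes=5 * 1024 * 1024,  # size threshold that triggers rotation
    backupCount=30,            # retained rotated files; tune to the retention row
)
logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("user login succeeded")
print(os.path.exists(log_path))  # the active log file now exists on disk
```

Integrity checking and encryption of the rotated files, as the table's last rows call for, would be handled outside the logging API itself.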

Table 8-4 Symmetric Algorithm Strengths and Weaknesses

Strengths:
Symmetric algorithms are 1000 to 10,000 times faster than asymmetric algorithms.
They are hard to break.
They are cheaper to implement than asymmetric algorithms.

Weaknesses:
The number of unique keys needed can cause key management issues.
Secure key distribution is critical.
Key compromise occurs if one party is compromised, thereby allowing impersonation.

Table 8-5 Symmetric Algorithms Key Facts

3DES: block cipher; key size 56, 112, or 168 bits; 48 rounds; block size 64 bits
AES: block cipher; key size 128, 192, or 256 bits; 10, 12, or 14 rounds (depending on key size); block size 128 bits
IDEA: block cipher; key size 128 bits; 8 rounds; block size 64 bits
Blowfish: block cipher; key size 32–448 bits; 16 rounds; block size 64 bits
Twofish: block cipher; key size 128, 192, or 256 bits; 16 rounds; block size 128 bits
RC4: stream cipher; key size 40 to 2048 bits; up to 256 rounds; block size N/A
RC5: block cipher; key size up to 2048 bits; up to 255 rounds; block size 32, 64, or 128 bits
RC6: block cipher; key size up to 2048 bits; up to 255 rounds; block size 32, 64, or 128 bits
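The block/stream distinction in Table 8-5 can be made concrete with a toy stream cipher: a keystream is XORed with the plaintext byte by byte, and applying the same keystream again decrypts. The sketch below is illustrative only; its hash-counter keystream is an assumption for demonstration and is not a real cipher such as RC4:

```python
import hashlib
import itertools

def keystream(key: bytes):
    """Toy keystream: hash the key with a counter (NOT a real cipher)."""
    for counter in itertools.count():
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def stream_xor(key: bytes, data: bytes) -> bytes:
    # A stream cipher works byte by byte: ciphertext = plaintext XOR keystream.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

msg = b"attack at dawn"
ct = stream_xor(b"secret", msg)
# XOR is its own inverse, so the same operation with the same key decrypts.
print(stream_xor(b"secret", ct) == msg)  # True
```

A block cipher such as AES instead transforms fixed-size blocks (128 bits for AES) through multiple rounds, which is why the table lists block sizes and round counts for those algorithms.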

Table 8-6 Asymmetric Algorithm Strengths and Weaknesses

Strengths:
Key distribution is easier and more manageable than with symmetric algorithms.
Key management is easier because the same public key is used by all parties.

Weaknesses:
Asymmetric algorithms are more expensive to implement than symmetric algorithms.
They are 1000 to 10,000 times slower than symmetric algorithms.

Table 8-7 Applying Cryptography

Data at rest: symmetric key cryptography. Examples: DES (retired), AES (revised), 3DES, Blowfish. Application: storing data on hard drives, thumb drives, etc.—any application where the key can easily be shared.
Data in transit: asymmetric key cryptography. Examples: RSA, Diffie-Hellman, ECC, ElGamal, DSA. Application: SSL/TLS key exchange, hashing.

CHAPTER 9

Table 9-2 Comparing Black-Box, Gray-Box, and White-Box Testing

Black Box:
Internal workings of the application are not known.
Also called closed-box, data-driven, or functional testing.
Performed by end users, testers, and developers.
Least time-consuming.

Gray Box:
Internal workings of the application are somewhat known.
Also called translucent testing, as the tester has partial knowledge.
Performed by end users, testers, and developers.
More time-consuming than black-box testing but less so than white-box testing.

White Box:
Internal workings of the application are fully known.
Also known as clear-box, structural, or code-based testing.
Performed by testers and developers.
Most exhaustive and time-consuming.

Table 9-3 Authentication Protocols

PAP: Advantages: simplicity. Disadvantages: password sent in cleartext. Guidelines/notes: do not use.
CHAP: Advantages: no passwords are exchanged; widely supported standard. Disadvantages: susceptible to dictionary and brute-force attacks. Guidelines/notes: ensure complex passwords.
MS-CHAP v1: Advantages: no passwords are exchanged; stronger password storage than CHAP. Disadvantages: susceptible to dictionary and brute-force attacks; supported only on Microsoft devices. Guidelines/notes: ensure complex passwords; if possible, use MS-CHAP v2 instead.
MS-CHAP v2: Advantages: no passwords are exchanged; stronger password storage than CHAP; mutual authentication. Disadvantages: susceptible to dictionary and brute-force attacks; supported only on Microsoft devices; not supported on some legacy Microsoft clients. Guidelines/notes: ensure complex passwords.
EAP-MD5 CHAP: Advantages: supports password-based authentication; widely supported standard. Disadvantages: susceptible to dictionary and brute-force attacks. Guidelines/notes: ensure complex passwords.
EAP-TLS: Advantages: the most secure form of EAP; uses certificates on the server and client. Disadvantages: requires a PKI; more complex to configure. Guidelines/notes: no known issues.
EAP-TTLS: Advantages: as secure as EAP-TLS; widely supported standard; only requires a certificate on the server; allows passwords on the client. Disadvantages: susceptible to dictionary and brute-force attacks; more complex to configure. Guidelines/notes: ensure complex passwords.

Table 9-4 RADIUS and TACACS+

Transport Protocol: RADIUS uses UDP, which may result in faster response; TACACS+ uses TCP, which offers more information for troubleshooting
Confidentiality: RADIUS encrypts only the password in the access-request packet; TACACS+ encrypts the entire body of the packet but leaves a standard TACACS+ header for troubleshooting
Authentication and Authorization: RADIUS combines authentication and authorization; TACACS+ separates authentication, authorization, and accounting processes
Supported Layer 3 Protocols: RADIUS does not support NetBIOS Frame Protocol Control protocol or X.25 PAD connections; TACACS+ supports all protocols
Devices: RADIUS does not support securing the available commands on routers and switches; TACACS+ supports securing the available commands on routers and switches
Traffic: RADIUS creates less traffic; TACACS+ creates more traffic

CHAPTER 11

Table 11-3 SFC Switches

/CACHESIZE=X: Sets the Windows File Protection cache size, in megabytes
/PURGECACHE: Purges the Windows File Protection cache and scans all protected system files immediately
/REVERT: Reverts SFC to its default operation
/SCANFILE (Windows 7 and Vista only): Scans a file that you specify and fixes problems if they are found
/SCANNOW: Immediately scans all protected system files
/SCANONCE: Scans all protected system files once
/SCANBOOT: Scans all protected system files every time the computer is rebooted
/VERIFYONLY: Scans protected system files and does not make any repairs or changes
/VERIFYFILE: Verifies the integrity of the file specified but does not make any repairs or changes
/OFFBOOTDIR: Does a repair of an offline boot directory
/OFFWINDIR: Does a repair of an offline Windows directory

CHAPTER 12

Table 12-2 Advantages and Disadvantages of NGFWs

Advantages: provides enhanced security; provides integration between security services; may save costs on appliances
Disadvantages: is more involved to manage than a standard firewall; leads to reliance on a single vendor; performance can be impacted

Table 12-3 Pros and Cons of Firewall Types

Packet-filtering firewalls
Advantages: Best performance
Disadvantages: Cannot prevent IP spoofing, attacks that are specific to an application, attacks that depend on packet fragmentation, or attacks that take advantage of the TCP handshake

Circuit-level proxies
Advantages: Secure addresses from exposure; support a multiprotocol environment; allow for comprehensive logging
Disadvantages: Slight impact on performance; may require a client on the computer (SOCKS proxy); no application layer security

Application-level proxies
Advantages: Understand the details of the communication process at Layer 7 for the application
Disadvantages: Big impact on performance

Kernel proxy firewalls
Advantages: Inspect the packet at every layer of the OSI model; don't impact performance as application layer proxies do
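The packet-filtering row follows directly from how such firewalls work: rules match only on header fields, so anything that requires payload inspection or connection context slips through. A minimal sketch of that matching logic (the rule format, field names, and port choices are invented for illustration, not any vendor's ACL syntax):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Only the header fields a packet filter can see -- no payload."""
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# First-match rule list: (action, protocol, destination port)
RULES = [
    ("allow", "tcp", 443),  # HTTPS
    ("allow", "tcp", 22),   # SSH
    ("deny",  "tcp", 23),   # Telnet explicitly blocked
]

def filter_packet(pkt, default="deny"):
    """Return the action of the first matching rule, else the default policy."""
    for action, proto, port in RULES:
        if pkt.protocol == proto and pkt.dst_port == port:
            return action
    return default

print(filter_packet(Packet("192.0.2.10", "203.0.113.5", 443, "tcp")))  # allow
```

Because the decision never looks past the header, an application-specific exploit carried inside an allowed port-443 connection passes untouched, which is exactly the disadvantage the table lists.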

Table 12-4 RADIUS vs. TACACS+

Transport Protocol
RADIUS: Uses UDP, which may result in faster response
TACACS+: Uses TCP, which offers more information for troubleshooting

Confidentiality
RADIUS: Encrypts only the password in the access-request packet
TACACS+: Encrypts the entire body of the packet but leaves a standard TACACS+ header for troubleshooting

Authentication and Authorization
RADIUS: Combines authentication and authorization
TACACS+: Separates authentication, authorization, and accounting processes

Supported Layer 3 Protocols
RADIUS: Does not support Apple Remote Access protocol, NetBIOS Frame Protocol Control protocol, or X.25 PAD connections
TACACS+: Supports all protocols

Devices
RADIUS: Does not support securing the available commands on routers and switches
TACACS+: Supports securing the available commands on routers and switches

Traffic
RADIUS: Creates less traffic
TACACS+: Creates more traffic
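The transport difference in Table 12-4 is easy to see in code. The sketch below contrasts a single-datagram UDP exchange with a connection-oriented TCP one; the payloads, message names, and ephemeral ports are purely illustrative (real RADIUS uses UDP/1812 and TACACS+ uses TCP/49, with their own binary packet formats):

```python
import socket
import threading

def demo():
    # --- UDP exchange (RADIUS-style): no handshake, one datagram each way ---
    usrv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    usrv.bind(("127.0.0.1", 0))            # ephemeral port for the demo
    uport = usrv.getsockname()[1]

    def udp_server():
        data, addr = usrv.recvfrom(1024)   # a single request datagram
        usrv.sendto(b"Access-Accept", addr)

    t = threading.Thread(target=udp_server)
    t.start()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
        c.sendto(b"Access-Request", ("127.0.0.1", uport))
        udp_reply, _ = c.recvfrom(1024)
    t.join()
    usrv.close()

    # --- TCP exchange (TACACS+-style): handshake first, then a stream ---
    tsrv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tsrv.bind(("127.0.0.1", 0))
    tsrv.listen(1)
    tport = tsrv.getsockname()[1]

    def tcp_server():
        conn, _ = tsrv.accept()            # connection setup gives the server
        with conn:                         # delivery feedback UDP never provides
            conn.recv(1024)
            conn.sendall(b"PASS")

    t = threading.Thread(target=tcp_server)
    t.start()
    with socket.create_connection(("127.0.0.1", tport)) as c:
        c.sendall(b"authen START")
        tcp_reply = c.recv(1024)
    t.join()
    tsrv.close()

    return udp_reply, tcp_reply

print(demo())
```

The "less traffic" and "more troubleshooting information" rows fall out of the same contrast: UDP carries only the two datagrams, while TCP adds handshake, acknowledgment, and teardown segments that a packet capture can use to pinpoint where a failed exchange broke down.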

CHAPTER 15

Table 15-2 Control Objectives of PCI DSS

Control Objective: Build and Maintain a Secure Network and Systems
PCI DSS Requirements:
1. Install and maintain a firewall configuration to protect cardholder data
2. Do not use vendor-supplied defaults for system passwords and other security parameters

Control Objective: Protect Cardholder Data
PCI DSS Requirements:
3. Protect stored cardholder data
4. Encrypt transmission of cardholder data across open, public networks

Control Objective: Maintain a Vulnerability Management Program
PCI DSS Requirements:
5. Protect all systems against malware and regularly update antivirus software or programs
6. Develop and maintain secure systems and applications

Control Objective: Implement Strong Access Control Measures
PCI DSS Requirements:
7. Restrict access to cardholder data by business need to know
8. Identify and authenticate access to system components
9. Restrict physical access to cardholder data

Control Objective: Regularly Monitor and Test Networks
PCI DSS Requirements:
10. Track and monitor all access to network resources and cardholder data
11. Regularly test security systems and processes

Control Objective: Maintain an Information Security Policy
PCI DSS Requirements:
12. Maintain a policy that addresses information security for all personnel
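Compliance teams often encode this objective-to-requirement mapping as data so that scan findings can be rolled up by control objective. A hypothetical helper along those lines (the dictionary simply restates Table 15-2; PCI_DSS and objective_for are invented names, not part of any PCI SSC tooling):

```python
# Map each PCI DSS control objective to its requirement numbers (Table 15-2).
PCI_DSS = {
    "Build and Maintain a Secure Network and Systems": [1, 2],
    "Protect Cardholder Data": [3, 4],
    "Maintain a Vulnerability Management Program": [5, 6],
    "Implement Strong Access Control Measures": [7, 8, 9],
    "Regularly Monitor and Test Networks": [10, 11],
    "Maintain an Information Security Policy": [12],
}

def objective_for(requirement):
    """Return the control objective that owns a given requirement number."""
    for objective, reqs in PCI_DSS.items():
        if requirement in reqs:
            return objective
    raise ValueError(f"Unknown PCI DSS requirement: {requirement}")

print(objective_for(9))  # Implement Strong Access Control Measures
```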

CHAPTER 21 Table 21-3 SABSA Framework Matrix

Viewpoint: Business (Contextual layer)
Assets (What): Business
Motivation (Why): Risk model
Process (How): Process model
People (Who): Organizations and relationships
Location (Where): Geography
Time (When): Time dependencies

Viewpoint: Architect (Conceptual layer)
Assets (What): Business attributes profile
Motivation (Why): Control objectives
Process (How): Security strategies and architectural layering
People (Who): Security entity model and trust framework
Location (Where): Security domain model
Time (When): Security-related lifetimes and deadlines

Viewpoint: Designer (Logical layer)
Assets (What): Business information model
Motivation (Why): Security policies
Process (How): Security services
People (Who): Entity schema and privilege profiles
Location (Where): Security domain definitions and associations
Time (When): Security processing cycle

Viewpoint: Builder (Physical layer)
Assets (What): Business data model
Motivation (Why): Security rules, practices, and procedures
Process (How): Security mechanisms
People (Who): Users, applications, and user interfaces
Location (Where): Platform and network infrastructure
Time (When): Control structure execution

Viewpoint: Tradesman (Component layer)
Assets (What): Detailed data structures
Motivation (Why): Security standards
Process (How): Security tools and products
People (Who): Identities, functions, actions, and ACLs
Location (Where): Processes, nodes, addresses, and protocols
Time (When): Security step timing and sequencing

Viewpoint: Facilities manager (Operational layer)
Assets (What): Operational continuity assurance
Motivation (Why): Operation risk management
Process (How): Security service management and support
People (Who): Application and user management and support
Location (Where): Site, network, and platform security
Time (When): Security operations schedule

Table 21-5 SOC Report Comparison Chart

Report Type

What It Reports On

Who Uses It

SOC 1

Internal controls over financial reporting

User auditors and users’ controller office

SOC 2

Security, availability, processing integrity, confidentiality, or privacy controls

Management, regulators, and others; shared under nondisclosure agreement (NDA)

SOC 3

Security, availability, processing integrity, confidentiality, or privacy controls

Publicly available to anyone

SOC for Cybersecurity

An organization’s efforts to prevent, monitor, and effectively handle any cybersecurity threats

Management and practitioners

SOC Consulting & Readiness

The controls it currently has in place, while also preparing it for the actual execution of a SOC report

Management and practitioners

Appendix E

Study Planner Practice Test

Element

Task

Goal Date

Reading

Task

First Date Completed

Second Date Completed (Optional)

Notes

Introduction Read Introduction

1. The Importance of Threat Data and Intelligence

Read Foundation Topics

1. The Importance of Threat Data and Intelligence Review Key Topics

1. The Importance of Threat Data and Intelligence Define Key Terms

1. The Importance of Threat Data and Intelligence

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 1 in practice test software

2. Utilizing Threat Intelligence to Support Organizational Security

Read Foundation Topics

2. Utilizing Threat Intelligence to Support Organizational Security

Review Key Topics

2. Utilizing Threat Intelligence to Support Organizational Security

Define Key Terms

2. Utilizing Threat Intelligence to Support Organizational Security

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 2 in practice test software

3. Vulnerability Management Activities Read Foundation Topics

3. Vulnerability Management Activities Review Key Topics

3. Vulnerability Management Activities Complete Memory Tables

3. Vulnerability Management Activities Define Key Terms

3. Vulnerability Management Activities

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 3 in practice test software

4. Analyzing Assessment Output Read Foundation Topics

4. Analyzing Assessment Output Review Key Topics

4. Analyzing Assessment Output Define Key Terms

4. Analyzing Assessment Output Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 4 in practice test software

5. Threats and Vulnerabilities Associated with Specialized Technology

Read Foundation Topics

5. Threats and Vulnerabilities Associated with Specialized Technology

Review Key Topics

5. Threats and Vulnerabilities Associated with Specialized Technology

Define Key Terms

5. Threats and Vulnerabilities Associated with Specialized Technology Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 5 in practice test software

6. Threats and Vulnerabilities Associated with Operating in the Cloud

Read Foundation Topics

6. Threats and Vulnerabilities Associated with Operating in the Cloud

Review Key Topics

6. Threats and Vulnerabilities Associated with Operating in the Cloud

Define Key Terms

6. Threats and Vulnerabilities Associated with Operating in the Cloud Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 6 in practice test software

7. Implementing Controls to Mitigate Attacks and Software Vulnerabilities Read Foundation Topics

7. Implementing Controls to Mitigate Attacks and Software Vulnerabilities

Review Key Topics

7. Implementing Controls to Mitigate Attacks and Software Vulnerabilities

Complete Memory Tables

7. Implementing Controls to Mitigate Attacks and Software Vulnerabilities

Define Key Terms

7. Implementing Controls to Mitigate Attacks and Software Vulnerabilities

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 7 in practice test software

8. Security Solutions for Infrastructure Management Read Foundation Topics

8. Security Solutions for Infrastructure Management Review Key Topics

8. Security Solutions for Infrastructure Management Complete Memory Tables

8. Security Solutions for Infrastructure Management Define Key Terms

8. Security Solutions for Infrastructure Management

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 8 in practice test software

9. Software Assurance Best Practices Read Foundation Topics

9. Software Assurance Best Practices Review Key Topics

9. Software Assurance Best Practices Complete Memory Tables

9. Software Assurance Best Practices Define Key Terms

9. Software Assurance Best Practices Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 9 in practice test software

10. Hardware Assurance Best Practices Read Foundation Topics

10. Hardware Assurance Best Practices Review Key Topics

10. Hardware Assurance Best Practices Define Key Terms

10. Hardware Assurance Best Practices

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 10 in practice test software

11. Analyzing Data as Part of Security Monitoring Activities Read Foundation Topics

11. Analyzing Data as Part of Security Monitoring Activities

Review Key Topics

11. Analyzing Data as Part of Security Monitoring Activities Complete Memory Tables

11. Analyzing Data as Part of Security Monitoring Activities Define Key Terms

11. Analyzing Data as Part of Security Monitoring Activities Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 11 in practice test software

12. Implementing Configuration Changes to Existing Controls to Improve Security Read Foundation Topics

12. Implementing Configuration Changes to Existing Controls to Improve Security

Review Key Topics

12. Implementing Configuration Changes to Existing Controls to Improve Security Complete Memory Tables

12. Implementing Configuration Changes to Existing Controls to Improve Security Define Key Terms

12. Implementing Configuration Changes to Existing Controls to Improve Security Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 12 in practice test software

13. The Importance of Proactive Threat Hunting Read Foundation Topics

13. The Importance of Proactive Threat Hunting Review Key Topics

13. The Importance of Proactive Threat Hunting Define Key Terms

13. The Importance of Proactive Threat Hunting Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 13 in practice test software

14. Automation Concepts and Technologies Read Foundation Topics

14. Automation Concepts and Technologies Review Key Topics

14. Automation Concepts and Technologies Define Key Terms

14. Automation Concepts and Technologies

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 14 in practice test software

15. The Incident Response Process Read Foundation Topics

15. The Incident Response Process Review Key Topics

15. The Incident Response Process Complete Memory Tables

15. The Incident Response Process Define Key Terms

15. The Incident Response Process Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 15 in practice test software

16. Applying the Appropriate Incident Response Procedure Read Foundation Topics

16. Applying the Appropriate Incident Response Procedure

Review Key Topics

16. Applying the Appropriate Incident Response Procedure

Define Key Terms

16. Applying the Appropriate Incident Response Procedure

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 16 in practice test software

17. Analyzing Potential Indicators of Compromise Read Foundation Topics

17. Analyzing Potential Indicators of Compromise Review Key Topics

17. Analyzing Potential Indicators of Compromise Define Key Terms

17. Analyzing Potential Indicators of Compromise

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 17 in practice test software

18. Utilizing Basic Digital Forensics Techniques Read Foundation Topics

18. Utilizing Basic Digital Forensics Techniques Review Key Topics

18. Utilizing Basic Digital Forensics Techniques Define Key Terms

18. Utilizing Basic Digital Forensics Techniques

Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 18 in practice test software

19. The Importance of Data Privacy and Protection Read Foundation Topics

19. The Importance of Data Privacy and Protection Review Key Topics

19. The Importance of Data Privacy and Protection Define Key Terms

19. The Importance of Data Privacy and Protection Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 19 in practice test software

20. Applying Security Concepts in Support of Organizational Risk Mitigation Read Foundation Topics

20. Applying Security Concepts in Support of Organizational Risk Mitigation

Review Key Topics

20. Applying Security Concepts in Support of Organizational Risk Mitigation Define Key Terms

20. Applying Security Concepts in Support of Organizational Risk Mitigation Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 20 in practice test software

21. The Importance of Frameworks, Policies, Procedures, and Controls Read Foundation Topics

21. The Importance of Frameworks, Policies, Procedures, and Controls Review Key Topics

21. The Importance of Frameworks, Policies, Procedures, and Controls Complete Memory Tables

21. The Importance of Frameworks, Policies, Procedures, and Controls Define Key Terms

21. The Importance of Frameworks, Policies, Procedures, and Controls Complete Review Questions section

Practice Test Take practice test in study mode using Exam Bank 1 questions for Chapter 21 in practice test software

22. Final Preparation

22. Final Preparation

Take practice test in study mode for all book questions in practice test software

22. Final Preparation Review all Key Topics in all chapters

22. Final Preparation Take practice test in practice exam mode using Exam Bank #1 questions for all chapters

22. Final Preparation Take practice test in practice exam mode using Exam Bank #2 questions for all chapters

Where are the companion content files? Register this digital version of CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide to access important downloads and unlock the companion files. Follow these steps: 1. Go to pearsonITcertification.com/account and log in or create a new account. 2. Enter the ISBN: 9780136747161 (NOTE: Please enter the print book ISBN provided to register the eBook you purchased.) 3. Answer the challenge question as proof of purchase. 4. Click on the “Access Bonus Content” link in the Registered Products section of your account page to be taken to the page where your downloadable content is available.

This eBook version of the print title does not contain the practice test software that accompanies the print book. You May Also Like—Premium Edition eBook and Practice Test. To learn about the Premium Edition

eBook and Practice Test series, visit pearsonITcertification.com/practicetest

The Professional and Personal Technology Brands of Pearson

CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide ISBN: 978-0-13-674716-1 See inside ▸▸▸ for your Pearson Test Prep activation code and special offers

CompTIA Cybersecurity Analyst (CySA+) CS0-002 Cert Guide Premium Edition eBook and Practice Test To enhance your preparation, Pearson IT Certification also sells a digital Premium Edition of this book. The Premium Edition provides you with three eBook files (PDF, EPUB, and MOBI/Kindle) as well as an enhanced edition of the Pearson Test Prep practice test software. The Premium Edition includes two additional practice exams with links for every question mapped to the PDF eBook.

Special Offer–Save 80% This single-use coupon code will allow you to purchase a copy of the Premium Edition at an 80% discount. Simply go to the URL below, add the Premium Edition to your cart, and apply the coupon code at checkout. www.pearsonITcertification.com/title/9780136747123

Coupon Code:

DO NOT DISCARD THIS NUMBER You will need this activation code to activate your practice test in the Pearson Test Prep practice test software. To access the online version, go to www.PearsonTestPrep.com. Select Pearson IT Certification as your product group. Enter your email/password for your account. If you don’t have an account on PearsonITCertification.com or CiscoPress.com, you will need to establish one by going to PearsonITCertification.com/join. In the My Products tab, click the Activate New Product button. Enter the access code printed on this insert card to activate your product. The product will now be listed in your My Products page. If you wish to use the Windows desktop offline version of the application, simply register your book at www.pearsonITcertification.com/register, select the Registered Products tab on your account page, click the Access Bonus Content link, and download and install the software from the companion website. This activation code can be used to register your exam in both the online and the offline versions.

Activation Code:

Special Offers Save 80% on Premium Edition eBook and Practice Test The CompTIA Cybersecurity Analyst (CySA+) CS0-002 Premium Edition eBook and Practice Test provides three eBook files (PDF, EPUB, and MOBI/Kindle) to read on your preferred device and an enhanced edition of the Pearson Test Prep practice test software. You also receive two additional practice exams with links for every question mapped to the PDF eBook. See the card insert in the back of the book for your Pearson Test Prep activation code and special offers.

CompTIA®

Cybersecurity Analyst (CySA+) CS0-002 Cert Guide Companion Website Access interactive study tools on this book’s companion website, including practice test software, memory tables, review exercises, the Key Term flash card application, the study planner, and more! To access the companion website, simply follow these steps: 1. Go to www.pearsonITcertification.com/register. 2. Enter the print book ISBN: 9780136747161. 3. Answer the security question to validate your purchase. 4. Go to your account page. 5. Click on the Registered Products tab. 6. Under the book listing, click on the Access Bonus Content link.

If you have any issues accessing the companion website, you can contact our support team by going to http://pearsonitp.echelp.org.

Code Snippets Many titles include programming code or configuration examples. To optimize the presentation of these elements, view the eBook in single-column, landscape mode and adjust the font size to the smallest setting. In addition to presenting code and configurations in the reflowable text format, we have included images of the code that mimic the presentation found in the print book; therefore, where the reflowable format may compromise the presentation of the code listing, you will see a “Click here to view code image” link. Click the link to view the print-fidelity code image. To return to the previous page viewed, click the Back button on your device or app.