CompTIA Cloud+ Exam Prep Guide Exam CV0-003: Vendor Neutral Cloud Technology Certification Guide

Are you ready to master the CompTIA Cloud+ exam and embark on a rewarding journey in the world of cloud technologies?


CompTIA Cloud+ (CV0-003)
Secbay Press

Copyright © 2020-2024 Secbay Press. All rights reserved. No part of this certification student guide may be copied, stored in a retrieval system, or transmitted in any manner or through any method without the prior written permission of Secbay Press, except in the case of brief references or quotations embedded in critical labs, articles, or reviews.

Secbay Press has made every effort in the preparation of this courseware to ensure the accuracy of the information presented. However, the information contained in this student guide is sold without warranty, either express or implied. Neither the authors/SMEs nor Secbay Press and its training partners, resellers, and distributors can be held liable for any damage caused or alleged to be caused, directly or indirectly, by this student guide.

Copyright Disclaimer: The information and content provided in this book, including but not limited to text, graphics, images, and illustrations, are for informational purposes only and should not be considered legal, financial, or professional advice. The author and publisher of this book are not responsible for any errors or omissions, or for any actions taken based on the information provided in this book. Any references to or mentions of CompTIA in this book are for informational purposes only and should not be construed as an endorsement or official representation of CompTIA. The author and publisher are not affiliated with, endorsed by, or otherwise connected to CompTIA or any other agency. By accessing or using this book, you acknowledge and agree to abide by the copyright laws and terms of use outlined in this disclaimer. Unauthorized use, reproduction, or distribution of this book may result in legal action.

Secbay Press has endeavored to provide accurate trademark information about all the companies and products mentioned in this book. However, Secbay Press cannot guarantee the accuracy of such information. Secbay Press has made its best effort to give attribution, where applicable, to any companies referenced in this book.

Disclaimer/Note: The information within Secbay Press cybersecurity and related books is intended to be used only in an ethical manner to protect your digital assets. Do not use any information from this book if you do not have legal authorization from the owner of the software, tools, or equipment. If you perform illegal actions, you are likely to be arrested and prosecuted to the full extent of the law. Secbay Press does not take any responsibility if you misuse any of the information contained within this book. The information herein must be used only within authorized cloud environments, with proper written authorization from management and your senior authorities within your organization, in accordance with your protocols and change management procedures.

Published by: Secbay Press, P.O. Box 5074, Oak Brook, IL 60523 USA
www.secbay.com | [email protected]


Contributions to this book: This book would not have been possible without the significant contributions of the following people, who have been instrumental in writing, editing, updating, and proofreading the content. Thank you to the CompTIA Cloud+ contributors:
⬥ Parm Soni
⬥ Kris Hefner
⬥ Manoj
⬥ Salman
⬥ Rhea
⬥ Sonia
⬥ Hamsaraj
and the rest of the Secbay team.

Dedication: USAF

Dedicated to the fearless guardians of our digital skies – the United States Air Force. To the veterans who have valiantly served and the current warriors safeguarding our clouds from threats, your dedication and resilience inspire us all. This book stands as a tribute to your unwavering commitment to protecting our nation's security in the vast expanse of the cyber realm. Saluting the past, honoring the present, and looking to a secure digital future together.
Parm Soni
Published by Secbay Press


About Parm Soni – Publisher, Founder & SME
Mr. Soni is a distinguished executive and Subject Matter Expert (SME) in the certification and knowledge transfer industry with over 30 years of experience. As a founder and co-founder of several successful knowledge transfer companies, Mr. Soni and his team have delivered certification training to over 10,000 professionals worldwide, encompassing major leading vendors such as CompTIA, PMI, EC-Council, Microsoft, MicroFocus/Novell, and more. Mr. Soni is the pioneer behind one of the earliest online assessment engines and has been actively engaged in the biometrics field since 1995. He is also the visionary founder of the world's first vendor-neutral biometrics certification. He holds a Bachelor's in Electrical and Computer Engineering from IIT Chicago and an MSCS in Telecommunications from DePaul University (1993). He has served a multitude of Fortune 500 companies and federal organizations, delivering customized training programs and sharing his wealth of knowledge. Mr. Soni stands as a revered figure in the certification and knowledge transfer industry, offering unparalleled expertise and dedication to the advancement of professionals worldwide.
Contact: [email protected]


CertTips from CertAdvisor
By Parm Soni
Candidates pursuing technology certifications must be disciplined and understand what it takes to learn these technologies and pass these tough exams. Taking shortcuts by practicing only exam dumps is not the way to go; it is a career disaster waiting to happen, because you will lack confidence even if you secure a job. Here is my step-by-step advice for preparing for any certification exam:

CertAdvisor: Gateway to Certification
The technology certification market is a dynamic ecosystem, featuring a multitude of respected vendors and industries, including CompTIA, EC-Council, Biocertification, open source (Linux, Python, Postgresqlcert, MySQL, Java, etc.), AI, AWS, Microsoft, ISC(2), ISACA, and many more. It has evolved into a pivotal force in today's job market, with certifications from these industry leaders serving as credible endorsements of individuals' expertise. These certifications validate skills and knowledge in specific technologies or domains, making them indispensable in a fast-paced tech landscape where innovations unfold rapidly. Employers have come to rely on certifications as reliable indicators of a candidate's capabilities, simplifying the identification of qualified professionals. Consequently, technology certifications have become a currency of immense value in the job market, equipping individuals with the means to secure superior career opportunities, advance professionally, and contribute effectively to the ever-evolving tech industry.

Step 1: Understanding Prerequisites

Before embarking on your certification journey, it is crucial to thoroughly comprehend the prerequisites. For instance, if you're pursuing the CompTIA Cloud+ certification, ensure that you've either completed the CompTIA Network+ certification or possess equivalent experience, as stipulated by CompTIA's prerequisites. In the absence of these qualifications, consider delving into the Cloud+ curriculum. Should budget constraints be a concern, explore on-demand training options or immerse yourself in Cloud+ literature to establish a robust foundation in networking concepts. Whether you opt for instructor-led training, on-demand courses, or self-study, grasping prerequisites stands as the initial stepping stone to attaining advanced certifications.

Step 2: Time Management - Craft a Study Schedule

Effective time management is paramount. Create a well-structured study schedule, emphasizing the allocation of dedicated time each day or week for certification preparation. Consistency is the linchpin of success.

Step 3: Active Learning - Engage in Effective Techniques

Engage in active learning techniques to enhance understanding and retention. Develop or utilize flashcards, summarize daily notes, and, when possible, take on the role of an educator by elucidating concepts to fellow certification seekers. These strategies reinforce comprehension and retention, fostering a robust grasp of the material.
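Purely as an illustration of the flashcard technique mentioned above, the following is a minimal Python sketch of a self-quiz loop. The terms and definitions shown are a few made-up examples, not an official list from CompTIA or from this guide; substitute your own cards as you study.

import random

# Hypothetical flashcards: cloud terms paired with short definitions (examples only).
flashcards = {
    "Elasticity": "The ability to automatically add or remove resources as demand changes.",
    "RPO": "Recovery point objective: the maximum acceptable amount of data loss, measured in time.",
    "CASB": "Cloud access security broker: a policy enforcement point between users and cloud services.",
}

terms = list(flashcards)
random.shuffle(terms)  # present the cards in a different order each run

for term in terms:
    input(f"Define: {term}  (press Enter to reveal)")
    print(flashcards[term], "\n")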


Step 4: Domain Mastery Before Practice

Refrain from attempting practice or mock exams until you've achieved mastery in each domain and are confident in your understanding. Create a checklist of topics and subtopics drawn from the certification program's exhaustive agenda provided by the vendor. Focus on practice exams for a specific domain only after achieving a minimum 80 percent success rate. Review questions answered incorrectly, using them as opportunities for reverse learning and reinforcing your mastery of each domain and topic in a modular fashion.
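To make the 80 percent rule concrete, here is a minimal, purely illustrative Python sketch of a per-domain score tracker. The domain names follow the exam outline, but the scores and the threshold logic are hypothetical examples of one way to decide when a domain is ready for full mock exams; this is not a tool provided by CompTIA or by this guide.

# Hypothetical practice scores (percent) recorded per domain; most recent score last.
practice_scores = {
    "1.0 Cloud Architecture and Design": [72, 78, 85],
    "2.0 Security": [60, 68],
    "3.0 Deployment": [88, 91],
    "4.0 Operations and Support": [75],
}

READY_THRESHOLD = 80  # per Step 4: aim for at least 80 percent before moving on

for domain, scores in practice_scores.items():
    latest = scores[-1]  # most recent practice result for this domain
    status = "ready for mock exams" if latest >= READY_THRESHOLD else "needs more review"
    print(f"{domain}: latest {latest}% -> {status}")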

Step 5: Community and Resources - Leverage Support Networks

I strongly encourage you to engage with online forums, study groups, or social media communities dedicated to your specific certification. These platforms provide invaluable spaces for discussing challenging topics, sharing educational resources, and gaining insights from the experiences of fellow certification aspirants.

Step 6: Practical Application - Embrace Hands-On Learning

For technical certifications, especially in areas like cybersecurity, cloud technology, or databases, practical experience is essential. I highly recommend utilizing virtual labs and hands-on exercises to apply theoretical knowledge. Practical application not only deepens understanding but also builds essential skills and self-confidence.

Step 7: Test-Taking Strategies - Comprehensive Preparation and Mock Exams

Once you've attained mastery across all domains, intensify your preparation by taking additional practice exams. When you feel ready for the actual test, consider investing in online mock exams from reputable sources such as examIT.com. These simulations include a full money-back guarantee if you pass their online mock exam on your initial attempt but subsequently fail the actual test; their remarkable success rate attests to their quality. If your scores persistently fall below 80 percent, revisit the areas where you feel less confident. Scores below 70 percent warrant serious consideration of instructor-led training; opt for virtual live sessions, as they offer a cost-effective alternative to traditional classroom training, which can be particularly beneficial for those pursuing self-study due to budget constraints.

These simulated mock exams not only prepare you comprehensively but also equip you with essential test-taking strategies: managing time during the actual exam, effectively eliminating incorrect answers, and flagging questions for review. These strategies significantly enhance your performance during the certification exam. Most vendors also offer similar services, albeit at a slightly higher cost, which you can consider as well.

Step 8: Refinement Through Additional Resources

After successfully passing mock exams, channel your efforts into refining your understanding of areas that continue to pose challenges. Utilize on-demand training, flashcards, or freely available YouTube videos to reinforce your knowledge. By meticulously adhering to these steps, you enhance your chances of achieving certification and, more importantly, cultivate the confidence essential for roles aligned with your certification. Always remember to shun shortcuts such as exam dumps.


USING THIS BOOK EFFECTIVELY

Welcome to the world of cloud technologies! This guide is here to help you get the most out of this textbook, and it's vital to understand how to use this valuable resource to boost your learning journey. This book is essential for students studying on their own for this challenging certification or taking on-demand training. While the best choice is usually instructor-led classroom or live virtual training, not everyone has access to these options. This book is a fantastic resource for anyone looking to gain cloud and security knowledge and prepare for the CompTIA Cloud+ exam. However, keep in mind that achieving success in the exam will require additional effort beyond what's covered in this book: you'll need to become familiar with the tools commonly used by cloud and security professionals.

Getting Started:

Orientation: Begin by thoroughly reading the introduction and table of contents. Get to know how the textbook is structured, its chapters, and its objectives.

Study Environment: Create an ideal study space, free from distractions, with good lighting. A reliable internet connection is all you need to start. If you don't have reliable internet access, you may be able to download the book if permitted by the provider.

Important Note: The links in this book may change over time and may become inaccessible. Secbay Press is not responsible for broken links. The information is accurate as of the time of writing. Keep in mind that links change for various reasons, including organizational changes, company acquisitions, network issues, and rebranding.

Chapter Selection: Depending on your current expertise level and course requirements, choose the chapters or sections to study. Even if some content is familiar, don't skip any chapters if you want to succeed in the certification exam.

Active Reading: When reading, engage with the content actively. Take notes, highlight key points, and jot down questions or areas of confusion.

Testbed Environment and Simulated Scenarios:

Hands-On Practice: Many technical textbooks include practical exercises and scenarios. Use these to gain hands-on experience in a controlled setting. You may come across terms like "testbed environment" or "simulated scenarios"; these are meant to provide a safe space for applying your knowledge without real-world risks.

Virtual Labs: If your textbook is part of a training package, you may have access to virtual labs. Make the most of these resources; they simulate real-world situations, allowing you to test your skills in a risk-free environment. It's highly recommended to have a Linux terminal available to practice commands as you learn. Even better, consider installing a virtual machine operating system such as Kali Linux or Ubuntu Linux in software like VMware or VirtualBox. This provides an immersive experience for practical, hands-on learning. Also, check out www.examit.com, as you may be able to purchase related exam resources at a very reasonable price.
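If you choose the VirtualBox route, the following is a minimal sketch of creating a practice VM from the command line, wrapped in Python. It assumes VirtualBox and its VBoxManage tool are installed and on your PATH, and that you have already downloaded an Ubuntu installer ISO; the VM name, memory size, disk size, and ISO filename are arbitrary illustrative values, not settings prescribed by this guide.

import subprocess

# Illustrative values only; adjust to your own hardware and ISO location.
VM_NAME = "cloudplus-lab"                         # hypothetical VM name
ISO_PATH = "ubuntu-22.04-live-server-amd64.iso"   # assumed to exist locally
DISK_PATH = f"{VM_NAME}.vdi"

def run(args):
    """Run a VBoxManage command and stop immediately if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

# Create and register the VM, then give it memory and CPUs.
run(["createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register"])
run(["modifyvm", VM_NAME, "--memory", "4096", "--cpus", "2"])

# Create a 25 GB virtual disk and attach it, along with the installer ISO.
run(["createmedium", "disk", "--filename", DISK_PATH, "--size", "25600"])
run(["storagectl", VM_NAME, "--name", "SATA", "--add", "sata"])
run(["storageattach", VM_NAME, "--storagectl", "SATA", "--port", "0",
     "--device", "0", "--type", "hdd", "--medium", DISK_PATH])
run(["storagectl", VM_NAME, "--name", "IDE", "--add", "ide"])
run(["storageattach", VM_NAME, "--storagectl", "IDE", "--port", "0",
     "--device", "0", "--type", "dvddrive", "--medium", ISO_PATH])

# Boot the VM and walk through the Ubuntu installer interactively.
run(["startvm", VM_NAME])

The same setup can be done with a few clicks in the VirtualBox graphical interface; the point is simply to have a disposable Linux environment for practicing commands as you work through this guide.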

Review and Assessment: In your journey toward certification success, consider these aspects of review and assessment:
• Regular Review: Make it a habit to revisit previously covered chapters periodically. This practice reinforces your understanding and ensures long-term retention of the knowledge you've gained.
• Self-Assessment: Use any available practice exams and assessments to gauge your progress. This allows you to pinpoint areas that may require additional focus and improvement.

Conclusion: By adhering to the guidelines above and leveraging supplementary resources, you can optimize your use of this study guide. It's important to recognize that cloud computing and cybersecurity are ever-evolving fields that demand continuous learning. This textbook serves as the foundation for your entry into the field. Moreover, if you aspire to elevate your skills to the next level, consider the benefits of instructor-led training. This avenue provides a deeper dive into the tools and techniques used by practitioners, and it offers practical hands-on experience that complements the course materials and expands your tradecraft in the field.

Disclaimer: Before we delve into the specifics, it's important to note that the content within this textbook may include references to real-world testbed environments, simulated scenarios, trademarked names, and copyrighted materials. These are used for educational purposes only, and no infringement of rights is intended. The accuracy of the information is diligently maintained, but errors may still occur. For any concerns regarding copyrighted materials or inaccuracies, please contact the publisher.

More information regarding exam prep and additional tools can be obtained at www.examit.com.

Happy learning!


Table of Contents About this Program: ........................................................................................................................ 23 Target Audience:............................................................................................................................. 23 Program Pre-Requisites: ................................................................................................................. 23 What you’ll learn: ............................................................................................................................ 23 Objectives: ...................................................................................................................................... 24 DOMAIN - 1 .................................................................................................................................... 25 Cloud Architecture and Design........................................................................................................ 25 1.1

Compare and contrast the different types of cloud models. ............................................... 25

1.1.1

Deployment models ................................................................................................... 25

Public ................................................................................................................................... 25 Private .................................................................................................................................. 25 1.1.2

Service models .......................................................................................................... 27

1.1.3

Advanced cloud services ........................................................................................... 29

1.1.4 Shared responsibility model ............................................................................................ 31 1.2 Explain the factors that contribute to capacity planning. ......................................................... 32 1.2.1

Requirements ............................................................................................................ 32

1.2.2 Standard templates ......................................................................................................... 33 1.2.3 Licensing ......................................................................................................................... 34 1.2.4 User density .................................................................................................................... 35 1.2.5 System load .................................................................................................................... 36 1.2.6 Trend analysis................................................................................................................. 36 1.2.7 Performance capacity planning ....................................................................................... 37 1.3 Explain the importance of high availability and scaling in cloud environments. ...................... 39 1.3.1 Hypervisors ..................................................................................................................... 39 1.3.2 Oversubscription ............................................................................................................. 40 1.3.3 Regions and zones ......................................................................................................... 42 1.3.4 Applications..................................................................................................................... 44 1.3.5 Containers....................................................................................................................... 44 1.3.6 Clusters........................................................................................................................... 44 1.3.7 High availability of network functions ............................................................................... 45 1.3.8 Avoid single points of failure ............................................................................................ 47 1.3.9 Scalability ........................................................................................................................ 48 1.4 Given a scenario, analyze the solution design in support of the business requirements. ....... 49 1.4.1 Requirement analysis ...................................................................................................... 49 1.4.2 Environments .................................................................................................................. 53 1.4.3 Testing techniques .......................................................................................................... 54 © Secbay Press

9 | Page

Secbay Press Practice Questions .......................................................................................................................56 Answers to Practice Questions .................................................................................................64 DOMAIN - 2 .....................................................................................................................................69 Security............................................................................................................................................69 2.1

Given a scenario, configure identity and access management. ..........................................69

2.1.1

Identification and authorization ...................................................................................69

2.1.2

Directory services .......................................................................................................71

2.1.3

Federation ..................................................................................................................72

2.1.4

Certificate management ..............................................................................................73

2.1.5 Multifactor authentication (MFA).......................................................................................74 2.1.5

Single sign-on (SSO) ..................................................................................................74

2.1.6

Public key infrastructure (PKI) .....................................................................................75

2.1.8 Secret management .........................................................................................................75 2.2 Given a scenario, secure a network in a cloud environment. ..................................................76 2.2.1 Network segmentation .....................................................................................................76 2.2.2 Protocols ..........................................................................................................................77 2.2.3 Network services..............................................................................................................79 2.2.4 Network flows...................................................................................................................81 2.2.5 Hardening and configuration changes ..............................................................................81 2.3 Given a scenario, apply the appropriate OS and application security controls. .......................84 2.3.1 Policies ............................................................................................................................84 2.3.2 User permissions .............................................................................................................85 2.3.3 Antivirus/anti-malware/endpoint detection and response (EDR) .......................................87 2.3.4 Host-based IDS (HIDS)/ Host-based IPS (HIPS)..............................................................88 2.3.5 Hardened baselines .........................................................................................................89 2.3.6 Builds ...............................................................................................................................90 2.3.7 Encryption ........................................................................................................................90 2.3.8 Mandatory access control ................................................................................................91 2.3.9 Software firewall ...............................................................................................................92 2.4 Given a scenario, apply data security and compliance controls in cloud environments. ..........93 2.4.1 Encryption ........................................................................................................................93 2.4.2 Integrity ............................................................................................................................94 2.4.3 Classification ....................................................................................................................97 2.4.4 Segmentation ...................................................................................................................98 2.4.5

Access control ............................................................................................................99

2.4.6 Impact of laws and regulations .........................................................................................99 2.4.6

Records management...............................................................................................100

2.4.8 Data loss prevention (DLP) ............................................................................................102 © Secbay Press

10 | Page

Secbay Press 2.4.9 Cloud access security broker (CASB) ............................................................................103 2.5 Given a scenario, implement measures to meet security requirements.................................104 2.5.1 Tools ..............................................................................................................................104 2.5.2 Vulnerability assessment................................................................................................107 2.5.3 Security patches.............................................................................................................109 2.5.4 Risk register ...................................................................................................................111 2.5.5 Prioritization of patch application ....................................................................................112 2.5.6 Deactivate default accounts ...........................................................................................112 2.5.7 Impacts of security tools on systems and services .........................................................112 2.6 Explain the importance of incident response procedures. .....................................................113 2.6.1 Preparation ....................................................................................................................113 2.6.2 Incident response procedures ........................................................................................116 Practice Questions .....................................................................................................................122 Answers to Practice Questions ...............................................................................................129 DOMAIN - 3 ...................................................................................................................................134 Deployment....................................................................................................................................134 3.1 Incorporate elements into a cloud-based solution based on a given situation .......................134 3.1.1 Subscription services .....................................................................................................134 File subscriptions ................................................................................................................134 Communications..................................................................................................................134 Email ...................................................................................................................................134 Voice over IP (VoIP) ............................................................................................................135 Messaging ...........................................................................................................................135 Know the primary cloud delivery models..............................................................................135 Collaboration .......................................................................................................................135 Virtual desktop infrastructure (VDI) ......................................................................................136 Directory and identity services .............................................................................................137 Cloud resources 
..................................................................................................................137 3.1.2 Provisioning resources ...................................................................................................138 Compute..............................................................................................................................138 Storage ...............................................................................................................................139 Network ...............................................................................................................................139 3.1.3 Application .....................................................................................................................139 Server less ..........................................................................................................................139 3.1.4 Deploying virtual machines (VMs) and custom images ..................................................140 3.1.5 Templates ......................................................................................................................141 OS templates ......................................................................................................................141 Solution templates ...............................................................................................................141 © Secbay Press

11 | Page

Secbay Press 3.1.6 Identity management......................................................................................................141 3.1.7

Containers ................................................................................................................142

3.1.8 Auto-scaling ...................................................................................................................143 3.1.9 Post-deployment validation ............................................................................................143 3.2 Given a scenario, provision storage in cloud environments. .................................................144 3.2.1 Types .............................................................................................................................144 Block ...................................................................................................................................144 File ......................................................................................................................................145 Object..................................................................................................................................146 3.2.2 Tiers ...............................................................................................................................147 3.2.3 Input/output operations per second (IOPS) and read/write .............................................148 3.2.4 Protocols ........................................................................................................................148 Network file system (NFS) ...................................................................................................149 Common Internet file system (CIFS)....................................................................................149 Internet small computer system interface (iSCSI) ................................................................149 Fiber Channel (FC)..............................................................................................................150 Non-volatile memory express over fabrics (NVMe-oF) ........................................................150 3.2.5 Redundant array of inexpensive disks (RAID) ................................................................152 3.2.6 Storage system features ................................................................................................155 Compression .......................................................................................................................155 Deduplication ......................................................................................................................156 Replication ..........................................................................................................................157 3.2.7 User quotas ...................................................................................................................157 3.2.8 Hyperconverged.............................................................................................................158 3.2.9 Software-defined storage (SDS).....................................................................................159 3.3 Given a scenario, deploy cloud networking solutions. 
...........................................................160 3.3.1 Services .........................................................................................................................160 Dynamic host configuration protocol (DHCP) ......................................................................160 NTP- NETWORK TIME PROTOCOL ..................................................................................161 DNS ....................................................................................................................................162 Content delivery network (CDN) ..........................................................................................163 IP address management (IPAM) .........................................................................................164 3.3.2 Virtual private networks (VPNs)......................................................................................164 Site-to-site ...........................................................................................................................166 Point-to-point .......................................................................................................................166 Point-to-site .........................................................................................................................166 IPSec ..................................................................................................................................167 Multiprotocol label switching (MPLS) ...................................................................................167 © Secbay Press

12 | Page

Secbay Press 3.3.3 Virtual routing .................................................................................................................169 Dynamic and static routing ..................................................................................................169 Virtual network interface controller (vNIC) ...........................................................................169 Subnetting ...........................................................................................................................169 3.3.4 Network appliances ........................................................................................................171 Load balancers ....................................................................................................................171 Firewalls ..............................................................................................................................171 3.3.5 Virtual private cloud (VPC) .............................................................................................172 3.3.6 VLAN/VXLAN/GENEVE .................................................................................................174 3.3.7 Single root input/output virtualization (SR-IOV) ..............................................................175 3.3.8 Software-defined network (SDN) ....................................................................................175 3.4 In a given situation, determine the suitable compute resource allocation for a deployment. ..176 3.4.1 Virtualization ..................................................................................................................176 Simultaneous multi-threading (SMT) ...................................................................................176 Dynamic allocations ............................................................................................................177 Oversubscription .................................................................................................................178 3.4.2 Central processing unit (CPU)/ virtual CPU (vCPU) .......................................................178 3.4.4 Graphics processing unit (GPU) .....................................................................................179 Virtual ..................................................................................................................................180 Pass-through .......................................................................................................................180 3.4.5 Clock speed/Instructions per cycle (IPC) ........................................................................180 3.4.6 Hyperconverged .............................................................................................................180 3.4.7 Memory ..........................................................................................................................181 Dynamic allocation ..............................................................................................................181 Ballooning ...........................................................................................................................181 3.5 Given a scenario, perform cloud migrations. 
.........................................................................182 3.5.1 Physical to virtual (P2V) .................................................................................................182 3.5.2 Virtual to virtual (V2V) ....................................................................................................183 3.5.3 Cloud-to-cloud migrations ..............................................................................................183 Vendor lock-in .....................................................................................................................184 PaaS or SaaS migrations ....................................................................................................184 3.5.4 Storage migrations .........................................................................................................187 Block ...................................................................................................................................187 File ......................................................................................................................................187 Object ..................................................................................................................................188 3.5.5 Database migrations ......................................................................................................189 Cross-service migrations .....................................................................................................190 Relational ............................................................................................................................191 © Secbay Press

13 | Page

Secbay Press Non-relational ......................................................................................................................191 Practice Questions .....................................................................................................................192 Answers to Practice Questions ...............................................................................................198 DOMAIN - 4 ...................................................................................................................................203 Operations and Support .................................................................................................................203 4.1 In a given scenario, incorporate components into a cloud-based solution. ............................203 4.1.1 Logging ..........................................................................................................................203 Collectors .......................................................................................................................203 Simple Network Management Protocol (SNMP)...........................................................203 Syslog .............................................................................................................................204 Analysis...............................................................................................................................204 Severity categorization ........................................................................................................204 Audits ..................................................................................................................................205 Types ..................................................................................................................................205 Automation ..........................................................................................................................205 Trending ..............................................................................................................................205 4.1.2 Monitoring ......................................................................................................................206 Baselines ............................................................................................................................206 Thresholds ..........................................................................................................................206 Tagging ...............................................................................................................................206 Log scrubbing......................................................................................................................207 Performance monitoring ......................................................................................................207 Infrastructure components ...................................................................................................207 Resource utilization .............................................................................................................207 Availability ...........................................................................................................................208 Verification of continuous monitoring activities ....................................................................208 Service management tool integration 
..................................................................................208 4.1.3 Alerting...........................................................................................................................209 Common messaging methods .............................................................................................209 Enable/disable alerts ...........................................................................................................209 Appropriate responses ........................................................................................................209 Policies for categorizing and communicating alerts .............................................................209 4.2 Given a scenario, maintain efficient operation of a cloud environment. .................................211 4.2.1 Confirm completion of backups ......................................................................................211 4.2.2 Life-cycle management ..................................................................................................211 Roadmaps ...........................................................................................................................211 Old/current/new versions .....................................................................................................212 Upgrading and migrating systems .......................................................................................212 © Secbay Press

14 | Page

Secbay Press Deprecations or end of life ...................................................................................................212 4.2.3 Change management .....................................................................................................213 4.2.4 Asset management ........................................................................................................214 Configuration management database (CMDB) ....................................................................214 4.2.5 Patching .........................................................................................................................215 Features or enhancements ..................................................................................................215 Fixes for broken or critical infrastructure or applications ......................................................216 Scope of cloud elements to be patched ...............................................................................216 Rollback ..............................................................................................................................220 4.2.6 Impacts of process improvements on systems ...............................................................220 4.2.7 Upgrade methods...........................................................................................................221 Rolling upgrades .................................................................................................................221 Blue-green...........................................................................................................................221 Canary ................................................................................................................................221 Active-passive .....................................................................................................................222 Development/QA/production/DR .........................................................................................222 4.2.8 Dashboard and reporting................................................................................................223 Tagging ...............................................................................................................................223 Costs ...................................................................................................................................223 Elasticity usage ...................................................................................................................224 Connectivity.........................................................................................................................224 Latency ...............................................................................................................................225 Capacity ..............................................................................................................................225 Incidents ..............................................................................................................................225 Health ..................................................................................................................................225 Overall utilization .................................................................................................................225 Availability 
...........................................................................................................................225 4.3 Given a scenario, optimize cloud environments. ...................................................................227 4.3.1 Rightsizing .....................................................................................................................227 Auto-scaling ........................................................................................................................228 Horizontal scaling ................................................................................................................228 Vertical scaling ....................................................................................................................228 Cloud bursting .....................................................................................................................228 4.3.2 Compute ........................................................................................................................228 CPUs...................................................................................................................................228 GPUs ..................................................................................................................................228 Memory ...............................................................................................................................229 Containers ...........................................................................................................................230 © Secbay Press

15 | Page

Secbay Press 4.3.3 Storage ..........................................................................................................................230 Tiers ....................................................................................................................................230 IOPS ...................................................................................................................................230 Deduplication ......................................................................................................................230 Compression .......................................................................................................................231 4.3.4 Network..........................................................................................................................231 Bandwidth ...........................................................................................................................231 Network interface controllers (NICs) ....................................................................................231 Latency ...............................................................................................................................232 SDN ....................................................................................................................................232 Edge computing ..................................................................................................................232 4.3.5 Placement ......................................................................................................................232 Geographical .......................................................................................................................232 Cluster placement ...............................................................................................................232 Redundancy ........................................................................................................................233 Colocation ...........................................................................................................................233 4.3.6 Device drivers and firmware ...........................................................................................233 Generic ...............................................................................................................................233 Vendor ................................................................................................................................234 Open source........................................................................................................................234 4.4 In a given situation, employ appropriate automation and orchestration methods. 
.................235 4.4.1 Infrastructure as code ....................................................................................................235 Infrastructure components and their integration ...................................................................235 4.4.2 Continuous integration/ continuous deployment (CI/CD) ................................................235 4.4.3 Version control ...............................................................................................................235 4.4.4 Configuration management ............................................................................................236 Playbook .............................................................................................................................236 4.4.5 Containers .....................................................................................................................237 4.4.6 Automation activities ......................................................................................................238 Routine operations ..............................................................................................................238 Updates...............................................................................................................................238 Scaling ................................................................................................................................238 Shutdowns ..........................................................................................................................238 Restarts...............................................................................................................................238 Create internal APIs ............................................................................................................239 4.4.7 Secure scripting .............................................................................................................239 No hardcoded passwords ....................................................................................................239 Use of individual service accounts .......................................................................................239 © Secbay Press


Secbay Press Password vaults ..................................................................................................................239 4.4.8 Orchestration sequencing ..............................................................................................239 4.5 Given a scenario, perform appropriate backup and restore operations. ................................240 4.5.1 Backup types .................................................................................................................240 Incremental .........................................................................................................................240 Differential ...........................................................................................................................240 Full ......................................................................................................................................241 Snapshot .............................................................................................................................241 4.5.2 Backup objects ...............................................................................................................242 Application-level backup ......................................................................................................242 File system backup ..............................................................................................................242 Database dumps .................................................................................................................243 Configuration files................................................................................................................243 4.5.3 Backup targets ...............................................................................................................243 Tape ....................................................................................................................................243 Disk .....................................................................................................................................244 Object ..................................................................................................................................244 4.5.4 Backup and restore policies ...........................................................................................245 Retention .............................................................................................................................245 Schedules ...........................................................................................................................245 Location...............................................................................................................................246 SLAs ...................................................................................................................................246 Recovery time objective (RTO) ............................................................................................246 Recovery Point Objective ....................................................................................................246 3-2-1 rule .............................................................................................................................248 4.5.5 Restoration methods ......................................................................................................249 In place 
...............................................................................................................................249 Alternate location.................................................................................................................249 Restore files ........................................................................................................................249 Snapshot .............................................................................................................................249 4.6 Given a scenario, perform disaster recovery tasks................................................................253 4.6.1 Failovers ........................................................................................................................253 4.6.2 Failback .........................................................................................................................254 4.6.3 Restore backups ............................................................................................................258 4.6.4 Replication .....................................................................................................................258 4.6.5 Network configurations ...................................................................................................259 4.6.6 On-premises and cloud sites ..........................................................................................259 Hot ......................................................................................................................................259 © Secbay Press


Secbay Press Warm ..................................................................................................................................260 Cold.....................................................................................................................................260 4.6.7 Requirements.................................................................................................................261 RPO ....................................................................................................................................261 RTO ....................................................................................................................................261 SLA .....................................................................................................................................262 4.6.8 Documentation ...............................................................................................................263 DR kit ..................................................................................................................................263 Playbook .............................................................................................................................263 Network diagram .................................................................................................................263 4.6.9 Geographical datacenter requirements ..........................................................................263 Practice Questions .....................................................................................................................264 Answers to Practice Questions ...............................................................................................270 DOMAIN - 5 ...................................................................................................................................275 Troubleshooting .............................................................................................................................275 5.1 Given a scenario, use the troubleshooting methodology to resolve cloud-related issues. .....275 Identify the problem. ............................................................................................................275 Establish a plan of action to resolve the problem and implement the solution. .....................277 Verify full system functionality and, if applicable, implement preventive measures ..............277 Document the findings, actions, and outcomes throughout the process ..............................277 5.2 Given a scenario, troubleshoot security issues. 
....................................................................278 5.2.1 Privilege .........................................................................................................................278 Missing ................................................................................................................................278 Incomplete...........................................................................................................................278 Escalation............................................................................................................................278 Keys ....................................................................................................................................278 5.2.2 Authentication ................................................................................................................279 5.2.3 Authorization ..................................................................................................................279 5.2.4 Security groups ..............................................................................................................280 Network security groups ......................................................................................................280 Directory security groups .....................................................................................................280 5.2.5 Keys and certificates ......................................................................................................281 Expired ................................................................................................................................281 Revoked ..............................................................................................................................281 Trust ....................................................................................................................................281 Compromised ......................................................................................................................281 Misconfigured ......................................................................................................................282 5.2.6 Misconfigured or misapplied policies ..............................................................................282 © Secbay Press


Secbay Press 5.2.7 Data security issues .......................................................................................................283 Unencrypted data ................................................................................................................283 Data breaches .....................................................................................................................283 Lack of encryption in protocols ............................................................................................284 Insecure ciphers ..................................................................................................................284 5.2.8 Exposed endpoints.........................................................................................................285 5.2.9 Misconfigured or failed security appliances ....................................................................285 IPS ......................................................................................................................................285 IDS ......................................................................................................................................286 NAC ....................................................................................................................................286 WAF ....................................................................................................................................286 5.2.10 Unsupported protocols .................................................................................................287 5.2.11 External/internal attacks ...............................................................................................287 5.3 Given a scenario, troubleshoot deployment issues. 
..............................................................287 5.3.1 Connectivity issues ........................................................................................................287 Cloud service provider (CSP) or Internet service provider (ISP) outages .............................287 5.3.2 Performance degradation ...............................................................................................288 Latency ...............................................................................................................................289 5.3.3 Configurations ................................................................................................................289 Scripts .................................................................................................................................289 5.3.4 Applications in containers...............................................................................................290 5.3.5 Misconfigured templates ................................................................................................293 5.3.6 Missing or incorrect tags ................................................................................................294 5.3.7 Insufficient capacity ........................................................................................................295 Compute..............................................................................................................................295 Storage ...............................................................................................................................295 Bandwidth issues ................................................................................................................296 Oversubscription .................................................................................................................296 5.3.8 Licensing issues .............................................................................................................298 5.3.9 Vendor-related issues ....................................................................................................298 Migrations of vendors or platforms.......................................................................................298 Integration of vendors or platforms ......................................................................................299 API request limits ................................................................................................................299 Cost or billing issues............................................................................................................299 5.4 Given a scenario, troubleshoot connectivity issues. ..............................................................300 5.4.1 Network security group misconfigurations ......................................................................300 ACL .....................................................................................................................................300 Inheritance ..........................................................................................................................300 © Secbay Press


Secbay Press 5.4.2 Common networking configuration issues ......................................................................301 Peering................................................................................................................................301 Incorrect subnet ..................................................................................................................302 Incorrect IP address ............................................................................................................302 Incorrect IP space ...............................................................................................................303 Routes.................................................................................................................................303 Firewall................................................................................................................................305 Network address translation (NAT) ......................................................................................306 Load balancers ....................................................................................................................307 DNS records........................................................................................................................308 VLAN/VXLAN/GENEVE ......................................................................................................308 Proxy ...................................................................................................................................309 Maximum transmission unit (MTU) ......................................................................................310 Quality of service (QoS) ......................................................................................................310 Time synchronization issues ...............................................................................................312 5.4.3 Network troubleshooting tools ........................................................................................313 Ping .....................................................................................................................................313 tracert / traceroute ...............................................................................................................314 flushdns...............................................................................................................................315 ipconfig/ifconfig/ip ................................................................................................................315 nslookup/dig ........................................................................................................................316 netstat/ss.............................................................................................................................317 Route ..................................................................................................................................318 •

ARP (Address Resolution Protocol) ..........................................................................319

Curl .....................................................................................................................................320 Packet capture ....................................................................................................................321 Packet analyzer...................................................................................................................321 5.5 Given a scenario, troubleshoot common performance issues. ..............................................322 5.5.1 Resource utilization ........................................................................................................322 CPU ....................................................................................................................................322 GPU ....................................................................................................................................322 Storage ...............................................................................................................................323 Network latency ...................................................................................................................323 Replication ..........................................................................................................................323 5.5.2 Application .....................................................................................................................323 Memory management .........................................................................................................324 Service overload..................................................................................................................324 5.6 Given a scenario, troubleshoot automation or orchestration issues ......................................324 © Secbay Press


5.6.1 Account mismatches ......................................................................................324 5.6.2 Change management failures ........................................................................325 5.6.3 Server name changes ....................................................................................325 5.6.4 IP address changes .......................................................................................326 5.6.5 Location changes ...........................................................................................326 5.6.6 Version/feature mismatch ...............................................................................327 5.6.7 Automation tool incompatibility .......................................................................327 Deprecated features ............................................................................................328 API version incompatibility ...................................................................................328 5.6.8 Job validation issue ........................................................................................329 5.6.9 Patching failure ..............................................................................................330 Practice Questions .....................................................................................................332 Answers to Practice Questions ...............................................................................337 Full Length Practice Questions ......................................................................................341 Answers to Full Length Practice Questions .....................................................................357 Glossary.........................................................................................................................364 Index ..............................................................................................................................386


List of Figures Figure 1 Cloud deployment Model ................................................................................................26 Figure 2 Cloud Service models .....................................................................................................28 Figure 3 Shared responsibility model ............................................................................................31 Figure 4 Performance capacity planning .......................................................................................38 Figure 5 CPU Compute Options....................................................................................................40 Figure 6 Database Class...............................................................................................................40 Figure 7 Microsoft Azure Regions .................................................................................................43 Figure 8 Clusters...........................................................................................................................45 Figure 9 Single Web Server ..........................................................................................................46 Figure 10 Multiple Web Servers with Load Balancing ...................................................................47 Figure 11 AWS EC2 Instance Types .............................................................................................50 Figure 12 Lightweight directory access protocol (LDAP) ...............................................................72 Figure 13 Federation Login Example ............................................................................................72 Figure 14 Invalid Certificate Warning ............................................................................................73 Figure 15 List of CAs on a Google Chrome Browser .....................................................................74 Figure 16 Public key infrastructure (PKI) .......................................................................................75 Figure 17 Reverse Proxy ..............................................................................................................82 Figure 18 Forward proxy ...............................................................................................................83 Figure 19 AWS Password Policy...................................................................................................84 Figure 20 Changing the Permissions of a File in Windows ............................................................86 Figure 21 Symbolic Permission Values .........................................................................................87 Figure 22 Host-based IPS (HIPS) .................................................................................................88 Figure 23 Mandatory access control .............................................................................................92 Figure 24 Data loss prevention (DLP) .........................................................................................101 Figure 25 Cloud access security broker (CASB) .........................................................................104 Figure 26 Sample RACI Matrix ....................................................................................................116 Figure 27 Virtual Desktop............................................................................................................137 
Figure 28 Network file system (NFS)...........................................................................149 Figure 29 RAID 0 ........................................................................................................152 Figure 30 RAID 1 ........................................................................................................152 Figure 31 RAID 5 ........................................................................................................153 Figure 32 RAID 6 ........................................................................................................154 Figure 33 RAID 10 ......................................................................................................155 Figure 34 Virtual Private Network ................................................................................166 Figure 35 AWS Service Health Dashboard ..................................................................288 Figure 36 CPU Compute Options ................................................................................295


About this Program: CompTIA Cloud+ is a worldwide certification that verifies the abilities required to create and operate secure cloud environments that ensure corporate systems and data are always available. The CompTIA Cloud+ certification stands out as the sole performance-based IT certification that encompasses cloud-based infrastructure services within the broader realm of IT system operations, regardless of the chosen platform. Transitioning to the cloud opens avenues for the deployment, enhancement, and safeguarding of vital software and data storage. The CompTIA Cloud+ certification validates the technical proficiency essential for securing these invaluable assets. The reality of managing multicloud environments brings with it a new set of issues. CompTIA Cloud+ is designed for cloud engineers who need to be knowledgeable about a variety of products and systems.

Target Audience:
● Systems Administrator
● Systems Engineer
● Cloud Engineer
● Network Administrator
● Network Engineer
● Cloud Specialist
● Cloud Project Manager
● Systems and Network Administrators
● Field Service Technician
● Cloud Developer

Program Pre-Requisites: CompTIA Network+ and Server+ certifications, as well as two to three years of expertise in systems administration or networking, are required.
● A minimum of 2-3 years of practical experience in the fields of IT systems administration or IT networking
● CompTIA Network+ and Server+ certifications or equivalent knowledge
● Familiarity with any major hypervisor technology for server virtualization
● Knowledge of cloud service models
● Knowledge of IT service management
● Practical, hands-on familiarity with either a public or private cloud Infrastructure as a Service (IaaS) platform

What you’ll learn: The CompTIA Cloud+ Study Guide teaches candidates how to comprehend cloud architecture and design, install cloud services and solutions, successfully maintain, protect, and optimize a cloud environment, troubleshoot typical cloud management issues, and prepare for the CompTIA Cloud+ certification test.


The following topics are covered in the cloud networking course:
● Designing a cloud
● Cloud network management
● Keeping cloud resources safe
● Availability and optimization of the cloud

In addition, the usage of automation and orchestration has been emphasized, and troubleshooting is covered throughout the course.

Objectives: The following table outlines the domain areas assessed by this examination and provides an approximate indication of their representation in the test.

Domain                                      Percentage of Examination
1.0 Cloud Architecture and Design           13%
2.0 Security                                20%
3.0 Deployment                              23%
4.0 Operations and Support                  22%
5.0 Troubleshooting                         22%
Total                                       100%


DOMAIN - 1
Cloud Architecture and Design

1.1 Compare and contrast the different types of cloud models.

1.1.1 Deployment models
Moving apps and services to the cloud is a difficult decision for many businesses. For example, a business must decide which apps or services are candidates for cloud migration, who will oversee the transfer, and who will oversee running these services after they are in the cloud. Another thing to think about is which cloud deployment model to use. There are four types of cloud deployment models:
● Public
● Private
● Hybrid
● Community

Public
In a public cloud, the CSP makes computing components available to the general public: resources are shared through multitenant hosting, hardware is shared with multiple clients and managed by the provider, and it is a low-cost option. The majority of people see a public cloud when they think of "the cloud." A public cloud is a shared platform that anybody may use to meet their cloud computing needs. Consumers (cloud clients) may grow their cloud deployments while taking advantage of economies of scale in this fashion. The benefit of using a public cloud is that you normally only pay for what you use by the minute or hour (or you may make upfront commitments to reduce continuing expenses if you know your organization's minimum compute, storage, bandwidth, and usage for the next one to three years). For example, if you establish a virtual machine (VM) in a public cloud, you normally pay for it while it is operational, and then you don't have to pay for it again after it is decommissioned. This is called a pay-as-you-go (PAYG) or subscription-based service. There are several well-known public cloud platform providers, including:
● Amazon Web Services (AWS)
● Microsoft Azure
● Google Cloud Platform (GCP)

Private
In a private cloud, an organization sets up its own centralized data center to provide compute components for its offices; everything is in the organization's control, and banks and financial institutions may use this model. While this technique appears to be ideal, it has several drawbacks. A private cloud, for example, is substantially more expensive (than a public cloud deployment) since the business must pay for all physical resources up front, regardless of whether they are presently being used. Capital expenditure, or capex, is the term for this. Another issue is that private clouds often do not have as many hardware resources as public cloud environments, making it difficult to quickly scale up (by using overprovisioned or significant quantities of idle hardware resources). As a result, organizations will seldom shift all of their services to a private cloud, preferring instead to employ a hybrid cloud. Exceptions include government agencies such as the Department of Defense.

Hybrid
A hybrid cloud is the fusion of multiple separate cloud infrastructures, including private, community, or public clouds, each retaining its distinct identity, yet interconnected through standardized or proprietary technology facilitating the transfer of data and applications. An example of this interoperability is the capability for cloud bursting to distribute the load seamlessly across these cloud environments. A hybrid cloud allows a company to benefit from the greatest features of both public and private clouds. Resources that require a high level of security or must adhere to tight regulatory compliance standards are housed in the private cloud in this situation. The distinction between "private" and "public" resources is defined by rules. This provides more flexibility at a reduced total cost.

Community
A community cloud is set aside exclusively for utilization by a particular group of consumers from organizations with shared interests, encompassing factors such as mission, security prerequisites, policies, and compliance concerns. Possession, administration, and operation of this resource can be attributed to one or multiple organizations within the community, an external entity, or a collaborative arrangement among them. Furthermore, this resource might be situated either on-site or off-site. A community cloud should be employed when organizations like these need to collaborate. Multi-tenancy is handled differently in a community cloud than it is in a traditional public cloud. On a public cloud, multi-tenancy aims to totally isolate resources across tenants. Multitenancy in a community cloud allows users to share resources and apps. A community cloud allows these organizations to work together more effectively while still maintaining control over their users, resources, and services.


Cloud within a cloud
Remember that on a public cloud, multitenancy refers to the use of the same hardware resources by several companies at the same time. Because public cloud providers don't want these firms to be able to view each other's resources, they employ a virtual private cloud (VPC). A VPC is the same as a virtual network in some contexts, such as Azure (VNet). Even if other organizations have resources on the same hardware, a VPC provides characteristics that allow an organization to view just its own resources. Two organizations, for example, could share a physical network, but they can only view traffic transmitted from or to their own resources.

Multicloud
Multicloud is a solution that some businesses may use, despite the fact that it isn't one of the four traditional cloud deployment models. A multicloud is a heterogeneous concept that arises when an enterprise uses more than one cloud platform to host its services, whether private or public. A business can, for example, run its IaaS workloads on AWS, its PaaS workloads on Azure, and its data-focused workloads on GCP. On-premises databases are still possible. Multiclouds may be complicated, and integrating the many public cloud systems may need specialist software. Because this specialized software isn't always widely accessible, the business that uses the multicloud must design it. Why would you want to utilize a multicloud? Different cloud suppliers offer different services and are located in different parts of the world. In large enterprises, one cloud vendor's solutions may not satisfy the demands of one department, but they may be a perfect fit for the needs of another. Contractual and unavailability issues, such as vendor lock-in, can also be mitigated by using a multicloud.

Multitenancy
A multi-tenant cloud refers to a cloud computing framework wherein customers can jointly utilize computing resources within either a public or private cloud environment. The data of each tenant is segregated and remains inaccessible to other tenants. In this multi-tenant cloud model, users are allocated dedicated storage space for housing their projects and data, ensuring privacy and separation. AWS Lambda is an example of multitenancy.

1.1.2 Service models
A cloud service model is a method of classifying cloud characteristics into one of three groups:
● Infrastructure as a Service (IaaS)
● Platform as a Service (PaaS)
● Software as a Service (SaaS)


The benefits of classifying services into one of these service models are numerous.

One of the most significant benefits is gaining an understanding of which aspects of the service are the duty of the cloud provider and which aspects are the responsibility of the client. The shared responsibility model is the name for this approach. Another major benefit is being able to comprehend the sort of workload that would occur in the cloud rather than on-premises. For example, an on-premises virtual machine may be converted to a container design and hosted as a PaaS service on your preferred cloud platform. Because the underlying operating system and hardware are now the responsibility of the cloud provider, the patching procedure and underlying infrastructure specifications are no longer important.

Infrastructure as a Service (IaaS)
The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources on which the consumer can deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over the operating systems, storage, and deployed applications, and possibly limited control of select networking components (for example, host firewalls). Let's say you want to create a service that has access to the hardware offered by the cloud platform. You might wish to use the cloud to deploy a virtual machine, for example. Infrastructure as a Service (IaaS) is the name for this type of service. IaaS services need some level of access to the underlying hardware. A brief sketch of provisioning such a virtual machine follows the examples list. IaaS examples:

● Amazon Elastic Compute Cloud (EC2) https://aws.amazon.com/ec2/
● Microsoft Azure Virtual Machines https://azure.microsoft.com
● Google Compute Engine https://cloud.google.com/compute
● DigitalOcean Droplets https://www.digitalocean.com
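The following is a minimal sketch of what consuming IaaS looks like in practice, using the AWS SDK for Python (boto3). The region, AMI ID, and key pair name are placeholder assumptions for illustration, not values taken from this guide.

# Minimal IaaS sketch: provision a virtual machine with the AWS SDK for Python (boto3).
# The region, AMI ID, and key pair name below are placeholders; substitute real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    KeyName="my-keypair",              # assumes this key pair already exists
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}; the OS and everything above it is now your responsibility.")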

Platform as a Service (PaaS)
A cloud computing paradigm where a third-party provider offers both the hardware and software tools typically required for application development to users via the internet. The PaaS provider manages and hosts both the hardware and software on its own infrastructure, eliminating the need for users to set up in-house hardware and supplying the necessary hardware and software for creating or operating a new application. Examples:
● AWS Elastic Beanstalk https://aws.amazon.com/elasticbeanstalk/
● Heroku https://www.heroku.com/
● Microsoft Azure (mostly used as PaaS) https://azure.microsoft.com
● Force.com
● OpenShift https://www.openshift.com/
● Apache Stratos https://stratos.apache.org/ (retired)
● Magento Commerce Cloud (Adobe) https://magento.com/products/magento-commerce

Software as a Service (SaaS)
Most likely, you've used a SaaS offering, if not more than one. You have used SaaS if you have utilized any of the following:
● Microsoft Office 365
● Google Apps (like Google Docs and Google Mail)
● DocuSign
● Dropbox
● Zoom
● Salesforce

"These are merely programs that run through my web browser," you might say, and that is precisely the point. With SaaS, the client utilizes software that is hosted remotely (typically in the cloud).

1.1.3 Advanced cloud services
Several cloud-based services can be categorized as advanced. For the Cloud+ exam, you should know at least the following: the Internet of Things, serverless computing, machine learning, and artificial intelligence.


Internet of Things (IoT)
In recent years, the IoT industry has grown, with several commercial solutions now widely available. They include "controller" gadgets such as Amazon Echo, smart doorbells, smart light bulbs, smart appliances (washing machines, refrigerators, and other similar devices), and many others. What does it imply when a gadget is classified as an Internet of Things (IoT) device? The basic features of an IoT device are localized computation and the capacity to connect with a hub over the Internet, where it may submit gathered data and receive instructions for the next set of tasks. IoT devices aren't just for household appliances. In reality, IoT began in the manufacturing business, where it was used to leverage automation and remote machine-to-machine connections. IoT devices are used to track traffic, assess the efficiency of manufacturing plants, and track everyday shipments, from enormous shipping containers to millions of small items.

Serverless
As mentioned before in this chapter in the "Platform as a Service (PaaS)" section, a cloud provider offers a platform (cloud-based operating system), and a developer produces an application that runs on the platform. In certain circumstances, an application may be complex enough to require a comprehensive platform, although this may not be the case for smaller apps. Instead, the program might be launched as a "serverless" app, which consumes fewer resources and is less expensive. In some ways, the phrase "serverless" is deceptive. The program is still hosted on a server. A serverless application is a bundle of code that may execute on hardware while abstracting the hardware and any underlying dependencies from the user, which is usually a developer. A Java developer, for example, may not be aware of the hardware and operating system (Windows or Linux) needed to execute the Java environment, allowing them to concentrate only on development. Creating a serverless application has the advantage of being significantly more cost effective. The program only runs when it's needed (it's usually started by an action called a trigger), and the client only pays when it's operating. When the application completes its job, it shuts down, and the consumer isn't charged again until the application relaunches. The customer pays for the platform in a PaaS situation, regardless of whether the application is completing any activities. The following are some examples of serverless products (a minimal handler sketch follows the list):
● Azure Functions
● Google Cloud Functions
● AWS Lambda
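To illustrate the trigger-driven model described above, here is a minimal sketch of an AWS Lambda-style function handler in Python. The structure of the incoming event is a simplifying assumption for illustration, not any one service's exact schema.

# Minimal serverless sketch: a handler that runs only when an event (trigger) arrives.
# The shape of "event" below is an illustrative assumption.
import json

def handler(event, context):
    # The platform invokes this function on a trigger (for example, an HTTP request
    # or a file upload); you pay only for the time this code actually runs.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }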


Machine learning/Artificial intelligence (AI)
Mat Velloso famously tweeted, "Difference between machine learning and AI: If it's written in Python, it's presumably machine learning. If it's written in PowerPoint, it's probably AI." This joke plays on the machine intelligence community's displeasure with the recent dilution of the term artificial intelligence. The joke also makes a lighthearted attempt to explain the distinction between machine learning and artificial intelligence. The tweet implies that machine learning and AI are two distinct concepts, yet machine learning is a component of AI. The objective of AI is to develop software that can think in the same way that humans do. This is a fairly difficult problem, but the idea is to get software to the point where it can make decisions on its own rather than being told what to do.

1.1.4 Shared responsibility model
Customers should understand what portions of a cloud service they are responsible for maintaining and what parts the cloud platform provider is responsible for. A typical overview of these roles may be seen in the following table.

Layer                  On-Prem     IaaS        PaaS        SaaS
Data                   Customer    Customer    Customer    Customer
Application            Customer    Customer    Customer    Provider
Application Security   Customer    Customer    Customer    Provider
Runtime                Customer    Customer    Provider    Provider
Middleware             Customer    Customer    Provider    Provider
OS                     Customer    Customer    Provider    Provider
Virtualization         Customer    Provider    Provider    Provider
Server Hardware        Customer    Provider    Provider    Provider
Storage                Customer    Provider    Provider    Provider
Networking             Customer    Provider    Provider    Provider
Physical Security      Customer    Provider    Provider    Provider

Figure 3 Shared responsibility model

In the table, the layers marked "Provider" are the cloud provider's obligation, while the layers marked "Customer" are the customer's responsibility. The following examples are meant to help clarify things:
● A client uses Amazon EC2 to establish a virtual machine in the AWS cloud. The customer is responsible for patching and maintaining the operating system and any software that is installed on it. The virtual machine is installed on Amazon's servers, which Amazon is responsible for protecting and maintaining.
● On Azure App Service, a customer develops an application and hosts it. The client is now in charge of ensuring that the application's security is up to date and that it is updated on a regular basis. Azure oversees patching the operating system, maintaining safe and dependable storage, and ensuring a secure and speedy network.
● A consumer creates a document in Google Docs. Although the client is responsible for the data contained in the document and, in some situations, the security of this data, Google is responsible for almost everything linked to the application. For example, Google Docs allows users to exchange documents, making that function the user's responsibility rather than Google's.

1.2 Explain the factors that contribute to capacity planning.

1.2.1 Requirements
Hardware - horizontal scaling vs. vertical scaling; vertical scaling means buying bigger boxes (capital expenditures).
Required Memory = (Target Requests Per Second) * (Average Length of Request in Seconds) * (Average Size of Process at Steady State)
Software - buying more licensing, or BYOL (Bring Your Own License); these are operating expenditures.
Budgetary - the idea that your capacity may be driven by your budget.
Business Need Analysis - drives which apps are moved to the cloud.
Capacity planning aims to align demand with available resources. This process involves assessing existing systems, evaluating their performance, and identifying usage patterns to make informed predictions about demand. Resources are then allocated and provisioned to meet this demand. While capacity planning does involve performance measurement and may contribute to expertise for performance improvement, its primary objective is to accommodate workloads rather than enhance efficiency. The central focus is not on performance tuning or optimization. To effectively adjust a system's capacity, it is crucial to first comprehend the workload it handles and characterize it accordingly. A system employs resources to fulfill cloud computing requirements, including processor, memory, storage, and network capabilities. Each of these resources has a utilization rate, and under increased demand, one or more of these resources may reach a limit, constraining performance. The capacity planner's goal is to pinpoint the critical resource with this limitation and introduce additional resources to elevate the threshold for increased demand. Scaling a system can be achieved either through vertical scaling, involving more powerful systems, or horizontal scaling, which entails adding more, albeit less powerful, systems. Perhaps we can conceive of a system capable of simultaneously accommodating both service levels, but there are reservations about its practical implementation. While this concept appears feasible in theory, and cloud computing provides a novel avenue for exploring such perfectly scalable solutions, the practical execution is fraught with challenges. Even in cloud environments, we are likely to encounter service providers capable of scaling higher at a higher cost, while others prioritize cost-efficiency over global scalability. This will require customers to make choices between these options. In practice, a range of choices will be offered to customers, alongside those provided in-house or sourced from cloud or other outsourcing vendors. Furthermore, this approach can serve as a means to regulate demand, not merely to configure the supply side. Customers utilizing elastic services will determine their usage based on budget considerations. In certain scenarios, they may opt to refrain from expanding their capacity due to budget constraints. The conventional belief that "more is better" (i.e., upsizing) will give way to the concept of "right-sizing": ensuring just enough capacity, precisely when needed. A worked example of the memory formula appears below.
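A small worked example of the memory requirement formula above; the request rate, request duration, and process size are made-up numbers chosen only for illustration.

# Worked example of: Required Memory = requests_per_second * avg_request_seconds * avg_process_size
# The numbers below are illustrative assumptions, not benchmarks.
requests_per_second = 200        # target requests per second
avg_request_seconds = 0.25       # average length of a request, in seconds
avg_process_size_mb = 64         # average size of a worker process at steady state, in MB

# On average, 200 * 0.25 = 50 requests are in flight at once,
# so roughly 50 concurrent processes of ~64 MB each are needed.
required_memory_mb = requests_per_second * avg_request_seconds * avg_process_size_mb
print(f"Required memory: {required_memory_mb:.0f} MB")   # 3200 MB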

1.2.2 Standard templates
Standard templates use scripting tools to create individual server builds that can become the basis for other cloud applications. Standard templates:
● Exist in between IaaS and PaaS
● Allow IT professionals to manage the configuration of a particular cloud server
Common template types include:
● basic server
● full server
● application server
● off-the-shelf server

While the cloud can introduce essential automation to IT systems, there may be certain aspects of cloud deployment that one may prefer to oversee manually, as pointed out by Szynaka. Cloud templates leverage scripting tools to generate individual server configurations that can establish benchmarks for cloud applications. Positioned between Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), these templates are under the ownership of the business and empower IT teams to oversee the setup of a specific cloud server. In the realm of cloud templates, widely recognized open-source tools include Puppet from Puppet Labs and Chef from Opscode, along with alternatives like RightScale Cloud Management and Racemi CloudPath. These tools streamline the scripting process, effectively serving as interfaces for users. Szynaka aptly describes cloud templates as descendants of enterprise standard configurations, expanding the choices for standard configurations in the cloud. For instance, in the era preceding widespread cloud adoption, when a department requested a server for a new network management application, IT administrators might have hesitated because it did not align with the standard configurations. Nowadays, cloud templates facilitate the creation and preservation of such specific server configurations. As an organization's reliance on cloud computing, particularly IaaS, continues to grow, these templates emerge as indispensable tools. "They become important when you get above 50 or 60 servers in the cloud," Szynaka said. "These tools allow you to keep control of these images. IaaS wasn't meant to be reader-friendly," and templating allows users to give servers individual names, rather than the IaaS-issued ones. Each comes with varying features and levels of complexity and support; for example, the basic server is a simple machine image with an operating system, while the off-the-shelf edition is preconfigured with application support available. (Those off-the-shelf options are available in the Amazon Web Services Marketplace.)
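To make the idea of a standard template concrete, here is a minimal sketch in Python of deriving an individual server build from a shared base template. The field names and values are illustrative assumptions, not the format of any specific tool such as Puppet or Chef.

# Minimal sketch: derive an individual server build from a standard base template.
# Field names and values are illustrative; real tools use their own formats.
import copy

BASE_TEMPLATE = {
    "os": "ubuntu-22.04",
    "cpu_cores": 2,
    "memory_gb": 4,
    "packages": ["openssh-server", "monitoring-agent"],
}

def build_server(name, overrides):
    """Start from the standard template and apply per-server overrides."""
    build = copy.deepcopy(BASE_TEMPLATE)
    build.update(overrides)
    build["name"] = name          # templating lets you give servers individual names
    return build

app_server = build_server("app-01", {"memory_gb": 8, "packages": BASE_TEMPLATE["packages"] + ["nginx"]})
print(app_server)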

1.2.3 Licensing
● Per User
● Socket Based
● Volume Based
● Core Based
● Subscription Based

Managing hardware resources can be less of a challenge than managing license agreements. Successfully managing software license agreements in a virtual environment is a tricky proposition. The software application must support licensing a virtual instance of the application. Some software vendors still require the use of a dongle or a hardware key when licensing their software. Others have adapted their licensing agreements to coexist with a virtual environment. A guest requires a license to operate just as a physical server does. Some vendors have moved to a per-CPU-core type of license agreement to adapt to virtualization. No matter whether the application is installed on a physical server or a virtual server, it still requires a license. Organizations have invested heavily in software licenses. Moving to the cloud does not always mean that those licenses are lost. Consider "Bring Your Own License" (BYOL) as an illustration; this feature, designed for Azure migrations, enables the seamless migration of existing supported licenses to Azure. This approach prevents companies from incurring duplicate licensing costs. In addition, software assurance, accompanied by license mobility, extends the flexibility to transfer licenses to alternative cloud platforms like Amazon Web Services (AWS) or VMware vCloud. Among the various cloud delivery models, infrastructure as a service (IaaS) and platform as a service (PaaS) present the most intricate challenges concerning software licensing. In contrast, software as a service (SaaS) tends to be less problematic since, as the name implies, the software is an integral part of the cloud provider's services. In the case of IaaS and PaaS, customers share control over what runs within the cloud environment, encompassing third-party software. In IaaS, customers do not manage or oversee the underlying cloud infrastructure but may retain control over operating systems and deployed applications. On the other hand, with PaaS, although customers usually lack control over the operating system, they may exercise control over deployed applications.

Per user
The three basic license types are per-user, per-device, and "enterprise." Paying per user is a tried-and-true method wherein a user is granted a license to use the application or server. This is subdivided into concurrent users and total users. A concurrent-user license simply means that you are licensed for up to x number of users simultaneously. You can have 25 concurrent licenses and 2,500 users, but as long as only 25 people are using the system at one time, all is well. If the license is based on total users, 2,500 licenses are needed.
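A tiny sketch of the concurrent-user arithmetic described above; the numbers simply mirror the example in the text.

# Concurrent-user licensing sketch: total users can exceed the license count
# as long as simultaneous (concurrent) users stay within it.
concurrent_licenses = 25
total_users = 2500

def can_log_in(active_sessions):
    """Allow a new session only while concurrent usage is under the license limit."""
    return active_sessions < concurrent_licenses

print(can_log_in(24))   # True: a 25th simultaneous user is still within the license
print(can_log_in(25))   # False: a 26th simultaneous user would exceed 25 licenses
# A total-user license, by contrast, would require one license per user (2,500 here).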


Socket Based/Core Based
For the purposes of licensing Oracle programs in the cloud environment, customers are required to count each virtual core as equivalent to a physical core. This policy applies to all programs available on a processor metric. When licensing Oracle programs with Standard Edition One or Standard Edition in the product name, the pricing is based on the size of the EC2 instances. EC2 instances with four or fewer virtual cores are counted as one socket, which is considered equivalent to a processor license. For EC2 instances with more than four virtual cores, every set of four virtual cores used (rounded up to the nearest multiple of four) corresponds to a licensing obligation for one socket. (A sketch of this rounding arithmetic appears at the end of this section.)

Volume Based Licensing
In the realm of software licensing, volume licensing refers to the practice of issuing a license that grants permission for a single computer program to be employed across a substantial number of computers or by a significant user base. For instance, Microsoft's volume-licensing programs encompass software such as Microsoft Windows and Microsoft Office.

Office 365
Office 365 represents Microsoft's comprehensive and secure assortment of cloud-based productivity services explicitly crafted for cloud computing. It is made available in suites tailored for organizations of all sizes. These suites encompass offerings such as Office 365 Pro Plus, which delivers the complete downloadable Office client alongside a suite of web-based applications (including Word, Excel, and PowerPoint). Additionally, it provides business-grade email, file sharing capabilities, solutions for meetings and communication, and an internal collaborative social network. Each suite places a strong emphasis on safeguarding your data against potential threats by addressing security requirements at various levels. Office 365 extends accessibility to productivity tools from any location, offers straightforward IT management solutions, and is supported by a financially guaranteed Service Level Agreement.

Subscription Based
Within a subscription-oriented model, cloud users commonly make an initial payment before gaining entry to cloud services. The pricing structure frequently depends on the duration of the subscription, with extended subscriptions often leading to reduced costs.
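A minimal sketch of the virtual-core-to-socket rounding described under Socket Based/Core Based above; it only applies the four-virtual-cores-per-socket rule from the text and is not an official licensing calculator.

# Sketch of the socket-counting rule described above: instances with up to four
# virtual cores count as one socket; larger instances count one socket per four
# virtual cores, rounded up. Not an official licensing calculator.
import math

def sockets_required(virtual_cores):
    if virtual_cores <= 4:
        return 1
    return math.ceil(virtual_cores / 4)

for vcpus in (2, 4, 8, 10, 16):
    print(f"{vcpus} virtual cores -> {sockets_required(vcpus)} socket license(s)")
# 2 -> 1, 4 -> 1, 8 -> 2, 10 -> 3, 16 -> 4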

1.2.4 User density
Consider a scenario in which you need to provide virtual machines for your workers so that they can do their tasks. "How many virtual machines do I need to deploy?" you might wonder. When it comes to cloud implementation, this is a crucial question to consider. User density is a key factor in determining the answer. Consider user density to be a measurement of how many people can access a particular system or resource without it being overburdened or underutilized. Because this is a crucial computation, calculating the appropriate user density requires some trial and error as well as some educated guesswork. Because users can be unpredictable, it's difficult to forecast exactly how they'll use a resource without making informed estimates.


There are tools available to assist with these computations, such as system load information and trend analysis tools.
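One simple way to turn a user-density estimate into a VM count is sketched below. The numbers (users per VM, headroom) are assumptions you would derive from your own load testing and trend analysis, not vendor guidance.

import math

def vms_needed(expected_users: int, users_per_vm: int, headroom: float = 0.2) -> int:
    # users_per_vm comes from your own testing; headroom reserves spare
    # capacity so a usage spike does not overload the deployment.
    effective_capacity = users_per_vm * (1 - headroom)
    return math.ceil(expected_users / effective_capacity)

print(vms_needed(expected_users=600, users_per_vm=50))  # 15 VMs with 20% headroom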

1.2.5 System load
The phrase "system load" describes how busy a resource or its components are. If the resource is a database, for example, system load might refer to the database's average read and write operations. System load statistics change depending on the resource and should be tracked over time. The system load offers excellent insight into the performance of your resources, which is important for capacity planning. A rapid increase in system load is usually seen as an anomaly, but a persistent surge over a period of time, such as a day or week, may suggest that additional resources are required. Trend analysis is used with system load information to assess whether extra (or perhaps fewer) resources are needed for optimal resource performance.

1.2.6 Trend analysis
Consider the following scenario: to manage online traffic from consumers and clients, your company now has three web servers. Some consumers have recently complained that your web servers are sluggish and unresponsive at times. You must establish whether this is a problem with your web servers or something else (the network, customer-side problems, and so on). Trend analysis can aid in this process. Based on data gathered over time, you'll be able to evaluate whether you need to add additional resources or, in certain situations, pull back and lower the quantity of resources. You'll need to know about baselines, patterns, and anomalies in order to perform trend analysis.

Baselines
A baseline is statistical data that shows the system demand of a resource under "normal" conditions. When a resource is first deployed, a baseline is normally established, and further baselines are created as your environment changes (such as when you add additional resources or increase the capabilities of a resource). To evaluate whether a resource is overtaxed or underused, baseline values are compared to future system demands.

Patterns
A pattern can aid in capacity planning decisions that are both wise and adaptable. Consider the case where your organization releases a new version of your software product every six months. You've found that when a new version is launched, there's an increase in traffic to your web servers based on past baselines. As a result, you can prepare for the rise by raising the number of web servers available prior to the introduction of the new software version. According to past trends, you can also reduce the number of web servers a week after the software release, which is when web server traffic normally returns to normal.


Anomalies
When statistical data deviates from the standard or norm, it is called an anomaly. These anomalies can be beneficial in some circumstances, but they can also be problematic if they are not fully understood. Consider the case of a sudden increase in traffic to your web server. This anomaly might be the consequence of hacking efforts against your firm or a surge in demand for your products and services. The cause of the anomaly must be determined, because the appropriate reaction to the anomaly (if one is needed) must be carefully evaluated.
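The following sketch shows one simple way to flag an anomaly by comparing a current measurement against a stored baseline. The "three standard deviations" tolerance is a common rule of thumb, not a requirement, and the sample numbers are invented for illustration.

from statistics import mean, stdev

def is_anomaly(baseline_samples, current, tolerance=3.0):
    # Flag a measurement that deviates from the baseline by more than
    # `tolerance` standard deviations.
    baseline = mean(baseline_samples)
    spread = stdev(baseline_samples)
    return abs(current - baseline) > tolerance * spread

# Baseline web-server requests per second collected under normal conditions
samples = [110, 120, 118, 125, 115, 122, 119]
print(is_anomaly(samples, current=450))  # True: investigate an attack or a demand surge
print(is_anomaly(samples, current=128))  # False: within normal variation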

1.2.7 Performance capacity planning
The goal of performance capacity planning is to figure out how many resources are available based on performance. In other words, for the performance to be acceptable to the customer, your resources must fulfill the demands of internal or external consumers. The system load data and trend analysis described in earlier sections of this chapter are used in performance capacity planning. Remember that this is a continuous process, not something you complete once and then forget about. Areas to consider include the following:
● Application performance
● Bandwidth
● Caching
● Database performance
● File system performance
● Inter-availability zone performance
● Network latency
● Scalability
● Storage I/O operations per second
● Storage read/write performance
● Swap file utilization

Just because you have created an operational cloud deployment doesn't mean your work is over! You must continually monitor performance and make sure there are no interruptions to services. Fortunately, this function has largely been automated. As you will learn in later chapters, events can be collected and analyzed with monitoring applications. Trends and acceptable performance metrics can be defined. Should these be exceeded, the management systems can generate events that run scripts for many desired responses to the event. Also, alerts such as text messages, e-mails, or calls to other applications can be defined and deployed.


The use of variances also assists in the automation of the cloud. For example, you can use the CPU utilization example discussed earlier to alert the cloud automation servers to add additional virtual CPUs to a VM if utilization has, for example, exceeded 98 percent for more than 10 minutes. This of course can be applied to storage, networking, and all servers and applications that are automated. These automated responses to events make cloud operations more efficient, responsive, and resilient than traditional data centers. The cloud customer can set predefined thresholds with the cloud provider. When these thresholds are exceeded, the cloud provider can use automation applications to add capacity that has been agreed upon and contracted for. Another example may be a company that is hosting its corporate e-commerce website on the cloud. If the company were to run a very successful advertising campaign for its products, the traffic and load on the web services may spike. With this known variance from the baseline, the cloud provider's automation systems may automatically add web compute resources dynamically to handle the additional workload.

Understanding Cloud Performance
Cloud performance encompasses the collective capabilities of diverse components and their interactions. The level of performance attainable in your deployment results from a fusion of the cloud service provider's capabilities and architecture, coupled with your operational design and implementation. A well-structured framework encompasses a wide array of components, including, but not limited to, the following:
● Application performance
● Bandwidth
● Caching
● Database performance
● File system performance
● Inter-availability zone performance
● Network latency
● Scalability
● Storage I/O operations per second
● Storage read/write performance
● Swap file utilization

Ongoing network monitoring and management allow you to measure and view an almost unlimited number of cloud objects. If any parameter extends beyond your predefined boundaries, alarms can be generated to alert operations staff and even to run automated scripts to remedy the issue.

Variations in the Cloud: Performance Metrics
Assessments and benchmark testing are intended to allow you to collect statistics and measurements of your operations in various states. For example, what is the CPU utilization under low, normal, and heavy workloads? How many storage I/O operations are occurring during the day compared to overnight? By constantly monitoring and measuring your operations, you can assign alarms when a metric exceeds the thresholds you define. There is a nearly infinite number of measurement points in the cloud. You can select the measurements that are important to you and define the metrics of normal and abnormal operations. Based on these triggers, alarms can be created to alert users via text or e-mail or, more commonly, another application that will run a script to take action on the alarm.
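A minimal sketch of the threshold-and-alarm idea, using the CPU utilization example from above (98 percent sustained for 10 minutes). The two callables passed in are placeholders for whatever monitoring and automation interfaces are actually in use; they are assumptions, not a specific provider's API.

import time

CPU_THRESHOLD = 98.0      # percent, from the example above
SUSTAINED_SECONDS = 600   # 10 minutes
SAMPLE_INTERVAL = 60      # check once per minute

def monitor(read_cpu_utilization, add_vcpu_to_vm):
    # Poll a metric and trigger the contracted automated response when it
    # stays above the threshold for the sustained period.
    breach_started = None
    while True:
        if read_cpu_utilization() > CPU_THRESHOLD:
            breach_started = breach_started or time.time()
            if time.time() - breach_started >= SUSTAINED_SECONDS:
                add_vcpu_to_vm()       # automated response to the sustained variance
                breach_started = None  # start over after acting
        else:
            breach_started = None      # back to normal; reset the timer
        time.sleep(SAMPLE_INTERVAL)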

1.3 Explain the importance of high availability and scaling in cloud environments.
1.3.1 Hypervisors
Some of the most prevalent cloud-based resources are virtual machines (VMs). A virtual machine is an operating system that shares hardware with other operating systems. This sharing would often be problematic, since operating systems are meant to handle hardware devices directly and exclusively, which would cause issues if two operating systems attempted to manage hardware devices at the same time. The introduction of the hypervisor solves this problem. The hypervisor presents virtual hardware devices to the operating systems that are executed as virtual machines. Depending on how the hypervisor is configured, each virtual machine is given a virtual CPU (vCPU), a virtual hard drive, a virtual network interface, and other virtual devices.

So, who or what oversees the actual (physical) hardware? The answer depends on the hypervisor type. A Type 1 hypervisor runs directly on the system hardware and functions as an operating system in its own right. This hypervisor supervises the hardware directly, and it performs better than a Type 2 hypervisor. A Type 2 hypervisor operates on a machine with a host operating system already installed. Although the Type 2 hypervisor collaborates with the host operating system to manage physical hardware, the host operating system retains complete control of the hardware. Type 2 hypervisors are not often employed in cloud computing systems due to the overhead produced by the host operating system.


Affinity
While affinity is most closely linked with virtual machine technology, it may also be applied to other cloud technologies. For virtual machines, think of affinity as a way to keep virtual machines together, frequently on the same network or even on the same hypervisor. Most cloud environments are huge, with thousands of compute nodes (a fancy way of saying a computing device or a server). Many of these compute nodes will be physically close to one another, such as in the same data center or rack. Others will be dispersed over many data centers or perhaps separate buildings. When two virtual machines need to communicate efficiently, this design can cause issues.

Anti-affinity
Anti-affinity is the opposite of affinity, as you might expect. This phrase literally means "keep these things apart." In terms of virtual machines, anti-affinity can mean "keep these virtual machines in distinct server rooms" or even "keep these virtual machines in different geographic locations." Consider the case of a virtual machine that hosts a database to better understand the purpose of anti-affinity. This server is on the East Coast of the United States, and you need a backup server to guarantee that you don't lose any data if the data center where the database is housed suffers a disaster. Because it wouldn't be a good idea to keep the backup database in the same East Coast geographic region in this scenario, you'd use an anti-affinity rule to store the backup database somewhere else.
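As a purely conceptual sketch (not any particular provider's placement API), a scheduler evaluating affinity and anti-affinity rules might reason as follows; the host names and group are made up for illustration.

def placement_satisfies(rule, group_hosts, candidate_host):
    # group_hosts: hosts already running VMs from the same group
    # candidate_host: where the scheduler wants to place the next VM
    if not group_hosts:                 # first VM in the group can go anywhere
        return True
    if rule == "affinity":              # keep the group together
        return candidate_host in group_hosts
    if rule == "anti-affinity":         # keep the group apart
        return candidate_host not in group_hosts
    raise ValueError(f"unknown rule: {rule}")

db_group_hosts = {"host-east-1"}        # primary database already placed here
print(placement_satisfies("anti-affinity", db_group_hosts, "host-east-1"))  # False
print(placement_satisfies("anti-affinity", db_group_hosts, "host-west-7"))  # True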

1.3.2 Oversubscription
If you've ever traveled by plane, you've undoubtedly heard announcements like this one: "We're searching for passengers who are willing to accept a later flight in return for a future travel voucher." This is the outcome of an airline practice known as overbooking, in which the airline books more customers than seats because some passengers have a history of canceling flights at the last minute. This booking strategy allows an airline to make the most of available seats, even if it does occasionally require some passengers to be transferred to other flights.

Cloud providers use a similar strategy. When a compute node is used to host various resources (virtual machines, databases, and so on), cloud providers will allocate more hardware resources than are physically available in order to maximize the compute node's revenue. Cloud providers are simply wagering that customers will not use all the physical or virtual hardware resources supplied for a cloud instance, hence the compute node's resources will be "overbooked." This is referred to as oversubscription. Oversubscription can occur with any physical or virtual resource, but the three most prevalent are compute (CPU), network, and storage.


Compute
The compute portion of a cloud resource consists of numerous components, including
● Physical CPUs
● Number of processors
● Number of cores
● Threads (also known as hyperthreading)
● vCPUs (virtual CPUs)

When you construct a virtual machine, you may select these values. For example, when building a virtual machine on Amazon Web Services, you may select the number of CPUs, threads, and vCPUs, as shown in Figure 5.

Figure 5 CPU Compute Options

Network
Cloud providers understand that not all of their customers' resources will be busy at the same time. Because of this, they will frequently oversubscribe network bandwidth. When you build a Relational Database Service (RDS) instance on AWS, for example, you must select the instance class, which determines the database instance's bandwidth, as shown in Figure 6.

Figure 6 Database Class


The database instance will be given a maximum of 4,750 Mbps, as shown in the figure. If you summed up all of the assigned bandwidth for all of the instances hosted on the physical system, you'd probably find that the total allocated bandwidth surpasses the real bandwidth available.

Storage
Cloud companies frequently oversubscribe storage as well, although this one is a little more complicated. A resource is frequently assigned two storage components:
● The storage size: the amount of space that the resource can take up.
● IOPS (input/output operations per second): the amount of data that may be read from or written to a storage device in one second.
Cloud providers understand that, like CPU and network bandwidth, it is unusual for all cloud users to utilize 100% of the available shared storage space. Furthermore, it is unlikely that all clients sharing storage space will attempt to use the maximum IOPS. The cloud provider can therefore safely oversubscribe the storage allocation.
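As a simple illustration of the idea, an oversubscription ratio can be expressed as allocated capacity divided by physical capacity. The figures below are invented (except the 4,750 Mbps value reused from the example above) and are not any provider's actual ratios.

def oversubscription_ratio(allocated, physical):
    # A ratio above 1.0 means the provider has promised more than the hardware
    # could deliver if every tenant used its full allocation at once.
    return allocated / physical

# Hypothetical compute node: 64 physical cores backing 256 allocated vCPUs
print(oversubscription_ratio(allocated=256, physical=64))            # 4.0 (4:1 vCPU)

# Hypothetical uplink: 40 Gbps shared by 12 instances promised 4,750 Mbps each
print(oversubscription_ratio(allocated=4750 * 12, physical=40_000))  # 1.425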

1.3.3 Regions and zones
Extensive cloud operations typically segment their activities into regions to enhance fault tolerance and provide localized performance benefits. A region, in this context, doesn't refer to a single, massive data center but rather denotes a distinct geographical presence. For instance, within the Asia Pacific area, a provider might establish regions in both Sydney and Tokyo. Similarly, in Europe, regions like London and Oslo could exist. In North America, regions might be in cities such as Boston, Ottawa, Austin, and San Jose. These regions maintain interconnectivity among themselves and with the internet through high-speed optical networks. However, they operate in isolation from each other, ensuring that an outage in one region does not impact the operations of others.

When you deploy your cloud operations, you will be given a choice of which region you want to use. For a global presence and to reduce network delays, the cloud customer can also choose to replicate operations in multiple regions around the world. Understanding regions is also important for many regulatory and compliance issues that require data to be stored in its country of origin.


Figure 7 Microsoft Azure Regions

Availability Zones
You just learned that a region is not an actual data center but defines a geographical area of presence. The actual data centers in each region are referred to as availability zones (AZs). Each region will usually have two or more availability zones for fault tolerance. The AZs are isolated locations within the cloud data center regions that the public cloud service providers originate and operate. Each availability zone is a physically separate data center with its own redundant power and telecommunication connections. If they are in a geologically sensitive area, such as along a fault line or in a coastal area that may be subject to hurricanes or typhoons, they will be located a greater distance apart to minimize the chance of a natural disaster affecting more than one availability zone.

Delivering High Availability Operations
By implementing a well-architected network using best design practices and by selecting a capable cloud service provider, you can achieve high availability operations. There is a shared responsibility between the cloud provider and the consumer to achieve high availability in the consumer's operations. The cloud regions and availability zones must be considered when deploying the consumer's cloud solution to eliminate any single point of failure; load balancing and auto-scaling are used, as well as database and storage replication. The cloud provider must engineer its data centers for redundant power, cooling, and network systems and create an architecture for rapid failover if a data center goes offline for whatever reason. It is the responsibility of you, the cloud customer, to engineer and deploy your applications with the appropriate levels of availability based on your requirements and budgetary constraints.


1.3.4 Applications
A cloud application is software that runs on the internet. For example, you may have a bespoke application that manages payroll payments for all of your company's employees. The software that accomplishes this task may be housed in the cloud. As you may expect, certain cloud applications are mission critical, making high availability a vital aspect. Furthermore, because the demand for these applications can grow over time, it's critical to be able to scale them to improve their performance. For example, when your firm had 500 workers, the resources provided to the payroll application may have been sufficient, but now that your company has grown to 6,000 employees, the application is no longer able to keep up. The application will be able to manage the extra workload once you scale it.

1.3.5 Containers
Containers are comparable to virtual machines in certain respects, but they are not whole operating systems; instead, they are purpose-built computing components. They are far more "lightweight" than virtual machines in that they do not consume as many resources (vCPU, RAM, storage, and so on). In some circumstances, they may be more suitable than virtual machines in terms of high availability and scaling for the following reasons:
● Since high availability typically necessitates the use of multiple instances, there is an additional expense associated with highly available systems. Because full virtual machines demand more physical resources than containers, they are more costly.
● Scaling frequently necessitates the creation of a new instance of a resource. When building a large resource, such as a virtual machine, this procedure can take a long time. Containers can be built significantly faster, making them better suited for scalability.

1.3.6 Clusters
A cluster is a group of instances that do the same task. You might, for example, set up several virtual machines, each of which runs the same web server. Clusters have two main characteristics. First, they ensure that the resource is highly available: if one server in the cluster fails to respond to requests, another server in the cluster can. Second, additional instances may be added to the cluster as demand grows, making the cluster scalable.


Figure 8 Clusters

1.3.7 High availability of network functions
High availability is largely determined by network components. Consider the function of each of the components listed below.

Switches
Switches are devices used to route network traffic between devices connected to the same physical network. To guarantee that network traffic is transmitted to the proper device, switches link an IP address with a MAC address. Switches function at Open Systems Interconnection (OSI) layer 2. The switch sends network traffic only to the port to which the target device is attached. High availability can be configured in modern switches. This is accomplished by having the switch keep track of the activity of the device connected to the network port. If a device fails to reply, the switch can route network traffic to another port where another device is prepared to fulfill the request. It's worth noting that in a cloud context, the cloud vendor oversees configuring this function. In the cloud, the client has no access to any layer 2 functions.

Routers
Routers connect networks that are similar but not identical. This is often accomplished by routing network traffic meant for devices outside of the local network through a switch. Routers are OSI layer 3 devices. When a router becomes unresponsive, this system has a problem. This lack of response has the potential to bring all network traffic between networks to a standstill. One approach to avoid this problem is to set up one or more backup routers, so that if one router fails, switches can reroute network traffic to another router. As a result of this method, network traffic is highly available. Various technologies, such as the Virtual Router Redundancy Protocol (VRRP), Gateway Load Balancing Protocol (GLBP), and Hot Standby Router Protocol (HSRP), can be used to provide high availability.


Load balancers
Consider the scenario described above, in which you have multiple servers with web servers configured to serve your company's web pages. "How do the client web apps know which web server to contact?" you might wonder. A load balancer is the answer. To see how the load balancer accomplishes its goal, consider Figure 9, which depicts the steps a web client would take to connect to a web server. The web client must first discover the web server's IP address (private IPs are used in this example), which is retrieved by a DNS lookup. The web client then uses the relevant IP address to make the request to the web server.

Figure 9 Single Web Server

When a load balancer is utilized, the DNS server is set to report the load balancer's IP address. The load balancer then passes the incoming client request to one of the web servers that have been configured to use the load balancer. Figure 10 illustrates this strategy.


Figure 10 Multiple Web Servers with Load Balancing

There are several distinct forms of load balancing in cloud computing. The methods offered are mostly determined by the cloud provider. The following are the most prevalent methods: Round robin: The load balancer distributes requests evenly among the servers in a predetermined order. Weighted: Each server is assigned a numerical value based on its ability to react. A server with more vCPUs, RAM, and bandwidth, for example, would be given greater weight. More requests are sent to servers with higher weight values by the load balancer. Geographically based: The load balancer routes the client's request to the server that is geographically nearest to the client. Firewalls Because the aim of a firewall is to simply allow or restrict traffic to a network, it might have an influence on high availability. If a network resource is inaccessible, it might be because a firewall rule has prohibited access to that resource or because the firewall is down due to a fault (fail close configuration).

1.3.8 Avoid single points of failure
To achieve high availability, single points of failure must be avoided. They are difficult to avoid because so many factors can affect a resource's availability. While having only one resource is obviously a single point of failure, having only one network path to reach the resource is also a single point of failure. To avoid single points of failure, you should map out how the resource is accessed and the components necessary for the resource to operate. Another example: you may have numerous web servers and multiple network paths to reach them, but the servers rely on a database, of which you currently have only one instance. If the database becomes unavailable, this single point of failure might cause the web server resource to fail as well.

1.3.9 Scalability
Scalability in cloud computing refers to a resource's capacity to adapt to changing needs. This is a crucial feature, and it's frequently cited as one of the most compelling reasons to move on-premises systems to the cloud. Scaling may be accomplished in a variety of ways, including auto-scaling, horizontal scaling, vertical scaling, and cloud bursting.

Auto-scaling
When the scaling process is automated, it is known as auto-scaling. For example, you may have a system in place that automatically starts a second resource to handle extra requests if the existing resource hits a certain threshold, such as a certain number of client connections. Consider the case where you need to start a new web server anytime one of your existing web servers has more than 50 client connections. Assume that you now have five web servers and that, eventually, the client connections taper off. As a result, you will no longer require five web servers. This is a circumstance where elasticity can help. When using a cloud solution with elasticity, those extra web servers may be decommissioned when they are no longer needed. Consider elasticity to include the capacity to scale up to satisfy demand as well as the ability to scale down when demand no longer exists.

Horizontal scaling
Horizontal scaling refers to the process of adding more web servers to fulfill rising demand. With horizontal scaling, more resources are provided on the fly to satisfy demand. This strategy is less disruptive than vertical scaling.

Vertical scaling
Vertical scaling does not assign more resources; instead, the existing resource is beefed up to accommodate the demand. In cloud computing, this often means that the resource is given extra hardware, such as more vCPUs, RAM, or storage capacity. Because the instance may need to be shut down to change the resource configuration, this can be a time-consuming operation.

Cloud bursting
Consider the following scenario: your company has already invested significantly in its own private cloud infrastructure. This means you're providing cloud-based capabilities with on-premises resources. However, there is some concern in your company that the resources available in your private cloud will not be able to meet demand at peak periods.


Your organization, for example, organizes an online conference every year, and the use of your web servers and video hosting software skyrockets. You want to be able to accommodate this demand without having to invest a lot of money in hardware that will sit idle for the rest of the year. The answer is to use cloud bursting.
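A minimal sketch of the auto-scaling rule described earlier (more than 50 client connections per web server triggers a scale-out, and idle servers are removed when demand drops). The scale decision is shown as a calculation only; the actual provisioning calls depend on the provider and are not shown.

import math

MAX_CONNECTIONS_PER_SERVER = 50   # the threshold from the example above
MIN_SERVERS = 1

def desired_server_count(total_connections):
    # Horizontal scaling: how many identical web servers the current load needs.
    return max(MIN_SERVERS, math.ceil(total_connections / MAX_CONNECTIONS_PER_SERVER))

def scaling_action(current_servers, total_connections):
    # Positive result: servers to add (scale out). Negative: servers to remove
    # (scale in, i.e., elasticity). Zero: no change.
    return desired_server_count(total_connections) - current_servers

print(scaling_action(current_servers=5, total_connections=260))  # +1: add a server
print(scaling_action(current_servers=5, total_connections=90))   # -3: remove idle servers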

1.4 Given a scenario, analyze the solution design in support of the business requirements.
1.4.1 Requirement analysis
Any company looking to implement a new system, program, or tool will almost certainly perform a requirement analysis beforehand. This critical procedure ensures that the deployment fulfills all of the user base's and organization's requirements. Requirements analysis can be done in a variety of ways, but it usually involves four basic types of activities:
Gathering requirements: Internal users are polled to identify the organization's requirements during this activity.
Analyzing requirements: The results of the requirements-gathering process are not always clear or straightforward. The data obtained during the requirements collection activity is examined during the requirement analysis phase, and any discrepancies are rectified.
Requirements modeling: In this activity, the data obtained in the preceding two activities is transformed into deployable solutions.
Reviewing: After the systems have been implemented, various teams will conduct an evaluation of the procedures and issues and then discuss any findings. These findings can be utilized to improve the technique in the future. They are also referred to as "learnings."


Software
When doing a requirement analysis, one of the factors to examine is the software you will use in the cloud. This decision will be influenced by, among other things, the sort of cloud environment you choose. The following are a few that you should think about:
● SaaS
● PaaS
● IaaS


Hardware
If you're utilizing a SaaS or PaaS service, you probably don't have much of an option in terms of hardware. When adopting an IaaS solution, however, hardware becomes a significant part of the requirement analysis. When you pick an IaaS solution, most cloud suppliers offer several tiers or degrees of hardware. The more hardware resources the solution requires, the higher its cost. Because cloud providers want to give as many alternatives as possible to their consumers, the available options may be daunting. AWS, for example, has multiple EC2 instance categories (General Purpose, Compute Optimized, Memory Optimized, and so on), each with a variety of instance types. General Purpose, for example, offers 11 different instance types, including Mac, T4g, T3, and T2.

If the software solution you're installing has documentation, you may use it to get a broad notion of what hardware is required to meet the product's requirements. Benchmarking tests, on the other hand, are the best way to figure out which instance type is best for you. This would include putting the program through its paces on several instance types under a range of load conditions in order to discover the ideal instance type. Note that there are several factors to consider when choosing a hardware instance type, including the following:
● Type of physical CPU
● Number of vCPUs
● Amount of RAM
● Type, speed, location, and size of the storage
● Network bandwidth

Integration
When you're doing a requirement analysis for a cloud-based solution, there's a strong chance you'll want it to work with other components in your IT architecture. The idea of integration is to figure out how to make multiple solutions work together as well as possible.

Budgetary
Perhaps one day you will work for an organization that believes "pricing isn't a concern," but for most of us, that day is unlikely to happen. When conducting a requirement analysis, it's critical to look at a variety of options and pick one that fits within the budget allotted for the project. Obviously, researching a variety of options is not always practicable, and you should be prepared to justify greater expenses or suggest alternatives that may not fulfill all of the situation's criteria. Even if you provide a solution that satisfies the situation's demands and the budget, you may be asked to justify the expenses and make recommendations for a more cost-effective solution, or, in other words, to discuss the total cost of ownership (TCO) and return on investment (ROI).


Compliance
Compliance must be factored into the solution if your company is required to adhere to third-party regulations, such as those imposed by the government or other regulatory bodies. Ensure that you are aware of any compliance standards while doing requirement analysis and that they are addressed correctly when building the solution. Either your firm's compliance officer or the regulatory agency will most likely want a report that certifies that the solution complies with the requirements.

Service-level agreement (SLA)
An SLA protects both the cloud vendor and the client by explicitly stating the levels of service the cloud vendor will offer within the stated limits. The SLA often outlines the minimum standards that the client should expect for items such as the following:
● Availability
● Speed
● Responsiveness

In most cases, the SLA spells out what the cloud vendor is accountable for, as well as what the client is liable for. Data ownership, disaster recovery, and specifics about the cloud vendor's hardware are all subjects that may be included in an SLA.

User and business needs
Remember that users are polled during requirements gathering to identify the organization's requirements. Requirement analysis is driven by these requirements.

Security
If you've been following news stories about major corporations' security breaches over the last decade or so, you're probably aware that any cloud solution must include a robust security environment. You should perform some testing before (and even after) implementing a solution to ensure that it is not vulnerable to security breaches.

Network requirements
When it comes to requirement analysis, network needs can become a huge subject. To establish where and how to design the network environment for your solution, you may need to examine where your users live geographically. When doing a requirement analysis for the Cloud+ test, you should concentrate on three network elements: sizing, subnetting, and routing.

Sizing
The bandwidth available in the network affects network sizing, but bandwidth isn't the only factor to consider. If you have numerous instances in a network, they will compete for bandwidth, so you should evaluate how much bandwidth your network requires collectively.


Subnetting
Occasionally, you'll want to make sure that numerous connected instances can interact without having to leave the network where they're running. This is where subnetting comes in; subnetting is the process of dividing a network and determining the maximum number of hosts that may be connected to it.

Routing
The process of transmitting network packets from one network to another is known as routing. Because firewalls may be configured at routing points to allow or prohibit the transport of certain network packets, this is a security concern while undertaking requirement analysis.
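Python's standard ipaddress module can illustrate the subnet-sizing question ("how many hosts fit in this network?"). The prefixes below are arbitrary examples; the usable host count subtracts the network and broadcast addresses.

import ipaddress

subnet = ipaddress.ip_network("10.0.1.0/26")
usable_hosts = subnet.num_addresses - 2        # minus network and broadcast addresses
print(subnet, "->", usable_hosts, "usable host addresses")   # 10.0.1.0/26 -> 62

# The same question for a few common prefix lengths
for prefix in (24, 25, 27, 28):
    net = ipaddress.ip_network(f"192.168.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses - 2} usable hosts")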

1.4.2 Environments
Another important factor to consider while conducting a requirement analysis is which environments you want to construct. In your solution, each environment will play a distinct role. Not all environments will be used in every scenario, and each environment has advantages and disadvantages that must be weighed against the cost (typically budgetary cost, but there are other costs, such as manpower and maintenance costs).

Development
In a development environment, you create new software or alter software that your company has already created. While the development environment may be used to prepare for changes in the production environment, its primary function is to allow software developers to work in a safe environment where no live work will be affected.

Quality assurance (QA)
Before transferring modifications to a production environment, testing is done in a QA environment. The QA environment should closely resemble the existing production environment. New features and configurations are tested in the QA environment to verify that they suit the demands of the users and the business.

Staging
To imitate the production environment, some companies use a staging environment. This staging environment may be used for a variety of purposes, including predicting possible difficulties in the production environment and serving as a stand-in if the production environment fails or becomes compromised. It's also utilized when using a blue-green strategy (see the next section for more details).

Blue-green
When you use a blue-green deployment, you have two environments that are identical (production and staging). Within your company, the production environment is active and in use. The staging environment is employed in the final stages of delivering a new version of the solution. This means that modifications made in your QA environment are replicated in the staging environment, where certain final tests are also performed. If all tests have passed successfully, the staging environment becomes the production environment, and the former production environment is now viewed as the staging environment. If the solution continues to work as expected, adjustments are made to the original staging environment before being applied to the new staging environment. As a result, the two are once again identical. Because one environment is traditionally designated blue and the other is traditionally labeled green, this is known as a blue-green deployment. At any given time, the production or staging environment can be either blue or green. Smoother updates, less downtime, and the flexibility to swiftly roll back a deployment to a previously operating environment are all advantages of this strategy. The increased expenditure and effort required to manage two environments are the drawbacks.

Production
Your organization's live environment is referred to as the production environment.

Disaster recovery (DR)
A DR environment is employed if the production environment is compromised. While a staging area can occasionally be utilized for disaster recovery, it is not an ideal DR solution, since the staging environment will not always be identical to the production environment following a fresh deployment. A disaster recovery (DR) environment is a near-identical clone of the production environment that serves a single purpose: to quickly restore a damaged environment. The DR environment should typically be placed in a different geographic area than the production environment, so that a physical disaster does not affect both.

1.4.3 Testing techniques
Another important aspect of requirement analysis is verifying that the solution created meets the organization's requirements. This necessitates the execution of several distinct sorts of tests.

Vulnerability testing
Any approach has the possibility of containing security flaws. The purpose of vulnerability testing is to find these security flaws/holes and fix them before the software/solution goes live. This test can be carried out manually or with the use of industry-standard tools like those supplied by the Open Web Application Security Project (OWASP) and the Web Application Security Consortium (WASC).


Penetration testing
Penetration testing, alternatively referred to as pen testing or ethical hacking, involves the examination of a system or network to identify security weaknesses. The primary objective is to detect vulnerabilities that potential attackers could leverage before the system or network is deployed. To assist in this endeavor, pen testing software such as the following can be beneficial:
● nmap: A port scanning tool
● Wireshark: A network sniffer that can capture and display network traffic
● Metasploit: A security testing automation framework

Performance testing
Performance testing is a method of determining how responsive a solution is when a specific amount of work (referred to as a workload) is applied to it. This testing should be able to identify weak spots or bottlenecks in a solution, as well as test all main components (hardware, software, network, and so on).

Regression testing
When making changes to software, it's critical to undertake regression testing to verify that the changes do not break functionality that previously worked. These tests are commonly carried out in development and quality assurance environments.

Functional testing
Consider getting a piece of software that allows you to modify documents. This program should be able to accomplish a variety of things, such as load a document, save a document, format a document, and so on. You can undertake functional testing before deploying this program (or a new version of it) to confirm that all of its functions operate properly. Functional testing is carried out in a quality assurance environment. Only one aspect of the program, independent of any other feature, is examined during a single function test. This is also known as black-box testing, since the tester has no knowledge of how the software works internally and is just concerned with performing the test and determining whether it is effective.

Usability testing
A large number of software packages are operated directly by users. This means a software manufacturer must ensure that customers can utilize the program in the way it was designed. Usability testing, in which users are asked to execute tasks using the program, is one technique to check this. After that, the users submit comments on their experiences. This information is used to identify whether the program needs to be tweaked or whether it is functioning properly.


Practice Questions

QUESTION NO: 1
Which service model in a shared responsibility model places responsibility for middleware on the customer?
A. SaaS
B. None of these answers are correct
C. IaaS
D. PaaS

QUESTION NO: 2
How can an organization reduce the risks associated with multitenancy in the public cloud?
A. Reserve an entire server.
B. Create a single root account.
C. Request a secure cloud account.
D. None of these answers are correct

QUESTION NO: 3
On a community cloud, _____ allows for the sharing of resources or applications.
A. multitenancy
B. IaaS
C. PaaS
D. timesharing


QUESTION NO: 4
A ___ cloud is one that uses both a public and a private cloud at the same time.
A. joint
B. hybrid
C. ad hoc
D. duplex

QUESTION NO: 5
Which of the following cloud service models does DocuSign represent?
A. SaaS
B. PaaS
C. IaaS
D. DaaS

QUESTION NO: 6
Which service model in a shared responsibility model places responsibility for applications on the customer?
A. PaaS
B. SaaS
C. None of these answers are correct
D. IaaS


QUESTION NO: 7
Which of the following best describes the situation in which multiple organizations share hardware resources on a public/private cloud platform?
A. Multiuse
B. Multitenancy
C. Multiuser
D. Multishare

QUESTION NO: 8
Google App Engine is which of the following service models?
A. SaaS
B. IaaS
C. DaaS
D. PaaS

QUESTION NO: 9
AWS EC2 is which of the following service models?
A. IaaS
B. DaaS
C. PaaS
D. SaaS


QUESTION NO: 10
Which of the following is the most likely description of a private cloud infrastructure?
A. Google Cloud
B. On-premises data center
C. Azure
D. AWS

QUESTION NO: 11
A _____ is essentially a hybrid cloud, but instead of combining a public and a private cloud, this hybrid cloud type combines multiple public cloud environments.
A. multicloud
B. mergecloud
C. combocloud
D. cloud with a cloud

QUESTION NO: 12
____ analysis, in conjunction with system load statistics, is used to determine whether additional (or even fewer) resources are required for optimal resource performance.
A. System
B. Runtime
C. Usage
D. Trend analysis


QUESTION NO: 13
Which of the following is not normally taken into account when performing capacity planning?
A. Business need analysis
B. Location of customers
C. Budgetary considerations
D. Software needs

QUESTION NO: 14
The speed of your network has an impact on _____, which is an important factor to consider when performing capacity planning.
A. Network performance
B. RAM
C. vCPUs
D. GPUs

QUESTION NO: 15
When performing capacity planning, ____ is an important hardware component to consider. It is used to store dynamic data for running applications.
A. Network performance
B. GPUs
C. vCPUs
D. RAM


QUESTION NO: 16
The ____ are an important hardware component to consider when performing capacity planning because they have a large impact on how images are created on the system.
A. vCPUs
B. RAM
C. Network performance
D. GPUs

QUESTION NO: 17
What licensing models are associated with CPU utilization?
A. Socket-based
B. Volume-based
C. Per-user
D. Core-based

QUESTION NO: 18
A(n) ____ is statistical data that shows the system load of a resource during "normal" usage.
A. baseline
B. anomaly
C. pattern
D. performance capacity planning


QUESTION NO: 19
The ____ provide(s) the processing power, which is an important hardware component to consider when performing capacity planning.
A. RAM
B. network performance
C. vCPUs
D. GPUs

QUESTION NO: 20
Which licensing model is based on the organization's number of SaaS accounts?
A. Volume-based
B. Core-based
C. Per-user
D. Socket-based

QUESTION NO: 21
A(n) _____ is when statistical information deviates from the standard or norm.
A. performance capacity planning
B. baseline
C. anomaly
D. pattern

QUESTION NO: 22
When more resources are used, which licensing models typically result in lower overall costs?
A. Socket-based
B. Per-user
C. Volume-based
D. Core-based


QUESTION NO: 23
____ is a method of keeping virtual machines "together," often on the same network or even on the same host hypervisor.
A. oversubscription
B. domain separation
C. anti-affinity
D. affinity

QUESTION NO: 24
To maximize revenue for that compute node, cloud providers will allocate more hardware resources than are physically possible to provide with _________.
A. oversubscription
B. domain separation
C. affinity
D. anti-affinity

QUESTION NO: 25
It is known as ____ when additional web servers are spun up to handle increased demand.
A. vertical scaling
B. cloud bursting
C. horizontal scaling
D. toned scaling


Answers to Practice Questions

1. Answer: C Explanation: The IaaS service model, in a shared responsibility model, places the responsibility for middleware on the customer.

2. Answer: A Explanation: There are methods by which public cloud providers attempt to mitigate multitenancy issues. Many public cloud providers, for example, allow an organization to reserve an entire server for their exclusive use. The remaining answers have nothing to do with multitenancy.

3. Answer: A Explanation: Multitenancy on a community cloud allows for the sharing of resources or applications. A community cloud allows these organizations to collaborate more effectively while still retaining some control over their users, resources, and services. IaaS and PaaS are service models that are not relevant to this topic. Timesharing is not a feature of the community cloud.

4. Answer: B Explanation: A hybrid cloud is one that uses both a public and a private cloud at the same time. The remaining terms do not refer to legitimate cloud types.

5. Answer: A Explanation: The customer uses a program that is hosted remotely in the cloud with SaaS. This is how DocuSign works. When software runs on a PaaS, the solution is given an operating system and, if desired, additional resources such as a database server. IaaS necessitates some level of access to the underlying hardware. DaaS is an abbreviation for Data as a Service and does not apply here.

6. Answer: A,D Explanation: The IaaS and PaaS service models in a shared responsibility model place the responsibility for applications on the customer.


7. Answer: B Explanation: Because hardware resources are shared, there is a good chance that services from other organizations will "live" in the same environment. This is known as multitenancy. The other terms are inappropriate in this context.

8. Answer: D Explanation: When software runs on a PaaS, the solution is given an operating system and, if desired, additional resources such as a database server. This is a feature of Google App Engine. IaaS necessitates some level of access to the underlying hardware. The customer uses a program that is hosted remotely in the cloud with SaaS. DaaS is an abbreviation for Data as a Service and does not apply here.

9. Answer: A Explanation: The service requires access to the underlying hardware with IaaS, which is required for AWS EC2 (Elastic Cloud Compute) instances. When software runs on a PaaS, the solution is given an operating system and, if desired, additional resources such as a database server. The customer uses a program that is hosted remotely in the cloud with SaaS. DaaS is an abbreviation for Data as a Service and does not apply here.

10. Answer: B Explanation: Private cloud infrastructures are typically built in a data center on-premises. The remaining options are public cloud infrastructure platforms.

11. Answer: A Explanation: A multicloud is essentially a hybrid cloud, but instead of combining a public and private cloud, it combines multiple public cloud environments. A Virtual Private Cloud is a cloud within a cloud (VPC). The other terms do not refer to valid cloud types.

12. Answer: D Explanation: Trend analysis, in conjunction with system load statistics, is used to determine whether additional (or even fewer) resources are required for optimal resource performance. The other options are invalid terms.


13. Answer: B Explanation: When performing capacity planning, software requirements, budgetary considerations, and business needs analysis should all be taken into account. Normally, the location of customers has no bearing on capacity planning.

14. Answer: A Explanation: The speed of your network affects network performance. RAM is used to store data for running applications. GPUs have a significant impact on how images are generated on the system. The processing power is provided by vCPUs.

15. Answer: D Explanation: RAM is used to store dynamic data for running applications. GPUs have a significant impact on how images and large computations are handled on the system. A VM's processing power is provided by vCPUs. The speed of your network affects network performance.

16. Answer: D Explanation: GPUs have a significant influence on how images are processed on the system. The processing power is provided by vCPUs. RAM is used to store data for running applications. The speed of your network affects network performance.

17. Answer: A,D Explanation: A socket refers to a CPU socket for licensing purposes, and a core refers to a CPU core. A per-user license charges the client a fee for each user who accesses the resource. A volume-based license offers the client a lower price in exchange for agreeing to use a greater number of resources.

18. Answer: A Explanation: A baseline is statistical data that shows the system load of a resource during "normal" usage. An anomaly occurs when statistical data deviates from the standard or norm. A pattern can assist you in making wise and flexible capacity planning decisions. The goal of performance capacity planning is to determine the number of resources available based on performance.


19. Answer: C Explanation: The processing power is provided by vCPUs. GPUs have a significant impact on how images are generated on the system. RAM is used to store data for running applications. The speed of your network affects network performance.

20. Answer: C Explanation: A per-user license charges the client a fee for each user who accesses the resource; this is commonly used with SaaS. A volume-based license provides the client with a discounted price in exchange for agreeing to use a greater number of resources. A socket refers to a CPU socket, and a core refers to a CPU core for licensing purposes.

21. Answer: C Explanation: An anomaly occurs when statistical data deviates from the standard or norm. A baseline is statistical data that shows the system load of a resource during "normal" usage. A pattern can assist you in making wise and flexible capacity planning decisions. The goal of performance capacity planning is to determine the number of resources available based on performance.

22. Answer: C Explanation: A volume-based license provides the client with a discounted price in exchange for agreeing to use a greater number of resources. A socket refers to a CPU socket for licensing purposes, and a core refers to a CPU core. A per-user license charges the client a fee for each user who accesses the resource.

23. Answer: D Explanation: Consider affinity for virtual machines as a way to keep virtual machines "together," often on the same network or even on the same host hypervisor. The inverse of affinity is anti-affinity. This phrase translates as "keep these things separate." When it comes to virtual machines, this can mean "keep these virtual machines in different server rooms" or "keep these virtual machines in different geographic regions." Oversubscription happens when cloud providers allocate more hardware resources than are physically possible to provide in order to maximize revenue per compute node. Virtual machines are not subject to domain separation.


24. Answer: A Explanation: Oversubscription causes cloud providers to allocate more hardware resources than are physically possible to provide in order to maximize revenue for that compute node. Consider affinity for virtual machines as a way to keep virtual machines "together," often on the same network or even on the same host hypervisor. The inverse of affinity is anti-affinity. This phrase translates as "keep these things separate." When it comes to virtual machines, this can mean "keep these virtual machines in different server rooms" or "keep these virtual machines in different geographic regions." Domain separation is irrelevant to the topic of this question.

25. Answer: C Explanation: Horizontal scaling or scale out refers to the process of spinning up additional web servers to handle increased demand. Vertical scaling does not allocate additional resources; rather, the existing resource is "beefed up" to handle the demand, also known as scaling up. Cloud bursting allows you to configure scaling of private cloud resources in order to use the resources of a cloud provider. Toned scaling is not an acceptable scaling term.


DOMAIN - 2 Security

2.1 Given a scenario, configure identity and access management.
2.1.1 Identification and authorization
When a user offers some form of value, such as a username, to indicate who he or she is, this is known as identification. Identification isn't enough to get access to the system; you'll also need to employ the authentication method. When a person validates their identity using another piece of information, such as a password or an access token, this is known as authentication. After a user has been identified and authenticated, they are given access to the system's resources. Authorization is the process by which an authenticated user is either granted or refused access to resources based on a set of rules.

Privileged access management
A user is awarded privileged access when they are given permissions that allow them to have elevated access to a resource. In most circumstances, this access is not offered to a typical user account, but rather to persons who require additional administrative authority over a resource. Privileged access management (also known as PAM) is the process of administering privileged access. The idea is to follow the security concept of least privilege. This entails granting just the level of access required for an individual to do the tasks for which the user is accountable.

Logical access management
To better comprehend logical access management (also known as logical access control), consider the notion of conventional physical access management. Physical access management includes things like requiring a key to enter a building, a room, or a storage container. A person would require a key for both the storage container and the room to get access to a storage container that is located in a room. To access the room, the individual must, of course, have a key to the building. As the building's manager, you control access to the items by giving or taking away keys.

Account life-cycle management
The idea behind account life-cycle management is that mechanisms should be in place to handle each of the user account's key stages. These are the stages:
Create: For an account to be established, this step must comprise particular procedures. User information must be collected, account creation must be allowed through a secure mechanism, and account rights must be clearly stated.


Updates: A user account may need to be modified in specific instances. If a user is promoted to a new position within the business, procedures must be in place to either grant additional rights or remove rights that are no longer needed.
Deactivation and deletion: When a person no longer requires an account (for example, when the individual leaves the company), a method for deactivating or deleting the account must be in place. Many businesses prefer to deactivate accounts rather than delete them, both because there may be a need to review the account at a later time and because a new user account may require the same account type as the one being deactivated, in which case the previous account can serve as a template for the new one.

Provision and deprovision accounts
Creating or activating an account is referred to as provisioning. Deprovisioning an account means deactivating or deleting it.

Access controls
User accounts gain access to resources in your cloud architecture through access controls. You should be aware of the different types of access control available, including role-based, discretionary, nondiscretionary, mandatory, and rule-based. The remainder of this section delves into these types of access control.

Role-based
Users are assigned a role under role-based access control (often referred to as RBAC), which determines what they may do. A role is usually modeled after an organizational role. For example, you might create a role called "database administrator" that grants an account the ability to execute administrative operations on the organization's databases.

Discretionary
With discretionary access control (DAC), the task of safeguarding a resource is delegated to the user rather than the administrator. It is tied to the identity of the user account and the groups to which the user account belongs. A user account owns a resource (usually referred to as an object when discussing DAC), and that user account decides who can access the object. The way file permissions function on Linux systems is a good illustration of DAC. Each file is owned by a user account and has three permission sets: the file's user owner, the file's group owner, and all other user accounts. The file's owner can change any of these three sets of permissions to allow or prevent access to the file.


Non-discretionary
Nondiscretionary access control entails establishing rules that govern which user accounts have access to which resources. These rules can be linked not just to the user account, but also to additional details such as when and from where the user tries to gain access. To put it another way, a nondiscretionary rule might say that a user can access a database only between the hours of 9 a.m. and 5 p.m., Monday through Friday, and only when signed in from a specified network.

Mandatory
One of the strictest forms of nondiscretionary access control is mandatory access control (MAC). A policy is used in MAC systems to determine which accounts have access to which resources. This policy establishes the identities, resources, and rules that determine which identities are permitted to access which resources.

2.1.2 Directory services
Directory services, in their most basic form, are meant to hold information about an organization or system. The Domain Name System (DNS), which holds host name to IP address mappings, is a basic yet effective example of a directory service. When a user needs to reach a system by its IP address but knows only the host name, a DNS server can be asked to look up the relevant information in its directory. Although DNS has nothing to do with identity and access management, it serves as a simple example of a directory service and highlights some of its advantages. Other features of a directory service usually include the following:

● A hierarchical naming model that makes it possible to segregate distinct groups of data.
● Search capabilities that are robust enough to allow objects in the directory to be found using various forms of searches.
● Distributed data, so the data does not need to be kept on a single server.
● The ability to replicate data across numerous servers to avoid a single point of failure.
● Storage optimized for reading rather than writing, because data in a directory service is read far more frequently than it is written.
● An extensible schema, a component of the directory service that defines the types of data that may be stored in it.

Lightweight directory access protocol (LDAP)
LDAP is a protocol that directory services use to store enterprise-wide data such as user account information; LDAP-based directory services are commonly found on Linux-based systems. LDAP is one of the most widely used directory service protocols for identity and access management. OpenLDAP and Red Hat Directory Server are two typical LDAP directory services for Linux systems.


Microsoft's Active Directory (AD) software uses LDAP as a fundamental component. AD uses the protocol to enable controlled access to the data it stores.
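As a rough illustration of how a client queries an LDAP directory, the following sketch uses the standard ldapsearch utility; the server name, base DN, and uid value are hypothetical placeholders rather than values from this guide.

# Simple (-x) anonymous search of a hypothetical directory for one user's entry
ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=jdoe)" cn mail

The -b option sets the search base within the directory's hierarchical naming model, and the "(uid=jdoe)" filter demonstrates the search capability described above.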

2.1.3 Federation
You may have visited a company's website and been given several options for logging into the system. The login interface for developer.cisco.com, for example, is shown in Figure.

Figure 13 Federation Login Example

You may log in to developer.cisco.com using a variety of identities, including GitHub, Google, and Facebook, as shown in the figure. Cisco does not own or manage the organizations behind these login methods. These organizations, however, have established a level of trust as well as a way for other organizations to verify a user. GitHub, Google, and Facebook have taken on the role of identity providers. They offer a solution that allows businesses like Cisco to identify and verify users. This solution gives Cisco's clients more alternatives and eliminates the need for users to register a new user account and password. Federated identity management is associated with a number of regularly used protocols, including:


● SAML 1.1
● SAML 2
● OAuth 2
● OpenID Connect
● WS-Trust
● WS-Federation

2.1.4 Certificate management
Consider the following scenario: you wish to log on to your bank's website and make a money transfer. You launch a web browser, type in your bank's URL (or use a browser bookmark), and then log in. However, how can you know it's truly your bank? It's conceivable that your browser has been redirected to a site other than your bank's. This redirection might have been initiated by a person or group attempting to steal your login credentials in order to gain access to your bank account. You may have even witnessed such an attempt. The figure depicts the message that your web browser displays if it appears that the server you're attempting to connect to is not the right one.

Figure 14 Invalid Certificate Warning

This topic raises another question: how does your web browser know whether you're interacting with a possibly malicious server? The answer lies in the certificate. Your browser knows that it must check the authenticity of the web server's certificate when you communicate using the HTTPS protocol (notice that this does not apply to HTTP; the S must be in the URL). It accomplishes this by verifying that the certificate was issued and signed by a certificate authority (CA), a reputable third-party entity that vouches that the certificate belongs to the proper web server.


You may find a list of the CAs that your browser utilizes in the settings of your browser, as shown in Figure.
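If you want to examine a server's certificate outside of a browser, the following is a minimal sketch using the openssl command-line client; example.com is a placeholder host, and your output will vary.

# Fetch the server's certificate over TLS and print its subject, issuer, and validity dates
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

If the issuer is not a CA your system trusts, or the validity dates have passed, the browser warning shown in the figure above is the kind of result you can expect.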

2.1.5 Multifactor authentication (MFA)

MFA is a method of authenticating a user that requires the user's identity to be verified in several ways. A standard authentication approach, for example, would be for the user to supply a username and password. With MFA, the user would additionally need to produce another piece of evidence proving their identity. In MFA, a factor is an additional item that must be used to authenticate the user. A factor can fall into one of three categories:
Something the user possesses: This might be a tangible device with a unique token, such as a bank card, a physical key, or a USB stick.
Something only the user should know: This might be a different password, a PIN, or another piece of information.
Something the user is: This can include a fingerprint, voice print, iris (eye) scan, palm scan, and other biometrics.

2.1.6 Single sign-on (SSO)
SSO is a mechanism in which, after a user has signed in to an environment, he or she is not required to provide account credentials again to access additional components until a timeout period has passed. For example, you may log in to your company's cloud infrastructure and then use various cloud components (database services, virtual machines, and so on) without having to provide your account details each time. After a period of time (for example, 12 hours), further access to the environment would require passing through the authentication process again. SSO's main benefit is that it allows users to operate in an environment without having to re-enter their login credentials across numerous internal sites or services.


The disadvantages of SSO include the difficulty of administering the environment and the reliance on users to physically safeguard the device used to enter the environment. For example, if a user signs in to an SSO-enabled environment and subsequently loses control of the device (for example, if the user's mobile device is stolen), an attacker may gain unauthorized access to the environment for a period of time.

Security assertion markup language (SAML)
SAML is a technology that enables SSO. It is a standard that allows authentication and authorization data to be exchanged between systems. The Cloud+ certification test does not cover all aspects of SAML, but you should be aware of its importance in enabling SSO within an infrastructure.

2.1.7 Public key infrastructure (PKI)
Digital certificates, as discussed previously in this chapter in the "Certificate Management" section, are used to authenticate the identity of servers (primarily web servers, although other things, such as VPN connections, can use certificates). In the context of HTTP services and operations, these certificates are also known as SSL/TLS certificates, and they are based on the public key infrastructure (PKI). PKI is a standard that governs the creation, revocation, management, storage, usage, and distribution of digital certificates. It makes use of a pair of cryptographic keys (public and private), which not only enable the generation and verification of certificates but also enable the encryption of data sent between the client and the server.

Figure 16 Public key infrastructure (PKI)
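As a minimal sketch of the key pair that PKI is built on, the commands below generate a private key and a self-signed certificate with OpenSSL; in a real PKI you would submit a certificate signing request to a CA instead of self-signing, and the file names and subject are placeholders.

# Generate a 2048-bit RSA private key (this file must be kept secret)
openssl genrsa -out server.key 2048
# Create a self-signed certificate from that key, valid for 365 days
openssl req -new -x509 -key server.key -out server.crt -days 365 -subj "/CN=www.example.com"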

2.1.8 Secret management
A secret has value only as long as it stays secret, and the same is true of PKI's private key (although no one has to die to keep this secret!).


Secret management simply entails having a secure method of safeguarding your secret keys. This may mean storing your private keys in password-protected files or directories, but in a high-security cloud environment it will very likely mean using a program that stores this information. Most cloud providers offer secure storage of private keys and other secrets: AWS offers AWS Secrets Manager, Google offers Cloud Key Management, and Azure provides Key Vault. Third-party products such as HashiCorp Vault (including its PKI secrets engine) also allow you to store and manage your private keys.
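As a sketch of how such a secret store is used, the AWS CLI commands below store and later retrieve a secret with AWS Secrets Manager; the secret name and value are hypothetical examples.

# Store a secret value under a named secret
aws secretsmanager create-secret --name prod/db/password --secret-string 'S3cr3tValue'
# Retrieve the secret later, for example from an application's startup script
aws secretsmanager get-secret-value --secret-id prod/db/password --query SecretString --output text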

2.2 Given a scenario, secure a network in a cloud environment.
2.2.1 Network segmentation
The goal of network segmentation is to break down a large network into smaller ones. The main objectives of network segmentation are to boost performance and security. Segmentation is crucial in a public cloud environment, since your company will frequently share the underlying network with other businesses, and the cloud provider wants to be sure that no one else can see your network activity.

Virtual LAN (VLAN)/Virtual extensible LAN (VXLAN)/Generic network virtualization encapsulation (GENEVE)
Three common network segmentation technologies are VLAN, VxLAN, and GENEVE. Each of these topics can be fairly large and complicated. The most essential things to remember for the CompTIA Cloud+ test are that they are used for network segmentation and that you grasp some of the distinctions among them.
VLAN is a long-standing technology that dates back to the 1990s. It segments communication on a network by applying tags to network frames. Networking devices use the tags to decide which endpoints broadcast messages are sent to. VLANs apply the tag to the layer 2 frame. There is a limit of roughly 4,000 VLANs per network.
VxLAN was created to address concerns with scalability in large-scale deployments (particularly cloud environments and ISPs). VxLANs are similar to VLANs, but there are several differences, such as where the tag is applied (VxLAN uses a bigger field in the frame), resulting in more network segments being available (up to 16 million).
Although VxLAN technology provides greater flexibility and scalability than VLAN technology, one drawback is that it isn't the sole VLAN alternative. Other alternatives, such as Network Virtualization using Generic Routing Encapsulation (NVGRE) and Stateless Transport Tunneling (STT), have been offered by various network technology manufacturers. These alternative technologies have features comparable to VxLAN, but they are incompatible with one another. GENEVE is a more recent network segmentation solution that is meant to interoperate with VxLAN, NVGRE, and STT.


Micro-segmentation
Microsegmentation is a security feature that allows administrators to logically separate virtual and physical resources into groups, with each group having its own set of security rules. In terms of firewalls, microsegmentation is the process of creating zones inside the same VLAN, for example. Using a firewall zone, you can establish firewall rule sets that apply to a specified logical area of the network. Zones give the firewall administrator greater flexibility, but they may also make troubleshooting more difficult if you don't know how the zones are set up or which resources belong to which zones.

Tiering
Tiering is a network segmentation approach that takes into account the function of the network's resources. One frequent tiering strategy, for example, is to divide resources into three categories: web, application, and database.

2.2.2 Protocols
A number of protocols can help you protect your network or should be considered when adopting a more secure practice or approach. To establish a more secure network, you can utilize encryption techniques such as IPsec, TLS, or HTTPS. However, because technologies like DNS and NTP are not always secure, you should consider adopting alternatives or adding extra security measures. These protocols are the topic of this section.

Domain name service (DNS)
DNS over HTTPS (DoH)/DNS over TLS (DoT)
In most cases, DNS data is delivered across the network in unencrypted form. This approach poses a security concern, since the user's privacy may be jeopardized. It is also more susceptible to a man-in-the-middle attack, in which a rogue DNS server changes the DNS results. DNS over HTTPS (DoH) and DNS over TLS (DoT) are two more secure options. Both approaches encrypt the DNS traffic on the network.

DNS security (DNSSEC)
Another security concern can be found in the DNS system itself. When a query is made, it is possible for a phony DNS server to return inaccurate results. This is known as DNS cache poisoning or DNS spoofing. The danger here is that the domain name to IP address translation for a sensitive system (such as a bank's website) might redirect users to a rogue server that steals usernames and passwords. It is possible to reduce the risk of DNS cache poisoning by using transaction signatures (TSIGs). TSIGs use cryptographic signatures to confirm that DNS data is coming from the proper source. Both zone transfers and DNS queries can benefit from this approach. Domain Name System Security Extensions (DNSSEC) is the most common way to implement this kind of signing for DNS.
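To see DNSSEC from the client side, you can ask a validating resolver to include the DNSSEC records with its answer; this is a minimal sketch using the dig utility, with example.com as a placeholder domain.

# Request an A record along with its DNSSEC signature records (RRSIG)
dig +dnssec example.com A
# In the response header, the "ad" (authenticated data) flag indicates the resolver validated the answer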


Network time protocol (NTP)
Network time security (NTS)
Accurate clocks on your resources are vital for many operations, such as correlating log entries and validating certificates. NTP is an excellent way to provide a reliable clock, but it wasn't designed to be highly secure, and there are concerns that a rogue NTP server could hand out incorrect time to your NTP client resources. NTS adds a layer on top of NTP to make it more secure. One of the most important features of NTS is its key exchange mechanism, which ensures that NTP clients connect to the right NTP server.

Encryption
IPSec
IP Security (IPsec) is a protocol that allows for safe data transfer between network nodes. IPsec is frequently used to establish a secure VPN link between two nodes in separate physical networks. It creates a connection between the two nodes and transfers all data in encrypted form to ensure data security. IPsec can be used to create VPNs from a single node to a remote network as well as network-to-network VPNs. IPsec is implemented at Layer 3 (the network layer) of the OSI model.

Transport layer security (TLS)
Transport Layer Security (TLS) is a cryptographic mechanism for securing data transmission and system authentication. TLS was created to replace SSL (Secure Sockets Layer), and it is still commonly referred to as SSL. TLS is used in connection with a variety of protocols, including VoIP, email, and instant messaging. It's also often used to provide a more secure communication protocol than HTTP between a web client and a web server (the HTTPS protocol). TLS has two main purposes: it prevents eavesdropping and tampering. After a connection is established with an asymmetric cipher, data delivered over TLS is encrypted with a symmetric cipher.


Hypertext transfer protocol secure (HTTPS)
HTTPS is the HTTP protocol carried over a TLS-encrypted connection, providing confidentiality and server authentication for web traffic (see the TLS discussion above).

Tunneling
The technique of safely transmitting data over a network via an encrypted connection is known as tunneling. It's frequently used to set up VPN connections across multiple networks, such as the Internet. This section covers Secure Shell (SSH), Layer 2 Tunneling Protocol (L2TP), Point-to-Point Tunneling Protocol (PPTP), and Generic Routing Encapsulation (GRE), which are all often used for tunneling.

Secure Shell (SSH)
The Secure Shell is a network protocol for sending data securely over a network. SSH performs a variety of tasks, including:
● The ability to log in to a remote system using a command-line interface.
● The ability to run a command-line application on a remote machine.
● The ability to transfer files between systems using a command-line program or an FTP-like program.
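The commands below are a minimal sketch of those three SSH tasks; the user name, host, and file names are placeholders.

# Log in to a remote system with a command-line interface
ssh admin@server1.example.com
# Run a single command on the remote machine without an interactive session
ssh admin@server1.example.com 'uptime'
# Copy a file to the remote system with scp, or use the interactive, FTP-like sftp client
scp report.txt admin@server1.example.com:/tmp/
sftp admin@server1.example.com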

Layer 2 tunneling protocol (L2TP)/Point-to-point tunneling protocol (PPTP)
The Layer 2 Tunneling Protocol performs tunneling at Layer 2 of the OSI model, also known as the data link layer. L2TP deployments typically provide strong encryption using 256-bit encryption keys. L2TP is considered the higher-level protocol of the two, whereas Point-to-Point Tunneling Protocol is a lower-level protocol that is quicker and easier to use. PPTP, however, encrypts using 128-bit encryption keys, making it less safe than L2TP.

Generic routing encapsulation (GRE)
Cisco created generic routing encapsulation, which is frequently used to construct VPN connections across a network. GRE works by encapsulating a data packet inside a GRE packet. GRE itself does not encrypt the payload, so it is commonly paired with IPsec when an encrypted VPN is required. When the GRE packet arrives at its destination, the destination system de-encapsulates it and handles the original packet.

2.2.3 Network services
Firewalls
Stateful: A stateful firewall keeps track of the network connections that are permitted to pass through it and allows responses to pass back through. Let's say your company has a firewall that prevents most traffic from entering the network, but you have an application that connects to a web server on the Internet. The web server would normally be blocked by the firewall if it attempted to send network packets directly into your organization's network. If the firewall is stateful, however, packets sent by the web server in response to the internal application's requests will be allowed. To put it another way, it creates a pinhole connection.


Stateless: A stateless firewall does not track connections or communications made from within the company. For traffic to enter the network, the ports it uses must be explicitly unblocked in the stateless firewall's rules.
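As a minimal sketch of the stateful idea on a Linux host using iptables (the rule set is simplified for illustration and is not a complete firewall policy):

# Allow return traffic that belongs to connections initiated from inside
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Drop any other unsolicited inbound packets
iptables -A INPUT -j DROP
# Permit outbound connections initiated by internal applications
iptables -A OUTPUT -j ACCEPT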

Web application firewall (WAF)
Most firewall software applications are meant to safeguard a network or an operating system as a whole. Although there are times when you'll want that type of firewall in your cloud environment, there is a more specialized situation that calls for a different type of firewall: a web application firewall (WAF). A WAF is a security appliance that protects HTTP/HTTPS-based OSI layer 7 applications. This is significant in cloud environments because, with a cloud architecture, applications may run independently of the operating system. A WAF filters and monitors inbound connections to applications by analyzing HTTP traffic. It protects your apps from threats such as SQL injection and cross-site scripting (XSS).

Application delivery controller (ADC)
An application delivery controller is software that performs a variety of tasks:
● Acting as a web accelerator, which lessens the burden on web servers and improves HTTP response time.
● Serving as a basic web server load balancer.
● Providing restricted access to web servers, which is a must because web servers are typically positioned in the DMZ (between the inner and outer firewall).

Intrusion protection system (IPS)/Intrusion detection system (IDS)
An IDS monitors network or host activity and reports suspected intrusions; an IPS can additionally block the activity it detects. Host-based versions of both are covered later, in the section on host-based IDS/IPS.

Data loss prevention (DLP)
Consider how some retailers try to prevent inventory theft by using security control tags. A tag is attached to an item and is only deactivated when the item is sold. An alarm sounds if the item isn't sold and someone tries to take it from the store. DLP software is based on a similar idea. This software continually analyzes network traffic and attempts to detect sensitive data leaving the network. If the DLP detects such data, it should prevent it from leaving the network. To put it another way, data loss prevention protects against data exfiltration.

Network access control (NAC)
Consider a scenario in which your company has deployed a number of software tools to protect its network and resources. This includes antivirus software, threat detection software, device management software, data loss prevention software, and so on. Although these tools all contribute to the overall goal of endpoint security, each is designed to accomplish its purpose separately.


Consider network access control as a technique to bring these disparate applications together under a single administrative umbrella. NAC uses policies to regulate numerous parts of a network in order to increase network security while also making network security more visible and transparent to the NAC administrator.

Packet brokers
You may have used a broker in the real world to help you with a complicated procedure. If you wanted to get a loan to buy a house, for example, you might go to a private lender, fill out a loan application, and supply all of your financial information, among other things. Then, to ensure that you get the best offer, you would repeat the process with other lenders. A broker is an agent who acts as a single point of contact for you rather than your having to deal with several parties separately. Another example is a stock market broker who assists you in purchasing and selling stocks (and hopefully provides sound advice along the way). A network packet broker plays a similar role for monitoring and security tools: it sits between the network (taps or SPAN ports) and those tools, aggregating and filtering traffic so that each tool receives only the packets it needs.

2.2.4 Network flows
As you can imagine, your company's network can quickly become very complicated. When you have numerous security devices, VPNs, and other security features, understanding the flow of your network traffic becomes crucial to ensuring your security policies are enforced appropriately. A network flow describes how packets are routed through your network. To better visualize the flow of network traffic in your cloud environment, you can use network diagram software to create a network map. The transit of network packets from your on-premises systems to your cloud environment (and vice versa) should also be considered.

2.2.5 Hardening and configuration changes
Hardening is the term used to describe the process of increasing security. This section focuses on how you can increase the security of your cloud network by implementing policies and processes.

Disabling unnecessary ports and services
When you deploy a resource in a cloud environment, make sure that only the services that are required are running. Assume you installed a Linux server in your cloud architecture using an "out of the box" installation method. Many services that aren't required, such as the print service, will most likely be running on the server. Unnecessary services can put your resource at risk of being compromised. Analyze which services are running on the resource and deactivate those that aren't required to accomplish the resource's function (see the example after the list below).

Disabling weak protocols and ciphers
In your cloud environment, you should have a list of protocols and ciphers that your firm supports and authorizes. Anything that isn't on this list should be kept out of your cloud architecture. Because there are so many protocols and ciphers, it's impossible to provide a comprehensive list that would meet every organization's needs. The following list gives you an overview of some of the most commonly cited weak protocols and ciphers:
● FTP
● Telnet
● POP3
● IMAP
● SNMP v1 and v2
● RC2
● RC4
● MD5
● DES
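As a rough sketch of both ideas on a Linux host, the commands and configuration snippet below list listening services, disable one that isn't needed, and restrict a web server to modern TLS versions; the service name and file paths are examples, not prescriptions.

# List listening TCP/UDP ports and the processes that own them
ss -tulpn
# Stop and disable a service the server's role does not require (printing, in this example)
systemctl disable --now cups.service
# Example nginx TLS hardening, placed in the server configuration (e.g., /etc/nginx/nginx.conf):
#   ssl_protocols TLSv1.2 TLSv1.3;
#   ssl_ciphers HIGH:!aNULL:!MD5:!RC4;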

Firmware upgrades
Firmware is the software that controls and manages a system's physical hardware components. This software is often stored on dedicated nonvolatile memory devices, such as ROM, rather than on the resource's file system. Firmware is frequently overlooked in security policies because it simply "works" as is, and upgrading it seldom adds important new functionality. Firmware upgrades, however, frequently address security problems, such as closing security flaws. As a result, a security strategy should incorporate firmware updates on a regular basis.

Control ingress and egress traffic
Firewalls, packet brokers, and data loss prevention (DLP) are examples of tools that govern traffic flow. These tools are intended to manage traffic entering your network (ingress traffic) as well as traffic exiting your network (egress traffic).

Whitelisting or blacklisting
Whitelisting is the process of making a list of servers, websites, or other resources to which you want to allow access. Blacklisting is the process of compiling a list of servers, websites, or resources to which you want to restrict access. To regulate the ingress and egress flow of network traffic, these lists are frequently used in conjunction with software such as firewalls or DLP software.

Proxy servers
A proxy server is a technology that facilitates communication between a client and a server. There are various benefits, depending on the proxy server's architecture. Proxy servers come in a variety of shapes and sizes, including the following:
Tunneling proxy: This form of proxy is used to connect two networks together, for example, when an IPv4-based network needs to communicate with an IPv6-based network. A graphic illustration of a tunneling proxy may be found in Figure.


Forward proxy: A forward proxy is one that is meant to function on the client side of a communication. Instead of talking directly with the web server, a web browser might point to a proxy server. When someone refers to a system as a proxy server, they are usually referring to one that acts as a forward proxy. A graphic illustration of a forward proxy may be found in Figure.

A reverse proxy server is one that has been set up on the server side. Instead of having web clients connect directly to your company's web server, you may have them connect to a proxy server first. The proxy server communicates with the web server to obtain the requested data. A graphic illustration of a reverse proxy may be seen in Figure.

Distributed denial of service (DDoS) protection
A DDoS attack is a cyberattack on your network or a network resource. A high volume of network packets is transmitted from numerous systems with the objective of overwhelming systems inside your network. DDoS attacks can take many different forms, including:
● HTTP floods
● DNS query floods
● SSL abuse
● SYN floods
● UDP reflection floods


2.3 Given a scenario, apply the appropriate OS and application security controls.
2.3.1 Policies
A policy is a set of rules intended to provide an increased level of security. Components of the cloud infrastructure, such as user accounts and software, are more vulnerable without such controls. This section discusses several policies that you should consider putting in place in your cloud environment.

Password complexity
A password complexity policy defines the requirements that user passwords must meet, such as minimum length, the use of uppercase and lowercase letters, numbers, and symbols, and how often passwords must be changed. The figure below shows the password policy options that AWS provides.

Figure 19 AWS Password Policy
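As a sketch of configuring such a policy from the command line, the AWS CLI call below sets an IAM account password policy; the parameter values are illustrative, not recommendations.

# Require 14+ character passwords with mixed character classes, rotated every 90 days
aws iam update-account-password-policy --minimum-password-length 14 --require-uppercase-characters --require-lowercase-characters --require-numbers --require-symbols --max-password-age 90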

Account lockout
An account lockout policy specifies when a user's account will be locked. Any of the following can be included:
● An expiration date
● Too many attempts to log in to the account
● The number of days that the account is inactive
● Login attempts from unauthorized locations


Application whitelisting
When deploying computing resources in the cloud (or on premises), you should think about implementing an application whitelisting policy. When you create an application whitelist, you select which programs can be installed and run on that virtual machine or container. This policy's objective is to keep malware from being installed or executed. Malware can be installed either knowingly by the user or unknowingly, for example by clicking on a link to a site that downloads and installs the malware.

Software feature
The majority of software programs have features that might be deemed a security concern. An email program that opens attachments, for example, may open a file that contains a virus or worm. To offer a greater degree of security, a software feature policy defines which functionality should be disabled.

User/group
User accounts from the cloud and the operating system can be organized into groups. This is usually done to give users access to resources or to apply a policy to a group of user accounts. Consider the following scenario: you want three user accounts, ned, fred, and ted, to be able to view data from a database. You could give each user access to the database individually. This strategy may be feasible for a small number of user accounts, but what happens when there are hundreds or thousands of user accounts? What happens when accounts are regularly deactivated and new ones added to the environment? Applying permissions to a group, and simply managing the group's membership, scales much better.

2.3.2 User permissions
A user must have the appropriate permissions for cloud resources in order to access them. In various circumstances these rights can be granted on individual resources or on kinds of cloud resources. You could, for example, build a permission that enables a user to read data from any database while also allowing that person to alter data in only certain databases. In terms of operating systems, permissions are used to secure files in both Linux and Microsoft Windows, the two most common operating systems used in cloud computing. On Microsoft Windows, for example, you may access a file's properties by right-clicking it and selecting Properties. Then, as shown in Figure, you go to the Security tab and change the file's permissions.


Figure 20 Changing the Permissions of a File in Windows

Every file and directory in Linux has standard permissions (also known as read, write, and execute rights) that enable or deny access to a user. Use the ls -l command in Linux to see the permissions of a file or directory:

[student@localhost ~]$ ls -l /etc/chrony.keys
-rw-r-----. 1 root chrony 62 May 9 2021 /etc/chrony.keys

The first 10 characters of the output indicate the file type (a first character of hyphen [-] represents a plain file, and d denotes a directory) and the permissions for the file. The file's user owner (root in the example above), the group owner (chrony), and all remaining users (referred to as "others") each have a distinct set of permissions. Each set can hold three permissions: read (r), write (w), and execute (x). If a permission is set, the character that represents it is shown; otherwise, a hyphen (-) appears to indicate that the permission has not been granted. As a result, r-x denotes "read and execute are enabled but write is not."

Whether the item is a file or a directory determines what the read, write, and execute permissions actually mean. For files, the permissions are as follows:
Read: You can view or copy the contents of the file.


Write: You can change the contents of the file.
Execute: You can run the file as a program. Before you can execute your software, you must first make it executable.

The chmod command is used to modify file permissions. It may be applied in two ways: symbolically and in octal form. With the octal method, permissions are assigned numeric values:
● Read = 4
● Write = 2
● Execute = 1

One number can express a whole permission set by adding these values together:
● 7 = rwx
● 6 = rw-
● 5 = r-x
● 4 = r--
● 3 = -wx
● 2 = -w-
● 1 = --x
● 0 = ---

To modify a file's permissions to rwxr-xr--, run the following command:

chmod 754 filename

When using octal permissions, always supply three digits, which updates all three permission sets. But what if you want to update only one of the sets? As demonstrated in the table below, you use the symbolic method by giving the chmod command three values: who, what, and which permission.

Who              What               Permission
u = user owner   + (add)            r
g = group owner  - (remove)         w
o = other        = (set exactly)    x
a = all sets

Figure 21 Symbolic Permission Values
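For example, a minimal sketch of the symbolic method (the file name is a placeholder):

# Add write permission for the group owner without changing the other permission sets
chmod g+w filename
# Remove execute permission for "others"
chmod o-x filename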

2.3.3 Antivirus/anti-malware/endpoint detection and response (EDR)
Antivirus/antimalware software is available for almost any computing system, including traditional operating systems like Microsoft Windows and Linux, as well as mobile device operating systems like iOS (Apple's mobile OS) and Android.


These products have become standard, and they're especially crucial for cloud resources, which are often accessed over the Internet. Many businesses employ EDR software in addition to putting these products on individual devices. EDR software collects data from endpoints (individual systems) and stores it in a central database. This data is used to identify whether there are any network concerns. By studying this information, organizations can gain a better understanding of the threats that are reaching their network's endpoints.

2.3.4 Host-based IDS (HIDS)/Host-based IPS (HIPS)
An intrusion detection system (IDS) is a software or hardware device that detects whether an intrusion is taking place or has taken place on a network or a host. IDS software is divided into two types: network-based and host-based. A network-based intrusion detection system (NIDS) is software that examines network traffic to see if an intrusion is occurring or has occurred. A host-based IDS (HIDS) monitors the state of a system to see if an intrusion is taking place or has already occurred. These software packages come in a variety of flavors, but the basic idea is that a HIDS uses a database that describes what to watch on the system. This can involve tracking actions taken by other software programs, actions taken by users, changes to the operating system's configuration, and changes to file systems. It's vital to remember that an IDS is meant to keep an eye on the system or network for intrusions, not to protect the system against them. An IDS may perform some tasks, such as creating reports or sending alerts through email or SMS, but it does not safeguard the system or network directly. It can send out notifications so that a manual action can be taken. An intrusion prevention system (IPS) keeps an eye out for intrusions and can intervene if one is identified. A HIPS, for example, may identify a suspect login and block access to the system from the source IP address.

Figure 22 Host-based IPS (HIPS)


An IPS has an advantage over an IDS in that possible threats can be mitigated faster than if a person is engaged. The drawback of an IPS over an IDS is that false positives might result in access being denied to someone who should have it.

2.3.5 Hardened baselines
Each resource should be built with certain features and functions in mind. For instance, you may have a virtual machine set up to function as a web server. Unfortunately, when you execute an "out of the box" installation on many operating systems, extra functions are activated. A hardened baseline is a set of requirements for each system you deploy. This baseline can establish guidelines for how different components of the system are configured, such as the following:
● Operating system configurations
● Network configurations and services permitted on the network
● System monitoring configurations
● Data encryption methods
● Application configurations
● Security appliance configurations
● Backup policies and procedures
● Patching and update policies

Hardened baselines are designed to create explicit standards and processes for ensuring the security of every resource deployed by your business. Long before your business deploys any resources in a production environment, these baselines should be fully defined.

Single function
As indicated in the preceding section, each resource should be designed to serve a single purpose. For example, a virtual machine that serves as a web server for an organization should not also operate as a mail server, NTP server, or file server. This separation of functions is one of the main benefits of cloud computing and one of the main reasons to use containers rather than entire operating systems. In a cloud architecture, you can tailor the system resources (CPU, RAM, and so on) to fit the needs of the computing resource you're deploying. You won't have to guess how much of the system's resources your web server will use, and you won't feel obligated to "overburden" a costly server with extra services. Using a single-function approach also creates a more secure environment. For example, an attacker may be able to break into your web server (which is bad), but with a single-function approach the attacker does not thereby gain access to your mail server, NTP server, or file server.


2.3.6 Builds
A build is a release that occurs when an organization builds software and distributes it, either internally or externally. This section delves into the different types of builds.

Stable
A stable build is intended to be a release that can be used in a production setting. If you are a consumer who purchases software, it is usually a stable build. Early versions of software are known as beta builds and are issued before the final version is published. Some companies like to get access to beta releases because it gives them a head start on learning how the product will operate. Beta builds, however, come with little to no support or warranty. They should not be used in production situations and should be used at your own risk.

Long-term support (LTS)
A long-term support build is one that is stable and should be maintained for a longer period of time than usual. For certain businesses this is a critical concern, because upgrading to a new software version can introduce a number of issues, including:
● The time, effort, and money needed to guarantee that the new version meets requirements.
● New licensing fees that might be incurred.
● The requirement to address discrepancies or incompatibilities; a newer version of software, for example, may not be compatible with existing software in the business.
● Additional training expenditures to teach existing staff and customers how to use the new software version.
● Decreased output while employees adapt to the latest version of the program.

Canary
You may have heard of workers bringing a canary bird into mines to detect harmful levels of poisonous gases. The theory was that, because the canary has a higher respiratory rate than humans, it would be affected by the gases sooner than people, giving the workers an early warning. A canary build works the same way: the new release is first rolled out to a small subset of users or systems so that problems are detected early, before the release reaches the entire environment.

2.3.7 Encryption
Application programming interface (API) endpoint
An API provides a well-known means of communication between a client and a server. A client sends an API request to the server, most likely using representational state transfer (REST). API calls are prevalent across the Internet. Even if an API request is made within a secure network, the transmission itself should be encrypted. As a result, API endpoints are typically established using the HTTPS protocol.


Application
Applications frequently store and transmit data. All data in transit and at rest should be encrypted to protect the security of this information. This is especially essential in cloud contexts, since data stored in a network-accessible location is typically exposed to more potential access.

OS
When it comes to OS encryption, it's usually the storage disk or filesystem where the OS resides that is encrypted.

Storage
Although most cloud-based storage systems offer some form of encryption, it may not be activated by default. Encryption should be enabled on any cloud storage that is used to store sensitive data.

Filesystem
Filesystem encryption refers to any filesystem in which the file data (and, in certain cases, metadata) is encrypted when the data is "at rest." When a user or program needs the data, it is decrypted. Encryption is sometimes offered as a feature of a filesystem that may be switched on or off. In other circumstances, the encryption is handled by a separate program. In environments like the cloud, where data on a filesystem may be viewed by several people, using an encrypted filesystem is an important aspect of a security strategy.

2.3.8 Mandatory access control
Consider a scenario in which a person logs into an operating system and then wishes to share a file with another user so that the latter can read its contents. This is usually accomplished through file permissions, and if the user owns the file, he or she should be able to alter the file permissions. This is called discretionary access control (DAC). The issue with DAC is that users don't always make the best decisions when it comes to security. Let's say the user couldn't figure out how to adjust the file's permissions so that only the other user could access it. Instead, perhaps out of annoyance or a lack of time to learn how to perform the operation, the initial user decides to allow everyone to access the contents of the file. Depending on the contents of the file, this can be a very bad security decision.


Figure 23 Mandatory access control

A mandatory access control (MAC) system is a collection of rules that provide or restrict access to a resource to entities (users, software programs, and so on). Individual users do not have the power to alter the access restrictions in MAC, unlike DAC. The access rules are managed by a MAC policy, and the administrator should be the only one who can change it. On the operating system front, MAC systems are very common. Mandatory Integrity Control is used by Microsoft to deliver MAC on Windows. SELinux and AppArmor are prominent MAC systems on Linux platforms. MAC is most likely used by your cloud provider as well. As the cloud administrator for your company, for example, you may provide users access to resources through cloud permissions or rules.
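On a Linux host that uses SELinux, you can quickly check whether the MAC policy is being enforced; a minimal sketch:

# Show the current SELinux mode (Enforcing, Permissive, or Disabled)
getenforce
# Show more detail, including the loaded policy name
sestatus
# Switch to enforcing mode until the next reboot (edit /etc/selinux/config to make it permanent)
setenforce 1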

2.3.9 Software firewall
Firewall software can be used to secure a network or a single resource. When firewall rules are in place, the firewall filters which network packets can enter the network or the operating system. There are several firewall applications to choose from; on the CompTIA Cloud+ test, you shouldn't anticipate being quizzed on any specific firewall software. Firewalls in general offer a variety of functions and benefits, including the following:
● A firewall can deny or accept packets based on any header information, such as the source IP address or destination port.
● A firewall can sometimes reject or accept packets based on the message's content.
● Firewall software typically includes advanced functionality such as Network Address Translation (NAT), virus screening, intrusion prevention, and VPNs.
● Firewalls can log network packet information.
● Firewalls are not one-way devices. A firewall can also be used to prevent internal resources from accessing an external resource.

2.4 Given a scenario, apply data security and compliance controls in cloud environments.
2.4.1 Encryption
• Symmetric - AES, DES, 3DES, RC4, Blowfish
• Asymmetric - RSA, DSA, ECC, DH, DHE
• Hashing - MD5, SHA1, SHA2, SHA3, RIPEMD

When interconnecting networks, especially over a public network such as the Internet, it is mandatory that you encrypt all traffic from the source to the destination. Encryption can be a complex topic, as you have learned throughout this book. Use the most current encryption types and transforms, such as AES-256, to protect your data. Cloud providers commonly offer encrypted VPN connections from your facility to your cloud fleet of services. Use their best practices and follow the guidelines that they publish online; you will need this information when configuring and testing a VPN connection. Sometimes the terminology can be confusing, so read the provider's documentation carefully to make this work.

Symmetric Encryption
This method uses the same key (some sort of unique value) to both encrypt and decrypt the data.
• AKA secret key cryptography
• Same key for encryption and decryption
• Faster and more efficient
• Problem of how to transmit the key securely
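A minimal sketch of symmetric encryption with OpenSSL, where the same passphrase-derived key encrypts and decrypts (file names and passphrase are placeholders):

# Encrypt a file with AES-256-CBC using a key derived from a passphrase
openssl enc -aes-256-cbc -salt -pbkdf2 -in plans.txt -out plans.txt.enc -pass pass:ChangeMe123
# Decrypt it with the same passphrase (the shared secret both parties must possess)
openssl enc -d -aes-256-cbc -pbkdf2 -in plans.txt.enc -out plans-decrypted.txt -pass pass:ChangeMe123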

Asymmetric Encryption
This method encrypts and decrypts data using different keys. One key is known as the public key, and the other as the private key. This encryption method could be used if you needed someone to send you data over the network: you give this person the public key, and he or she encrypts the data. The private key, which you never share with anyone else, is the only way to decrypt the data.
• AKA public key cryptography
• Security based on the discrete log problem
• Public key: everyone can have it
• Private key: stays private

Least Privilege
Another important security control is the principle of least privilege. Employees should be granted only the minimum permissions necessary to do their job; no more, no less. Incorporating the principle of least privilege limits potential misuse and the risk of accidental mishandling or viewing of sensitive information by unauthorized people. Rights identify what a user can do: read, write, access, change the system time, reboot the system, and so on.

Linux Permissions
▪ Owner - who owns the file
▪ Group - group owner (groups are managed with the groupadd command)
▪ Others - everyone else
▪ Read (r) - 100 (4)
▪ Write (w) - 010 (2)
▪ Execute (x) - 001 (1)
▪ Example: chmod 644 = rw-r--r--
▪ chown - change the owner of a file

Windows Permissions
▪ Read
▪ Read and Execute
▪ Write
▪ Modify

2.4.2 Integrity
While data encryption is concerned with preventing unauthorized individuals from viewing the original data, data integrity is concerned with ensuring the data is accurate and consistent. To do so, data integrity must be maintained at all stages of the data lifecycle, which includes transporting, storing, retrieving, and processing data. Several tools can be used to ensure data integrity, such as hashing algorithms, digital signatures, and file integrity monitoring (FIM).

Hashing algorithms
• Hashing algorithms verify that data has not been changed in transit
• If a hash matches, the file is the same
• If the hash changes, the file has been modified
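A minimal sketch of this check on a Linux system (the file name is a placeholder):

# Record the hash of a file while it is in a known-good state
sha256sum image.iso > image.iso.sha256
# Later, or after transferring the file, verify that the contents have not changed
sha256sum -c image.iso.sha256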


Antivirus
Host-based firewalls can be configured based on policy so that many machines use the same configuration. Firewall configuration can also be automated with scripting, and you can use Windows group policies to configure the Windows Defender firewall that comes with Windows. Many antivirus vendors bundle a host-based firewall with their products, and these can be managed with a central management application or cloud portal if the licensing for that application or portal has been purchased.

Antivirus or antimalware software looks at actions on a system to identify malicious activity. It does this through the same two methods used by an IDS/IPS: signatures and heuristics. The terms antivirus and antimalware are used interchangeably. Both antivirus and antimalware software detect viruses, trojans, bots, worms, and malicious cookies. Some antivirus or antimalware software also identifies adware, spyware, and potentially unwanted applications. Signatures are descriptions of what malicious actions look like. Antivirus and antimalware software reviews the data in memory, scans data on the disk or disks attached to the system, and takes action if it finds data that matches a signature. The second screening method is heuristics, which looks for patterns that appear malicious, such as a user-mode process trying to access kernel-mode memory addresses. Just as with an IDS or IPS, antivirus or antimalware heuristics operates off an understanding of what constitutes normal on the device. A baseline is configured and periodically updated so that the antivirus or antimalware software understands what to expect from the network traffic. Anything else is an anomaly, and the software will take action.

Cloud antivirus is a programmatic solution that shifts antivirus operations to a cloud-based server, relieving the user's computer from the burden of running a complete antivirus suite. Unlike traditional security software that relies on the local processing power of a user's computer, cloud-based security solutions install a compact "client" program on the desktop, which then establishes a connection with the security provider's web service. At this service, data from antivirus scans undergoes analysis, and instructions for appropriate countermeasures are relayed back to the user's computer. The cloud antivirus market is experiencing growth as established security companies and emerging startups harness distributed computing technology to offer enhanced protection.

Advantages: By leveraging cloud technology to process and interpret scan data, the user's computer only needs periodic scans of its file system, significantly reducing the processing power required for system security. Real-time data can be pushed to the desktop client, updating local blacklists (containing malicious files and sites) and whitelists (comprising approved files and sites), obviating the need for manual updates or relying on infrequent automatic updates. Cloud antivirus often proves to be a more cost-effective alternative to acquiring a comprehensive software suite. All common antivirus features, including virus scanning, scan scheduling, reporting, and file removal, are integral components of cloud-based antivirus offerings. The primary alteration is the processing location.

Drawbacks: Potential drawbacks of this antivirus solution include its dependence on connectivity; if a provider's web service experiences downtime, endpoint computers are left vulnerable since the local client can only perform scans, not interpret the results. Additionally, optimizing the solution is critical; vendors must determine which blacklisted and whitelisted definitions are essential for inclusion in the local client without overburdening it, and which can remain on the cloud server. Lastly, there are concerns about user data being uploaded to cloud servers, potentially posing a risk of secondary infection.
Secbay Press being uploaded to cloud servers, potentially posing a risk of secondary infection.Many antivirus and antimalware vendors have a central management application or cloud portal option that can be purchased. These tools or portals are very valuable for ease of administration. Each antivirus or antimalware client reports in to the portal, and administrators can view all machines in a set of dashboards. Dashboards show things like machines with outdated signatures, number of viruses detected, virus detection rates, virus types, items in quarantine, number of files scanned, and much more. These administration tools usually allow administrators to deploy antivirus or antimalware software to endpoints without walking to each machine. Some antivirus and antimalware tools come with other services bundled in. These include host-based firewalls, data loss prevention, password vaults, e-mail scanners, web scanners, The term "Endpoint Threat Detection and Response" (ETDR) was originally coined by Gartner's Anton Chuvakin in July 2013. It was created to describe "tools primarily designed to identify and investigate suspicious activities (as well as any traces thereof) and other issues on hosts or endpoints." Often referred to as "Endpoint Detection and Response" (EDR), this category of solutions is relatively recent and is occasionally compared to "Advanced Threat Protection" (ATP) regarding its comprehensive security capabilities. Endpoint detection and response (EDR) is an emerging technology designed to meet the demand for continuous surveillance and swift reaction to advanced threats. It can be contended that EDR serves as a variant of advanced threat protection. EDR tools operate by overseeing events on endpoints and networks, recording this data in a central database, and subsequently conducting in-depth analysis, detection, investigation, reporting, and alerting. This process relies on a software agent installed on the host system, serving as the core for event monitoring and reporting. Continuous monitoring and detection are facilitated through the utilization of analytical tools. These tools pinpoint tasks capable of enhancing a company's overall security posture by recognizing, responding to, and deflecting internal threats and external attacks. Endpoint detection and response tools do not all operate in identical manners or offer identical sets of capabilities. Some of these tools conduct extensive analysis within the agent, while others concentrate on the backend through a management console. Variations may also exist in terms of data collection timing, scope, or their capacity to integrate with threat intelligence providers. Nonetheless, all endpoint detection and response tools carry out the same fundamental functions with a shared objective: to establish a mechanism for continuous monitoring and analysis, enhancing the ability to promptly identify, detect, and thwart advanced threats. Digital signatures Assume you receive a letter from a friend. How would you know it was truly from that person? One option is to have your friend sign the letter at the bottom. If you recognize the signature, you can be more confident that it was made by your friend. Digital signatures are used in the same way, but their implementation is a little more complicated. Asymmetric cryptography is used in digital signatures, in which the signature is encrypted using an individual's or organization's private key. Another method is used to disseminate the public key. 
Only the public key can decrypt the signature that has been encrypted with the private key. Successful decryption validates that the data came from the correct source.
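As a minimal sketch of how signing and verification work in practice (this assumes the third-party Python cryptography package is installed; the key size, padding choice, and message are illustrative only, not something prescribed by the exam objectives):

# Minimal digital-signature sketch using the third-party "cryptography" package.
# Assumes: pip install cryptography. Key parameters and message are illustrative.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The signer generates a key pair and keeps the private key secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # distributed to anyone who must verify

message = b"Quarterly report v1.0"

# Sign with the PRIVATE key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify with the PUBLIC key; verify() raises InvalidSignature if the data or
# signature was altered, so reaching the print statement means validation passed.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature verified: the message came from the private-key holder and was not modified.")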


File integrity monitoring (FIM)
In some cases, it is necessary to determine whether or not data within a file has changed. This determination is handled by a process known as file integrity monitoring. When a file is in a known state called a baseline, FIM generates a checksum. This checksum is a value based on the current contents as well as, in some cases, additional file attributes such as the file owner and permissions. You can take another checksum in the future to see if a file or a file attribute has been changed. When comparing the original and new checksums, if they match, the current file is the same as the original. This technique can be used to determine if a key operating system file or a file downloaded from a remote server has been tampered with.
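A small sketch of the baseline-and-compare idea using only the Python standard library (the file paths and the baseline file name are illustrative assumptions):

# Minimal file-integrity-monitoring sketch: record a baseline of SHA-256 checksums,
# then verify files against it later. Paths and store name are illustrative.
import hashlib, json, os

def checksum(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths, store="fim_baseline.json"):
    """Record the known-good checksums (the baseline)."""
    with open(store, "w") as f:
        json.dump({p: checksum(p) for p in paths}, f, indent=2)

def verify(store="fim_baseline.json"):
    """Compare current checksums against the baseline and report changes."""
    with open(store) as f:
        known = json.load(f)
    for path, old in known.items():
        if not os.path.exists(path):
            status = "MISSING"
        else:
            status = "OK" if checksum(path) == old else "CHANGED"
        print(f"{status:8} {path}")

# Example usage: baseline(["/etc/passwd", "/etc/ssh/sshd_config"]); later, verify()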

2.4.3 Classification
Consider how you would handle data containing credit card information versus how you would handle data containing comments made about your company's website. Data containing credit card information is much more sensitive than customer comments, so you should treat the two differently. In this case, data classification is critical. Data classification is the process of categorizing data based on how you intend to use it. These categories can be based on rules governing how sensitive the data is, who should be able to read it, who should be able to modify it, and how long the data should be available. Unless you are storing data related to compliance regulations (such as SOC 2, GDPR, PCI DSS, or HIPAA), the data classification criteria are entirely up to you.

Data Classifications
• Organize data into different tiers and categories to make data available as required
• Meet regulatory requirements for risk mitigation and security
• Secure data such as PII and PHI
• Metadata: data about data that explains, categorizes, or classifies the data
• Classification criteria need to be clearly defined in the company security policy

Public: This data is accessible to everyone, even those outside of your organization. This often comprises data from your company's press releases, material from your public website, and announcements made on social media platforms.

Internal: Only members of your organization should have access to this information. An example of this data might be upcoming updates to a software product developed by your company.

Confidential: This material should be accessible only to those individuals who have a need for it. Personally identifiable information (PII), such as an employee's Social Security number, could be included. Compliance regulations frequently govern the rules for handling this data.

Restricted: Restricted data may appear to be comparable to confidential data, but it is usually more tied to proprietary information, company secrets, and, in some situations, material regarded as secret by the government.
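Because the exact tiers and handling rules are organization-specific, a classification policy is often easiest to apply consistently when written down as structured data. A hypothetical sketch (the labels, controls, and retention periods below are illustrative, not a standard):

# Hypothetical data-classification policy expressed as structured data.
classification_policy = {
    "public":       {"audience": "anyone",                 "encryption_required": False, "retention_years": 1},
    "internal":     {"audience": "employees",              "encryption_required": False, "retention_years": 3},
    "confidential": {"audience": "need-to-know",           "encryption_required": True,  "retention_years": 7},
    "restricted":   {"audience": "named individuals only", "encryption_required": True,  "retention_years": 10},
}

def handling_rules(label):
    """Look up how data with a given classification label must be handled."""
    return classification_policy[label]

print(handling_rules("confidential"))   # {'audience': 'need-to-know', 'encryption_required': True, ...}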

2.4.4 Segmentation
• Cloud segmentation: the process of dividing a cloud deployment into sections
• Allows for more granular security policies
• Cloud networks can be segmented into functions, and security policies can be applied to individual functions
• Cloud networks are segmented into IP subnets, which allow like functions to be placed in a subnet and policy applied to ingress and egress traffic

Cloud segmentation is the process of dividing your cloud deployment into sections to allow granular security policies to be applied. By applying segmentation to your cloud deployment, you can implement a multilayered security approach that aligns with a defense-in-depth security model. Cloud networks can be segmented into functions, and then security policies can be applied between each segment. Segments can include the public-facing web servers, an additional segment for the application layer, and a third for the database. Other segments may include dedicated storage, backend management and services, and a DMZ. By creating multiple segments, greater security and controls can be applied to the flow of data into and out of each segment. A strong security posture on the cloud network periphery is critical: it defines who you allow into and out of your network and how your network defenses are configured. Security is applied using network access control lists, security groups, users, user groups, roles, and firewall configurations that are designed to meet your company's security requirements. These security policies protect, for example, data in storage systems, databases, and application servers from unauthorized access. Networks can also be segmented by creating more than one virtual private cloud and interconnecting the VPCs with security policies in place to define the permit/deny rules for the traffic flow between the VPCs. In this approach, each VPC can be dedicated to a specific function, such as a production workload or development, and be isolated from the other processes running in your cloud deployment. Cloud networks are also segmented into IP subnets, which allow like functions to be placed in a subnet and a policy to be applied to the traffic allowed inside, incoming to, and outgoing from each subnet. Subnets can use either public IP addresses that are reachable from the Internet or private IP ranges, which provide some degree of security since they are not reachable from the Internet without a NAT gateway. For the sake of completeness, we should mention protected subnets, which are the same as private subnets but are 100 percent isolated from the Internet since a protected subnet does not have access to a NAT gateway. Network segments should be spread across multiple cloud data centers for survivability and resiliency.
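Syntax differs by provider, so the following is only a provider-neutral sketch of the idea: each segment (subnet) carries its own ingress rules, and only the tiers that must talk to each other are allowed to. The CIDR ranges, tier names, and ports are illustrative assumptions.

# Provider-neutral sketch of a three-tier segmented cloud network.
segments = {
    "web": {"cidr": "10.0.1.0/24", "ingress": [("0.0.0.0/0", 443)]},      # public-facing HTTPS only
    "app": {"cidr": "10.0.2.0/24", "ingress": [("10.0.1.0/24", 8080)]},   # reachable only from the web tier
    "db":  {"cidr": "10.0.3.0/24", "ingress": [("10.0.2.0/24", 5432)]},   # reachable only from the app tier
}

def is_allowed(src_cidr, dest_segment, port):
    """Check whether traffic from src_cidr to dest_segment on port matches an ingress rule."""
    return (src_cidr, port) in segments[dest_segment]["ingress"]

print(is_allowed("10.0.1.0/24", "app", 8080))   # True: the web tier may reach the app tier
print(is_allowed("0.0.0.0/0", "db", 5432))      # False: the database tier is not Internet-reachable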


2.4.5 Access control
Access control (or compute segmentation)
• Allows local connections and enhanced security
• Best practice is to place compute resources and virtual machines into an isolated subnet and configure access control
• Internet-facing web servers can be segmented to allow common protocols such as HTTP and HTTPS

Access control is a strategy for determining who has access to a resource. In terms of data access control, accessing the resource can entail viewing, altering, or destroying the data. In most cloud systems, the notion of "who" can include both people and other resources. For example, a payroll program may require secure data about employees housed in a database, so access control policies must be in place that allow or deny access to both people and resources. To gain access to cloud resources, people are granted user accounts, and permissions are applied to those accounts to allow access to specific resources. Applications are assigned to roles, which are analogous to user accounts in that permissions can be applied to roles in the same way they are granted to user accounts.

2.4.6 Impact of laws and regulations
Laws and Regulations
• Drive compliance
• Cloud providers can supply compliance documentation, but the customer still has to meet the same criteria
• Plan compliance from the beginning of the project so your migration goes smoothly
• Use reference guides when possible

Many laws and regulations govern how data is handled in a business, and they differ based on the location of your data. The regulations that control data in the United States, for example, differ from those that govern data in the European Union (EU). The applicable laws and rules also vary depending on your organization's business. For example, if your organization is a retailer that accepts credit card payments, you will almost certainly need to adhere to the PCI Security Standards when dealing with credit card data. If your company is a hospital, you must adhere to HIPAA standards when dealing with patient data. Memorizing a long list of laws and regulations is unlikely to be required to pass the certification exam; many organizations have full-time staff dedicated to enforcing these requirements. Understanding the implications of these regulations, however, is crucial for the exam.


Legal hold
A legal hold is a situation wherein a business or organization changes its method of records management in order to preserve information because of pending litigation. In the digital age, it often involves the handling of business data across sophisticated IT architectures.

2.4.7 Records management
Organizations frequently generate, collect, and accumulate large amounts of data. Because storing data is not free, the volume of information maintained by an organization can result in hefty expenditures. While cloud vendors offer various options for data storage, they charge for it, so enterprises generally do not want to hold data for longer than necessary. The process of deciding how and for how long to keep data is known as records management. This broad topic encompasses data classification and encryption, as well as versioning, retention, and disposal policies.

Versioning
This subject delineates the versioning methodologies employed by Google APIs, and, in essence, these approaches are applicable across all Google-managed services. On occasion, there arises a need to implement modifications to an API that are incompatible with previous versions, often referred to as "breaking" changes. Such alterations have the potential to create problems or disrupt code that relies on the original functionality. To mitigate the impact of these breaking changes, Google APIs employ a versioning structure. Furthermore, Google APIs restrict access to particular functionalities based on their stability levels, including components designated as alpha and beta.

Retention
With a cloud data retention policy, you can define when to delete certain types of data. This simplifies data privacy and security operations. Similarly, periodically purging cloud data can improve the efficiency of managing cloud infrastructure and streamline governance tasks. Keep in mind that a cloud provider may consider such information proprietary or retain the prerogative to alter hardware platforms in the future. If an organization's definition of Write Once, Read Many (WORM) necessitates safeguarding data at a hardware level, even from cloud solution administrators, then the chosen cloud solution may not align with the organization's criteria.
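As a small illustration of how a retention rule translates into an operational check (the retention periods, object names, and metadata below are hypothetical):

# Hypothetical sketch: flag objects whose age exceeds the retention period for their data type.
from datetime import datetime, timedelta, timezone

retention_policy = {"logs": 90, "invoices": 2555, "temp": 7}   # retention in days (illustrative)

objects = [
    {"name": "app-2023-01.log", "type": "logs", "created": datetime(2023, 1, 31, tzinfo=timezone.utc)},
    {"name": "scratch.tmp",     "type": "temp", "created": datetime(2024, 6, 1,  tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
for obj in objects:
    limit = timedelta(days=retention_policy[obj["type"]])
    if now - obj["created"] > limit:
        print(f"DELETE  {obj['name']} (older than {limit.days} days)")
    else:
        print(f"KEEP    {obj['name']}")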


Destruction
When designing a records management plan, the deletion of data must be explicitly outlined. When the data is to be deleted is one critical element to establish, but the plan should also clearly state how the data is to be removed. Physical destruction of records, degaussing, or zeroizing can all be used to erase data.

Write once read many (WORM)
Write once read many (WORM) is a type of write protection in which data can be written only once and then cannot be updated. This is an important function to have when you need to confirm that data has not been tampered with after it has been created.


2.4.8 Data loss prevention (DLP)
• DLP: data loss prevention
• Removable media: DLP protects against unauthorized use of removable media
• Data exfiltration: the unauthorized transfer of data outside an organization
• Cloud-based DLP: can protect PII and PHI

Cloud-based data loss prevention (DLP) plays a pivotal role in safeguarding an organization's sensitive or vital information against cyber threats, insider breaches, and inadvertent exposure. These solutions offer visibility and protection for sensitive data in both Software as a Service (SaaS) and Infrastructure as a Service (IaaS) applications.

Challenges and Risks in Cloud DLP: As businesses increasingly rely on cloud computing for their operations, the realm of cybersecurity becomes more complex, giving rise to a plethora of potential risks. Addressing these emerging threats necessitates a heightened reliance on cloud DLP technology. Cloud DLP solutions leverage predefined business rules to classify and safeguard confidential and critical information. They deploy the requisite security measures to prevent users and compromised accounts from unintentionally or maliciously sharing data, thereby shielding organizations from potential harm.

Driving Forces for Cloud DLP Adoption: Organizations are embracing cloud DLP in response to evolving threats that exploit social engineering techniques and stringent data privacy regulations mandating robust data protection and controlled data access. Furthermore, the expanding pool of current and former employees with access to sensitive or valuable data raises concerns about potential misuse. According to Ponemon, the incidence of insider-caused cybersecurity breaches has surged by an alarming 47% since 2018. The associated costs have also skyrocketed, registering a 31% increase to $11.45 million in just two years. Consequently, vigilance regarding both internal and external threats has never been more critical.

Cloud DLP Threat Landscape: As organizations grow, so does their susceptibility to cyberattacks. With data breaches making headlines more frequently, organizations are compelled to reevaluate the security of their cloud data, the efficacy of their protective policies, and the strategies and tools at their disposal for mitigating breaches. The repercussions of a cyberattack extend beyond financial losses. Organizations must also mitigate the risk of sensitive or inappropriate materials leaving their premises. This includes proprietary content, offensive materials, data that doesn't conform to government regulations or internal policies, and confidential information sent to unintended recipients.

Best Practices for Cloud Protection: To implement cloud data loss prevention effectively, consider assessing all security aspects within your organization, classifying the types of data you handle, and evaluating the associated risks of data compromise. Conduct employee training to reduce the risk of inadvertent data loss by insiders. Implement people-centric controls for safeguarding sensitive data. Recognize that cloud data loss prevention is an evolving facet of cybersecurity. Initiate the process by securing a subset of the most critical data within your organization. This approach simplifies the initial implementation and management, and a successful pilot program can pave the way for the expansion of the initiative across the organization.
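Real DLP products combine content inspection with context, policy engines, and enforcement points, but the core content-matching idea can be sketched with a few patterns. The regular expressions below are simplified illustrations, not production-grade detectors, and will produce false positives and negatives:

# Simplified DLP content-inspection sketch: flag text that appears to contain sensitive data.
import re

patterns = {
    "US SSN":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect(text):
    """Return a list of (label, match) pairs for content that looks sensitive."""
    findings = []
    for label, rx in patterns.items():
        findings += [(label, m) for m in rx.findall(text)]
    return findings

sample = "Please wire payment; my SSN is 123-45-6789 and card 4111 1111 1111 1111."
for label, value in inspect(sample):
    print(f"BLOCKED: {label} detected -> {value}")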

2.4.9 Cloud access security broker (CASB)
• CASB: may deliver security, management, or both
• Plays a big role in security; can deliver security services, management services, or both
• May use APIs; agentless vs. agent-based deployment

A cloud access security broker (CASB), sometimes pronounced "cas-bee," refers to either on-premises or cloud-based software positioned between cloud service users and cloud applications. Its primary function is to oversee all activities and enforce security protocols. CASBs provide an array of services, including user activity monitoring, alerting administrators about potentially risky actions, ensuring compliance with security policies, and automatically thwarting malware threats. CASBs can offer security services, management services, or a combination of both. In a broader context, "security" pertains to the prevention of high-risk incidents, while "management" encompasses the monitoring and mitigation of such high-risk events. CASBs that deliver security must be in the path of data access, between the user and the cloud. Architecturally, this might be achieved with proxy agents on each end-point device, or in agentless fashion without requiring any configuration on each device. Agentless CASBs allow for rapid deployment and deliver security on both company-managed and unmanaged BYOD devices. Agentless CASBs also respect user privacy, inspecting only corporate data. Agent-based CASBs are more difficult to deploy and are effective only on devices that are managed by the corporation. Agent-based CASBs typically inspect both corporate and personal data.


CASBs that provide management functionalities might employ APIs to retrospectively examine data and activities in the cloud, issuing alerts regarding potentially risky incidents. Another management capability of CASBs involves scrutinizing firewall or proxy logs to track the utilization of cloud applications. API-exclusive CASBs like Cloudlock, for instance, utilize APIs offered by leading Software as a Service (SaaS) applications to deliver management capabilities. In contrast, multi-mode CASBs furnish a combination of management and security features. Certain multi-mode CASBs may even incorporate Zero-Day capabilities designed to safeguard against both recognized and unforeseen threats.

2.5 Given a scenario, implement measures to meet security requirements.
2.5.1 Tools
Vulnerability scanners
• Nessus: Tenable vulnerability scanner
• Nmap (nmap.org): port scanner and vulnerability scanner
• Nikto: web vulnerability scanner that comes with Kali Linux

Vulnerability scanners are technologies that can automatically scan your cloud infrastructure's resources for potential vulnerabilities. These scanners use well-known exploits to test cloud resources and report the results. The most well-known vulnerability scanners are associated with the Open Web Application Security Project (OWASP), a group that promotes the development of tools and methods to improve software security. Other types of vulnerability scans are discussed below, including default and common credential scans, credentialed scans, network-based scans, and agent-based scans.

About 90% of vulnerabilities can only be detected via credentialed scanning. Furthermore, credentialed scanning can help you adhere to compliance and regulatory standards that require credentialed vulnerability assessment in order to fulfill specific requirements within a benchmark. Scanning on a regular basis with credentials is essential. Comprehensive credentialed scanning means more thorough assessment (a typical credentialed scan of a Windows host yields 7x the results of a non-credentialed scan), along with recommendations for discovered flaws. Nessus Professional allows for credentialed scans covering Windows and SSH hosts, cloud platforms (Amazon Web Services, Azure, and Salesforce), databases (MySQL, MongoDB, and more), Microsoft SCCM patch management, and numerous plaintext authentication frameworks ranging from HTTP to FTP.

Port scanners

• TCP: Transmission Control Protocol
• UDP: User Datagram Protocol
• IANA port list
• Well-known ports: 0 to 1023
• Registered ports: 1024 to 49151
• Dynamic or ephemeral ports: 49152 to 65535

Every computer possesses an Internet Protocol (IP) address, serving as a means for the network to determine which computer should receive packets. When a packet is dispatched to a specific IP address, the computer relies on either the application or the packet contents to determine the appropriate port for routing the packet. Every service running on the computer must be in a "listening" state on an assigned port. TCP ports 0 through 1023 are known as well-known ports, reserved for applications such as FTP (21), SSH (22), or HTTP (80). The Internet Assigned Numbers Authority (IANA) oversees these ports to ensure standardization and uniformity. TCP ports ranging from 1024 to 49151 are available for use by various services or applications, and they can be registered with IANA, classifying them as semi-reserved. Ports numbered 49152 and above are unrestricted and free for use.

Port Scanning

• Nmap: network mapper
• pktmon: Windows packet sniffer
• Angry IP Scanner



Port Responses:
• Open, Accepted
• Closed, Not Listening
• Filtered, Dropped, Blocked

A port scanner is a computer program designed to examine network ports and determine one of three possible statuses: open, closed, or filtered. Port scanners serve as valuable tools for diagnosing network-related problems and connectivity issues. Nevertheless, malicious actors employ port scanners to pinpoint potential entry points for unauthorized access and to ascertain the types of devices running on the network, such as firewalls, proxy servers, or VPN servers. A port scanner initiates a network request aimed at establishing a connection with a specific TCP or UDP port on a computer and records the ensuing response. The primary function of a port scanner is to dispatch a packet of network data to a designated port to assess its current status. For instance, to verify the proper operation of a web server, one would inspect the status of port 80 on that server to confirm that it is open and actively listening. This status information assists network engineers in diagnosing network problems and resolving application connectivity issues, and, regrettably, it also aids attackers in identifying potential ports for illicit entry into a network.
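A very small TCP connect scan can be sketched with nothing more than the Python standard library. The target host and port list below are placeholders (192.0.2.0/24 is reserved for documentation), and, as with any scanning tool, it should only be run against systems you are authorized to test:

# Minimal TCP connect-scan sketch using only the standard library.
import socket

def scan(host, ports, timeout=1.0):
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            result = s.connect_ex((host, port))   # 0 means the TCP handshake succeeded
            state = "open" if result == 0 else "closed or filtered"
            print(f"{host}:{port} {state}")

scan("192.0.2.10", [22, 80, 443])   # placeholder target and ports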

Port Scanning Basics
A port scanner sends a TCP or UDP network packet and asks the port about its current status. The three types of responses are:
1. Open, Accepted: The computer responds and asks if there is anything it can do for you.
2. Closed, Not Listening: The computer responds that "This port is currently in use and unavailable at this time."
3. Filtered, Dropped, Blocked: The computer doesn't even bother to respond.

Pktmon
Packet Monitor (PacketMon, or pktmon) was initially introduced in Windows 10 and Windows Server 2019 as part of version 1809 (October 2018 update). Since its inception, PacketMon's functionality has continued to evolve across subsequent Windows releases. Below, we outline some of the key capabilities and constraints of PacketMon in Windows 10 and Windows Server 2019, specifically in version 2004 (May 2020 Update).

Capabilities:

• Packet capture is feasible at various points within the networking stack.
• Detection of packet drops, with the ability to report drop reasons.
• Runtime packet filtering, complemented by encapsulation support.
• Flexible packet counters.
• Real-time on-screen packet monitoring.
• Proficient handling of high volumes of in-memory logging.
• Compatibility with Microsoft Network Monitor (NetMon) and Wireshark (pcapng) formats.

Limitations:
• Support is restricted to the Ethernet media type exclusively.
• Absence of integration with the firewall.
• Drop reporting is only available for supported components.

2.5.2 Vulnerability assessment
• Intelligence gathering
• Vulnerability assessment
• Vulnerability validation
• Default and credential scan
• Credentialed scan
• Uncredentialed scan

The purpose of employing vulnerability scanners is to produce a vulnerability assessment. This report is a review of a system's potential flaws. The evaluation is divided into four stages:
• Identification: During this stage, vulnerability scanners are used.
• Analysis: Using the data acquired from the vulnerability scanners, you establish the root cause of the weaknesses in this phase.
• Risk assessment: In this phase, you use a ranking score to decide which vulnerabilities are the most severe and develop a plan to fix these weaknesses.
• Remediation: During this step, you put the plan into action.

Default and common credential scans
A credential scan (not to be confused with a credentialed scan, which is explained next) searches for common user accounts. The root user account, for example, is the most common user account on a Linux-based system since it is present by default (it also provides the highest privileged access to the system). Attackers use brute-force scans to identify which user accounts exist. This method works because many authentication programs display a different error message when an account does not exist versus when an incorrect password is entered during a login attempt. After discovering that a system has an account with a certain name, the attacker launches a second attack to guess the password for the account.

Credentialed scans
Even though the names are similar, a credentialed scan differs from a credential scan. In a credentialed scan, vulnerability scanning tools are run under an account that is logged in to the system (or network) being examined. For example, if you wanted to uncover vulnerabilities in your own cloud resource, you might run a scan from a remote machine, or you could log in to the resource using a credential (user account) that you have access to. A credentialed scan identifies what would be susceptible if an attacker gained access to the system (successfully attacked a user account).

Network-based scans
Agent-based scanning: Nessus Agents represent lightweight software installed directly on a host, which can be a laptop, virtual system, desktop, or server. These agents receive scanning directives from a central Nessus Manager server, execute scans locally, and subsequently transmit vulnerability, compliance, and system findings back to the central server. Tenable introduced Nessus Agents as a scanning approach within Nessus Manager, tailored to meet the requirements of enterprise users. Nessus Agents, serving as unobtrusive software, are installed locally on host devices. They are a valuable solution for addressing common challenges associated with traditional network scanning, such as scanning in the absence of credentials or assessing assets that may not be consistently connected to the local network.

• No credentials are required with agent-based scanning.
• Discovers outdated network services, missing security patches, badly configured servers, and many other vulnerabilities.
• The Network Vulnerability Scanner with OpenVAS (Full Scan) is a solution for assessing the network perimeter and for evaluating the external security posture of a company.
• The scanner offers a highly simplified and easy-to-use interface over OpenVAS, a leading open-source network security scanner.
• It performs an in-depth network vulnerability scan by using more than 57,000 plugins.

The Lightweight scanner, available as a complimentary and highly efficient online utility, is designed to identify CVEs (Common Vulnerabilities and Exposures) that impact the network services of a target system. This identification is primarily based on the version of those services (for instance, Apache 2.4.10). The scanning process commences by detecting the accessible ports and services. Subsequently, it queries a database to identify known vulnerabilities that could potentially affect the target.

Agent-based scans
An agent-based scan is run from a locally installed program on a PC. It is frequently likened to a credentialed scan, although an agent-based scan does not require a credentialed account and instead operates as a system program. It is also usually executed with administrator credentials, so it may be able to perform deeper scans than a credentialed scan. Finally, it is frequently scheduled to run on a regular basis, whereas a credentialed scan is generally initiated manually by a user.


Service availabilities
A service availability scan is used to determine whether a certain service (web server, database server, mail server, etc.) is available. It can be executed from inside or outside the cloud architecture, depending on whether the service being scanned should be offered from within or outside the cloud. Cloud computing can be complex! As you have learned throughout this study guide, there are a lot of different components and advanced technologies that make up the cloud. It is not uncommon to see a public cloud provider offering more than 100 different services, and the combinations of all these offerings are practically limitless. With this flexibility and sophistication comes the reality that there are a lot of things that can go wrong. Not only can there be problems, there often are. In this context, "availability" refers to the assurance provided by the service provider regarding the accessibility of your data and services. Typically, this assurance is quantified as a percentage of time per year. For instance, an uptime commitment of 99.999% (often referred to as "five nines") signifies that you should not experience unavailability for more than approximately five minutes per year.
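The arithmetic behind those availability figures is straightforward; this quick sketch converts an uptime percentage into the maximum downtime it permits per year:

# Convert an SLA uptime percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

for sla in (99.0, 99.9, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (100 - sla) / 100
    print(f"{sla}% uptime allows about {downtime:,.1f} minutes of downtime per year")

# 99.999% ("five nines") works out to roughly 5.3 minutes per year.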

2.5.3 Security patches
• Patch management plan: hypervisors, VMs, virtual appliances, software, clustered systems, networking equipment
• Patching tools
• Hardening: reduce the attack surface; may be specific to a device or service
• Hotfixes: patches that fix an immediate and specific problem with a quick-fix procedure
• Scheduled patches
• Virtual patches
• Signature patches

Maintain firmware and patch levels
Security holes are continually discovered in both software and firmware, and vendors release patches as quickly as they can to respond to those discoveries. Enterprises as well as cloud providers need to apply these patches to be protected from the patched vulnerabilities. To protect network resources from threats, secure network design employs multiple overlapping controls to prevent unwanted access to protected cloud resources. Some layered security components include demilitarized zones, ACLs, and intrusion detection and prevention systems. Hardening is the process of ensuring that a host or guest is not vulnerable to compromise. Logging must be enabled to track potential intrusions. Only the required software components should be installed on the system, software patches should regularly be applied, firewall and antimalware software should be functional and up to date, and any unused user accounts should be disabled or removed. All software has bugs that must be patched to prevent bad things from happening. As events unfold, the discovery of new bugs is a common occurrence. Frequently, the particulars surrounding these bugs are either publicly disclosed or inadvertently made available to the media. Individuals with less-than-honorable intentions can exploit this information to devise malicious exploits. They do so with the awareness that a significant number of systems remain outdated and lack adequate protection. It is important to keep on top of the security landscape and have a planned approach to patch management. This section focuses on securing your cloud fleet of servers and services by keeping your systems up to date. As part of a shared security model, the cloud provider will secure the infrastructure, and the cloud customer will be required to address security concerns based on the service model they are operating under. A complete patch management approach must be developed, implemented, and adhered to in order to prevent a security breach.

Patching tools: know the systems that need patch management
Know the common systems that need ongoing patch management. The most common systems operating in the cloud are hypervisors, virtual machines, virtual appliances, software applications, and clustered systems. Also, networking components, such as switches, routers, load balancers, firewalls, NAT systems, and many others, will need to be patched to remain current and secure. Storage systems are also included here.

Hardening
These processes can be rather extensive and are specific to the device or service. It is suggested that you follow the vendor's or cloud provider's best practices on hardening each specific device or service. The best practices are usually published on support websites or offered as white papers. Follow the hardening procedures exactly as outlined, and do not hesitate to contact the document publisher should you have any questions. This is not where you want to guess or make assumptions! After the hardening process has been completed, you must validate the configurations. Here, it is suggested that you consider contracting a third-party security testing company for impartial validation services.

Scheduled patches
A patch is an update that fixes a known bug or issue. The patch is a piece of software that is intended to update an application, operating system, or any other software-based system to fix or improve its operation. Generally, patches are synonymous with fixes for security vulnerabilities or other types of operational bugs. Patches do not offer feature enhancements but rather fixes or performance enhancements. Patches can be thought of as bug fixes, as "improving usability and performance," or as "removing an undocumented feature"!

Hotfixes
A hotfix is a software update type that is intended to fix an immediate and specific problem with a quick release procedure. A hotfix may be customer-specific and not released to the public or available to everyone. Many times a hotfix is a bug fix that has been made quickly and did not follow the normal quality assurance or formal release procedures, since the intention is rapid deployment. Because of the urgent nature of supplying a bug fix for a critical issue, a hotfix has a higher risk factor than other approaches to patching software. Testing the patch is usually quick, using a subset of the full QA test suites. Installing a hotfix may introduce new and unknown issues into your environment. The risk of installing a hotfix to correct a bug and introducing new problems must be weighed against not taking action and living with the bug until all testing has been completed.


Scheduled updates
As previously stated, a planned update is not a patch, but it may contain code that fixes vulnerabilities in the same way that a patch does. The vulnerabilities were most likely resolved in prior patches; however, a new vulnerability is occasionally fixed with a scheduled update.

Virtual patches
Virtual patches do not directly address vulnerabilities, but rather employ another technology, such as a web application firewall (WAF), to provide a short-term solution to the problem. A virtual patch attaches a small program to the software with the objective of limiting the access that the vulnerability currently allows.

Signature updates
The term "signature update" is associated with antivirus software. A virus signature is similar to a virus's fingerprint. This signature is used by antivirus systems to identify whether a virus has infected a system. Keeping antivirus signatures up to date on a machine is a top priority.

Rollups
A rollup is a grouping of hotfixes. In certain circumstances, the rollup may include more than just security updates; however, the primary goal is to address a group of security or serious concerns with a single update.

2.5.4 Risk register
• Risk management: the cumulative effort to mitigate risk to as near zero as possible; you can't eliminate all risk
• Risk register: used to identify risks for an organization, sometimes to fulfill regulatory compliance

As part of a risk management evaluation for cloud computing, it is essential to weigh the information security advantages gained from the use of cloud infrastructure. In this context, the primary focus is on risks associated with public cloud environments, unless otherwise specified. A risk register serves as a valuable tool in both risk management and project management. Its purpose is to identify potential risks within a project or an organization. While regulatory compliance may necessitate its use, it primarily enables proactive monitoring of potential issues that could disrupt intended outcomes. A risk register is often maintained as a simple spreadsheet; it helps define the priority of risks and assess their potential impact. Risks are an inherent part of every project, and acknowledging their inevitability is key. A proficient project manager recognizes the importance of anticipating risks and devising strategies to address any potential challenges that may arise during the project's course. However, thorough preparation alone is insufficient. It is crucial to be ready to respond to risks as they materialize. Therefore, having a mechanism in place to track risks throughout the project's lifecycle is vital. If a risk initially goes unnoticed, it presents a problem, but if it is not monitored once identified, it can escalate into a significant issue.


In the context of PRINCE2 (Projects IN Controlled Environments), a risk register is a document employed as a risk management tool. It also serves the purpose of ensuring compliance with regulatory requirements by acting as a repository for all identified risks. The risk register contains additional information about each risk, such as its nature, reference, ownership, and mitigation measures.
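A risk register can be as simple as a structured list in which each risk carries a likelihood, an impact, an owner, and a mitigation. A hypothetical sketch (the 1-5 scoring scale and the entries themselves are illustrative only):

# Hypothetical risk-register sketch: priority = likelihood x impact on a 1-5 scale.
risk_register = [
    {"id": "R-01", "risk": "Cloud region outage",      "likelihood": 2, "impact": 5,
     "owner": "Cloud Ops", "mitigation": "Multi-region failover"},
    {"id": "R-02", "risk": "Leaked access keys",       "likelihood": 3, "impact": 4,
     "owner": "Security",  "mitigation": "Key rotation and secret scanning"},
    {"id": "R-03", "risk": "Unpatched hypervisor bug", "likelihood": 2, "impact": 3,
     "owner": "Platform",  "mitigation": "Scheduled patch window"},
]

# Review the register ordered by priority score (highest risk first).
for r in sorted(risk_register, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f'{r["id"]}: score {r["likelihood"] * r["impact"]:2} - {r["risk"]} (owner: {r["owner"]})')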

2.5.5 Prioritization of patch application
• Patching 101
• Prioritizing patches: patches can vary in application and severity
• Production applications may dictate priority

Patching 101
Software applications ranging from critical infrastructure (power grids, financial systems) to operating systems (iOS, Microsoft Windows) to consumer applications (personal finance, healthcare, payment systems) rule our lives. And, because software development is a human endeavor, it is inherently prone to error. Vulnerabilities that get introduced in software development can be exploited by malicious actors to bring down critical assets, steal sensitive data, and damage a company's brand. To systematically reduce the risk of such breaches, software vendors or third parties periodically release patches specific to vulnerabilities that, when applied, render the software robust against known exploits. The practice of remediating known vulnerabilities by timely discovery and patching has come to be known as vulnerability management.

2.5.6 Deactivate default accounts
• Deactivating default accounts is a best practice that reduces the attack surface, because default account names are freely available on the web
• Impact of security tools on systems and services: security tools may have a negative impact on availability and the network
• Effects of cloud service models on system implementation

As an example, Google Cloud allows an IAM service account to be disabled. After the service account is disabled, credential generation and API requests using that service account will fail; the account can later be re-enabled through the cloud IAM service-accounts tooling if it is needed again. The service account to disable is identified as a cloud resource, and the project it belongs to can be supplied on the command line or taken from the configured default project.

2.5.7 Impacts of security tools on systems and services
Installing and utilizing security technologies may appear to be a no-brainer, but they do have a potential cost. One of the most serious worries is that an attacker may obtain access to your system using a regular user account and then use your own security tools to uncover weaknesses in your environment. Security tools may also affect resource performance. A thorough port scan, for example, may result in network connectivity difficulties on systems within the network, some of which are production systems. Security software may sometimes provide the user with a false impression of...well, security. "We have these fantastic instruments that keep us safe, therefore we aren't at risk," the notion goes. Security tools are simply one component of a good security policy and should not be used in isolation. Some security measures are proactive, which means they may proactively shut down a system or service if a vulnerability is found, such as intrusion prevention systems (IPSs) or next-generation firewalls (NGFWs). On the one hand, this is beneficial because it limits a system's exposure. On the other hand, this might result in the unavailability of a system or service, which could have a more negative impact than if the vulnerability were simply reported and treated manually (especially in cases of false positives).

2.6 Explain the importance of incident response procedures.
2.6.1 Preparation
Documentation
Have a plan for incident response. Incidents can range from hardware failure to disgruntled employees to nation-state hacking.

Response plan/strategy: after establishing organizational policies, it is time to create a plan and strategy to handle incidents. The prioritization of incidents should hinge on their organizational impact, as outlined in the Incident Response Process of 2008. For instance, a minor incident might involve a single non-functional workstation, whereas a moderate-impact incident could be characterized by the unavailability of a server, assuming there are backup servers for failover. On the other hand, a high-impact incident might entail the theft of sensitive information from the human resources department, which contains privileged data. Aligning the categorization of incidents with their respective organizational impacts can serve as a compelling argument to garner management support. Without the backing of management, it is probable that the Computer Incident Response Team (CIRT) will not receive the necessary resources to effectively manage a crisis.

Good documentation should:
• Be accurate and verified.
• Be kept up to date.
• Be easily accessible.
• Be available from several sources.
• Be succinct and straightforward.
• Be evaluated ahead of time by the incident response team.
• Include tasks and roles.
• Include a communication strategy.


Tabletops
A tabletop exercise involves gathering members of an incident response team and presenting them with a scenario. The purpose is to walk through the measures that would be followed to deal with the occurrence. It is termed a tabletop exercise because it was originally completed with all participants in the same room, usually a conference room, and all work was done "on the tabletop." In other words, this exercise does not include any real actions, but rather a verbal tour of the events that should occur. Example scenarios include the following:

• An employee casually remarks on how kind state authorities are to provide the handful of USB drives with the state emblem on the conference room table. After some research, you discover that there is no state program to supply USB drives to staff.
• Your agency has received several complaints concerning poor Internet connectivity and the inaccessibility of your website. Following additional analysis, it has been established that your organization is the target of a DNS amplification attack, which is now overloading your DNS server and network capacity.
• According to local news sources, a large chemical facility two miles away has had a significant hazardous chemical spill. There is a chemical cloud, and your office building is directly in its path.

Documented incident types/categories
The goal of defining incident types or incident categories is to develop a system for categorizing events. For example, you might construct an incident category named "natural catastrophes," which would cover occurrences like fires, floods, and earthquakes, but not an incident like a hacking attempt. Creating incident categories is necessary for a number of reasons, including the following:

Impact assessment: Different sorts of incidents will have different effects on your business. A natural disaster, for example, may damage the entire business, but a hacking attempt may harm only the IT department. Furthermore, some incidents will have long-term consequences, and others may have short-term consequences.

Determining the response order: You might also choose to categorize incidents based on their severity. Assume the following two incidents happen at around the same time: (1) an employee misplaces a corporate laptop containing potentially sensitive information, and (2) the company's central data center sustains significant fire damage. Of the two, you would probably rank the data center damage as more important than the missing laptop. Without classifying incidents based on severity, priority may be given to the less severe incident.

Assisting in the identification of organizational weaknesses: It may be easier to identify your company's weak points if incidents are organized into categories. The rationale for this is that analyzing events by category will reveal which sorts of incidents occur more frequently, as well as how long it takes to respond to and address each incident type.

Helping to determine the response level: Categories are frequently based on the severity of the occurrence. This method of categorizing helps you decide the proper reaction to an incident.


Roles and Responsibilities
Roles define the tasks that must be completed when dealing with an incident. In most cases, a role is associated with a position in an organization (for example, a customer support manager). During the incident, each role is assigned certain tasks. The following are some examples of roles:
• Incident manager
• Incident coordinator
• Technical lead
• Communications manager
• Incident commander
• Service desk agent
• Service desk manager
• Customer service manager
• Subject matter expert (SME)
• Documentarian (also called the scribe)
• Photographer

A common practice involves utilizing a RACI matrix to elucidate the responsibilities of each role. RACI stands for Responsible, Accountable, Consulted, and Informed, and this matrix provides a straightforward means of determining which roles are responsible for various tasks or activities.

Figure 26 Sample RACI Matrix

2.6.2 Incident response procedures
Planning is an important aspect of incident response, but it is pointless if the appropriate measures are not taken when an incident happens. Typically, the incident-handling procedure is divided into phases, and different organizations divide these phases in various ways. The SANS (SysAdmin, Audit, Network, and Security) Institute, for example, has defined the six phases of incident response as follows:
• Preparation
• Identification
• Containment
• Eradication
• Recovery
• Lessons Learned

Identification
• Validating the attacking host's IP address
• Researching the attacking host
• Using incident databases
• Monitoring possible attacker communication channels
• Scope


To respond to an event, you must first recognize it. Sometimes identification is simple; for example, a power outage is easy to recognize. In other circumstances, the incident may not be detected unless specialized monitoring procedures are implemented. It is preferable to gather as much information as possible when recognizing an occurrence. Specific questions such as the following should be included on an identification report sheet:
• When did the event occur?
• How was the event discovered?
• Who (or what) was the first to notice the incident?
• What effects has the incident had?
• What impact does the event have on users and services?
• What was the point of entry if a breach occurred?
• What is the extent of the breach?

Scope Investigation
• Evidence gathering and handling
• Identifying information
• Collecting evidence

Law Enforcement
One reason that many security-related incidents do not result in convictions is that some organizations do not properly contact law enforcement. Several levels of law enforcement are available to investigate incidents: for example, within the United States, federal investigatory agencies (e.g., the Federal Bureau of Investigation [FBI] and the U.S. Secret Service), district attorney offices, state law enforcement, and local (e.g., county) law enforcement. Law enforcement agencies in other countries may also be involved, such as for attacks launched from or directed at locations outside the US. In addition, agencies have an Office of Inspector General (OIG) for investigation of violations of the law within each agency. The incident response team should become acquainted with its various law enforcement representatives before an incident occurs to discuss the conditions under which incidents should be reported to them, how the reporting should be performed, what evidence should be collected, and how it should be collected. Law enforcement should be contacted through designated individuals in a manner consistent with the requirements of the law and the organization's procedures. Numerous organizations opt to designate a single member of the incident response team as the primary point of contact (POC) for interactions with law enforcement. This individual should possess a sound understanding of the reporting procedures applicable to all pertinent law enforcement agencies and be well prepared to suggest which agency, if any, should be contacted. It is important to note that reaching out to multiple agencies should generally be avoided, as it may lead to jurisdictional conflicts. The incident response team must be knowledgeable about potential jurisdictional issues, such as those related to physical location. For instance, an organization might be based in one state while an attack on a server in a second state originates from a system in a third state, used remotely by an attacker in a fourth state; understanding the potential complexities of jurisdiction in such cases is crucial.

Containment

• Tailored containment strategy
• Varies based on the type of incident
• Potential damage to and theft of resources
• Need for evidence preservation
• Effectiveness and duration of the strategy

Choosing a Containment Strategy
Containment is a critical step that must be taken before an incident spirals out of control, overwhelming resources and causing more damage. In fact, most incidents necessitate containment, making it a crucial consideration early in the incident handling process. The act of containment affords valuable time to develop a tailored remediation strategy. An integral component of containment is decision-making, which may involve actions like shutting down a system, disconnecting it from a network, or disabling specific functions. These decisions become significantly more straightforward when predefined strategies and procedures for containment are in place. It's essential for organizations to define acceptable risks when dealing with incidents and formulate strategies accordingly. Containment strategies can vary depending on the type of incident. For instance, containing an email-borne malware infection requires a markedly different approach than handling a network-based Distributed Denial of Service (DDoS) attack. Organizations should establish distinct containment strategies for each major incident type, complete with well-documented criteria to facilitate decision-making. Some of the criteria for determining the appropriate strategy include:
• The potential for resource damage and theft
• The need to preserve evidence
• Service availability considerations (e.g., network connectivity, services provided to external parties)
• The time and resources required to implement the strategy
• The effectiveness of the strategy (e.g., partial containment versus full containment)
• The expected duration of the solution (e.g., an emergency workaround to be removed in four hours, a temporary workaround to be removed in two weeks, or a permanent solution)

In certain cases, organizations may opt to redirect attackers to a sandbox environment as a form of containment. This allows them to monitor the attacker's activities, often with the aim of gathering additional evidence. However, it's advisable for the incident response team to engage in discussions with their legal department to determine the feasibility of this strategy.


Eradication
• Eliminate components of the incident
• Delete malware
• Disable breached accounts
• Restore systems to normal operation

Following the containment of an incident, the eradication phase may become necessary to eliminate various components associated with the incident. This phase involves tasks such as removing malware, disabling compromised user accounts, and addressing vulnerabilities that were exploited during the incident. Eradication also entails identifying all affected hosts within the organization, ensuring that they are remediated appropriately. In some cases, eradication may not be required separately and can be integrated into the recovery phase. During the recovery phase, system administrators work to restore systems to their normal operation, verify their proper functioning, and, if applicable, take measures to address vulnerabilities and prevent similar incidents. Recovery activities may encompass actions like restoring systems from clean backups, rebuilding systems from scratch, replacing compromised files with clean versions, installing patches, changing passwords, and enhancing network perimeter security through means such as firewall rulesets and boundary router access control lists. Additionally, heightened levels of system logging and network monitoring are often integrated into the recovery process. It is important to conduct eradication and recovery in a phased approach, prioritizing remediation steps based on their significance. In cases of large-scale incidents, the recovery process may span several months. During the initial phases, the focus should be on implementing relatively quick (taking days to weeks) high-value changes aimed at enhancing overall security and preventing future incidents. As the recovery process progresses, later phases can shift attention to longer-term changes, such as infrastructure adjustments, and ongoing efforts to maintain the enterprise's security at an optimal level.

Recovery
One of the most important parts, and often the most omitted:
• Learning and improving
• What went well / what didn't go well?
• How well did staff perform?
• What could have been done differently?
• Are additional tools needed?

One of the often-overlooked yet critical aspects of incident response is continuous learning and improvement. Every incident response team should adapt to evolving threats, advancements in technology, and the valuable lessons learned from each incident. A pivotal component of this process involves conducting "lessons learned" meetings, particularly after significant incidents and, optionally, following smaller incidents when resources allow. These meetings serve as a valuable tool for enhancing security measures and refining incident handling procedures. Multiple incidents can be addressed within a single lessons learned meeting. This forum offers an opportunity to gain a comprehensive understanding of an incident by reviewing its timeline, the actions taken to mitigate it, and the effectiveness of those interventions. It is advisable to schedule the meeting shortly after the conclusion of the incident. During these sessions, various topics should be addressed, including:
• A detailed account of what transpired, including specific times.
• An evaluation of the performance of staff and management in responding to the incident, encompassing adherence to documented procedures and their sufficiency.
• Identification of any crucial information that should have been acquired sooner.
• An assessment of any actions taken that may have hindered the recovery process.
• Deliberation on what changes the staff and management would implement when dealing with a similar incident in the future.
• Consideration of how information sharing with other organizations could be enhanced.
• Exploration of corrective actions that can prevent the recurrence of similar incidents.
• Examination of potential precursors or indicators to monitor for detecting similar incidents in the future.

These lessons learned meetings serve as a valuable feedback mechanism to enhance incident response capabilities and overall security posture.

Isolation
• Admissibility in court: the goal when collecting evidence is to "do no harm" to the investigation
• List of evidence
• Chain of custody: forms designed to show that no improper tampering occurred

When considering the use of a device for evidence collection, it's crucial to assess the admissibility of the evidence it gathers. For instance, devices that could potentially serve as sources of evidence should not be utilized to record additional evidence. NIST SP 800-86, titled "Guide to Integrating Forensic Techniques Into Incident Response," offers comprehensive guidance on establishing a forensic capability, encompassing the formulation of policies and procedures. While the primary objective of gathering evidence during an incident is to resolve the incident itself, there may also be a need for it in legal proceedings. In such instances, meticulous documentation of the preservation of all evidence, including compromised systems, is essential. Evidence collection should adhere to procedures aligned with applicable laws and regulations. These procedures should be developed in consultation with legal professionals and relevant law enforcement agencies to ensure the admissibility of the evidence in court. Moreover, a stringent chain of custody process should be in place for evidence. Whenever evidence is transferred from one person to another, detailed chain of custody forms should record the transfer, accompanied by the signatures of each involved party. A comprehensive log should maintain records for all evidence, including: ● ● ● ●

Identifying information (e.g., location, serial number, model number, hostname, media access control (MAC) addresses, and IP addresses of a computer). Name, title, and contact information of each individual responsible for collecting or handling the evidence during the investigation. Date and time (including time zone) of each instance of evidence handling. Locations where the evidence was stored.
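To make the structure of such a log concrete, the following minimal Python sketch models one chain-of-custody entry as a data structure. The field names are illustrative assumptions, not a prescribed or standard format.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class EvidenceLogEntry:
    # Identifying information for the item of evidence
    location: str
    serial_number: str
    model_number: str
    hostname: str
    mac_address: str
    ip_address: str
    # Person responsible for collecting or handling the evidence
    handler_name: str
    handler_title: str
    handler_contact: str
    # When the evidence was handled (stored in UTC, with time zone)
    handled_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    # Where the evidence has been stored
    storage_locations: List[str] = field(default_factory=list)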


Collecting evidence from computing resources presents specific challenges. Ideally, evidence should be acquired from the targeted system as soon as there is suspicion of an incident. Many incidents trigger a dynamic sequence of events, and an initial system snapshot can be instrumental in identifying the issue and its source. From an evidentiary perspective, it is preferable to capture a snapshot of the system in its current state, rather than after various individuals, such as incident handlers and system administrators, have inadvertently altered the machine's state during the investigation.

Evidence Acquisition
Evidence gathering serves several functions. The evidence can be used to prevent future events or to make future incident recovery easier and faster. It may also be used in internal disciplinary proceedings or to pursue legal action against an individual or another company.

Chain of Custody
The chain of custody is a document that specifies how evidence is handled throughout the evidence collection lifecycle. This document is essential when pursuing legal action against someone or reporting potential offenses to the authorities.

Lessons Learned and Post-Incident
Your task isn't finished yet. Following the resolution of the event, the outcomes must be documented and examined. Many organizations conduct a postmortem in which they discuss the incident, its cause, and the impact it had on the organization. Based on this data, the postmortem team compiles a report detailing what was learned throughout the incident response process and what steps should be taken to avoid similar issues.

Root Cause Analysis
The determination of the root cause is a vital component of the post-incident procedure. The incident is investigated to find the root cause, which can then be used to determine what lessons were learned from the incident and how to avoid such situations in the future.


Practice Questions

QUESTION NO: 1 _____ leaves the job of securing a resource to the user rather than the administrator. A. RBAC B. MAC C. DAC D. Nondiscretionary access control

QUESTION NO: 2 A policy is used in systems that use ____ to determine which accounts have access to which resources. A. RBAC B. MAC C. DAC D. nondiscretionary access control

QUESTION NO: 3 A _____is a list of certificates that the CA no longer considers valid. A. SSO B. PKI C. MFA D. CRL


QUESTION NO: 4 Users are assigned a role in _____, which defines what they can do. A. nondiscretionary access control B. MAC C. RBAC D. DAC

QUESTION NO: 5 LDAP is used by which of the following software components? A. Open Directory Service B. Standard Directory Service C. Red Hat Directory Service D. Microsoft's Active Directory

QUESTION NO: 6 Which of the following MFA categories is correct? A. Something that the user can calculate B. Something that the user knows C. Something that the user can acquire D. Something that the user has

QUESTION NO: 7 A ____ firewall does not take into account any connections or communications established within the organization. A. subtle B. ethereal C. stateless D. static


QUESTION NO: 8 A packet ____ will receive network traffic, analyze it, and manage the traffic flow. A. repeater B. router C. broker D. switch

QUESTION NO: 9 A ______ proxy is designed to work on the client side of the communication. A. reverse B. tunneling C. range D. forward

QUESTION NO: 10 Which of the following technologies makes use of insecure protocols or ciphers? A. TLS B. IMAP C. 3DES D. SSH

QUESTION NO: 11 Which of the following solutions is more secure than standard DNS? A. DoG B. DoE C. DoH D. DoT


QUESTION NO: 12 Which of the following terms describes when a bogus DNS server returns incorrect data in response to a query? A. DNS pirate B. DNS hoax C. DNS cache poisoning D. DNS spoofing

QUESTION NO: 13 A ____ release is one that is ready for use in a production environment. A. LTS B. canary C. stable D. beta

QUESTION NO: 14 Directory services, which are typically found on Linux-based systems, use ____ to store enterprise-wide information such as user account data. A. ASET B. DONE C. LDAP D. DIRSERV


QUESTION NO: 15 What are the protocols associated with federation identity management? A. OpenLDAP B. SAML2 C. LDAP D. OAuth2

QUESTION NO: 16 A(n) _____ and _____ provide a way to grant user accounts access to cloud resources. A. permission B. type C. role D. privilege

QUESTION NO: 17 Which of the following are account lifecycle management stages? A. Deactivation B. Logging C. Creation D. Auditing

QUESTION NO: 18 IPsec operates at which layer of the OSI model? A. Layer 1 B. Layer 3 C. Layer 4 D. Layer 2


QUESTION NO: 19 Which company invented generic routing encapsulation (GRE)? A. Cisco B. Google C. IBM D. Microsoft

QUESTION NO: 20 Which of the following best describes the L2TP encryption key size? A. 1024-bit B. 128-bit C. 256-bit D. 512-bit

QUESTION NO: 21 Which network segmentation technology supports up to 16 million network segments? A. VxLAN B. XVLAN C. VLAN D. VLANx

QUESTION NO: 22 Which of the following network segmentation techniques is most commonly used? A. LANX B. VLAN C. GENEVE D. all the above


QUESTION NO: 23 A ____ is a system that monitors the state of the system to determine if an intrusion has occurred but does not take any action if an intrusion is detected. A. HIDS B. EDR C. Antivirus D. HIPS

QUESTION NO: 24 Prior to a software build becoming generally available (GA), earlier releases are referred to as ____ builds. A. beta B. canary C. stable D. LTS

QUESTION NO: 25 In relation to security, to access cloud resources, a user must have the right _____ for that resource. A. name B. understanding C. permissions D. location


Answers to Practice Questions

1. Answer: C Explanation: Discretionary access control (DAC) delegates the task of securing a resource to the user rather than the administrator. Users are assigned a role in role-based access control (also known as RBAC) to define what they are able to do. Nondiscretionary access control entails creating rules that govern which user accounts have access to resources, such as the time of day. A policy is used in MAC systems to determine which accounts have access to which resources.

2. Answer: B Explanation: A policy is used in MAC systems to determine which accounts have access to which resources. Users are assigned a role in role-based access control (also known as RBAC) to define what they are able to do. Discretionary access control (DAC) delegates the task of securing a resource to the user rather than the administrator. Nondiscretionary access control entails creating rules that govern which user accounts have access to resources, such as the time of day.

3. Answer: D Explanation: A CRL (certificate revocation list) is a list of certificates that the CA believes are no longer valid. SSO is a technique in which, after logging in to an environment, the user is not prompted to provide account credentials to access other components in the environment until a timeout period has passed. MFA is a method of authenticating a user that requires more than one method of verifying a user's identity. PKI is a standard that governs the creation, revocation, management, storage, use, and distribution of digital certificates.

4. Answer: C Explanation: Users are assigned a role in role-based access control (also known as RBAC) to define what they are able to do. Discretionary access control (DAC) delegates the task of securing a resource to the user rather than the administrator. Nondiscretionary access control entails creating rules that govern which user accounts have access to resources, such as the time of day. A policy is used in MAC systems to determine which accounts have access to which resources.

5. Answer: C,D Explanation: OpenLDAP and Red Hat Directory Service are two common LDAP-based directory services on Linux systems. LDAP is also an important component of Microsoft's Active Directory (AD) software.


6. Answer: B,D Explanation: The "factor" in multifactor authentication, or MFA, refers to the additional item needed to authenticate the user. This factor can fall into one of three categories: something the user owns, something the user knows, or something the user is.

7. Answer: C Explanation: A stateless firewall does not take into account any connections or communications established within the organization. For communication to enter the network, a stateless firewall must have all ports manually configured to be "unblocked." The other options are not legitimate firewall types. A stateful firewall dynamically opens and closes allowed ports as requested by applications; for example, a client sending a request to a webserver on port HTTP TCP 80 and receiving a response on a different port is allowed by a stateful firewall, whereas it is dropped by a stateless firewall.

8. Answer: C Explanation: A packet broker receives network traffic, analyzes it, and manages the traffic flow. It can act as a filter while also monitoring the network and giving administrators a unified view of network traffic. Because they do not provide all of these functions, the other answers are incorrect.

9. Answer: D Explanation: A forward proxy is intended to be used on the client side of a communication. A tunneling proxy is intended to serve as a bridge between two networks. A reverse proxy server is one that is set up on the server. No such thing as a range proxy exists.

10. Answer: B,C Explanation: FTP, Telnet, POP3, IMAP, SNMP v1 and v2, RC2, RC4, MD5, 3DES, and DES are a few examples of commonly used weak protocols and ciphers.

11. Answer: C,D Explanation: DNS data is typically transmitted across the network in plaintext. This poses a security risk because the user's privacy may be jeopardized. It is also more vulnerable to a man-in-the-middle attack, in which a rogue DNS server replaces the DNS server's results. DNS over HTTPS (DoH) and DNS over TLS (DoT) are more secure options. Both methods encrypt the DNS traffic on the network.


12. Answer: C,D Explanation: A bad actor can use DNS spoofing or DNS cache poisoning to use a fake DNS server to provide incorrect data when a query is performed. The other options are invalid terms.

13. Answer: C Explanation: A stable release is one that is ready for use in a production environment. A canary release distributes new features to a select group of testers to determine whether the new features have any negative impact on the software. An LTS build is one that should be supported for a longer period of time than usual. Prior to the release of a software build, earlier releases are referred to as beta builds.

14. Answer: C Explanation: LDAP is a protocol used by directory services, which are commonly found on Linux-based systems, to store enterprise-wide data such as user account data. The other options are not legitimate directory services.

15. Answer: B,D Explanation: SAML1.1, SAML2, OAuth2, OpenID Connect, WS-Trust, and WS-Federation are all protocols associated with federation identity management.

16. Answer: A,C Explanation: A role is a method of granting access to cloud resources to user accounts. A permission can also be linked to a user account in order to grant access to cloud resources. The other terms have nothing to do with cloud resources.

17. Answer: A,C Explanation: In order for an account to be created, specific procedures must be followed. When a person no longer requires an account—for example, when the person leaves the organization—a procedure for deactivating or deleting the account must be in place. This is dealt with during the deactivation stage. While auditing an account and logging account activity are important tasks, they do not fall under the purview of the lifecycle management process.

18. Answer: B Explanation: IPsec operates on the OSI model's layer 3 (the Network layer).


19. Answer: A Explanation: Cisco created another protocol known as generic routing encapsulation (GRE), which is frequently used to create VPN tunnels across networks.

20. Answer: C Explanation: L2TP makes use of 256-bit encryption keys.

21. Answer: A Explanation: VxLANs were created to address scaling concerns in large-scale deployments (particularly cloud environments and ISPs). VxLANs are similar to VLANs, but there are some differences, such as more network segments available (up to 16 million), whereas a traditional network is limited to approximately 4,000 VLANs. The remaining options are not valid network segmentation acronyms.

22. Answer: B,C Explanation: Three popular network segmentation techniques are VLAN, VXLAN, and GENEVE. The other answers are not valid network segmentation acronyms.

23. Answer: A Explanation: A host-based intrusion detection system (HIDS) is installed on specific systems and monitors the state of the system to determine if an intrusion has occurred. Many businesses make use of EDR software. This software monitors and logs data from endpoints (individual systems) into a centralized database. A HIPS (host-based intrusion protection system) both monitors for intrusions and can respond if one is detected. An antivirus program searches a system for viruses and other malware.

24. Answer: A Explanation: Prior to the release of a software build, earlier releases are referred to as beta builds. An LTS build is one that will be supported for a longer period of time than usual. A canary release distributes new features to a select group of testers to determine whether the new features have any negative impact on the software. A stable release is one that is ready for use in a production environment.


25. Answer: C Explanation: A user must have the appropriate permissions to access cloud resources. These permissions can often be applied to specific resources or types of cloud resources. For example, you could grant a user the ability to read data from any database while also granting the user the ability to modify specific databases.


DOMAIN - 3 Deployment

3.1 Incorporate elements into a cloud-based solution based on a given situation

3.1.1 Subscription services
• Cloud subscriptions – payment structure for cloud services
• Commit to a monthly or annual basis – often more discounts for a longer commitment.

Customers usually enter monthly or yearly service commitments. Subscription-based pricing is becoming more prevalent in cloud computing. Under this model, cloud users typically pay in advance before gaining access to cloud services. For those who consume substantial cloud resources, a subscription-based approach can be advantageous. However, for users with minimal computing needs, this pricing model may not be the most suitable option. Certain cloud providers offer subscription-based plans that can adapt to match actual usage. Details of subscription pricing are outlined in the provider's service-level agreement (SLA).

File subscriptions File subscriptions represent a feature of a service that enables both your employees and clients to store files (data) in the cloud. This service falls under the categories of Software as a Service (SaaS) and Storage as a Service (STaaS). File subscriptions are a widely used cloud service, offering a multitude of options. Prominent examples include Google Drive, Microsoft OneDrive, and Dropbox, among others.

Communications While many people use cell phones to make and receive phone calls, conventional phone calls were made using land-line devices and sent via the public switched telephone network (PSTN). This technology, which dates to the 1800s and is still widely used today, employs copper wires. However, this mode of communication has limits (analog voice only) and necessitates a massive infrastructure that does not exist everywhere in the world.

Email You have almost certainly utilized a cloud-based email service. At the time of writing, there were around 4 billion email subscribers worldwide, with over 1.8 billion of them using Gmail, one of numerous cloudbased email services. That means that roughly 45 percent of all email users worldwide utilize one of the


numerous cloud-based email providers. Outlook, Zoho Mail, and Yahoo! Mail are some other popular cloud-based email providers.

Voice over IP (VoIP) A more current method is to employ a communication option known as voice over IP (VoIP), which is provided by businesses such as cloud suppliers. Phone calls are made using VoIP over an Internet broadband connection. This link necessitates the conversion of sound into a digital signal. VoIP service is provided by cloud companies such as RingCentral, Vonage, and Microsoft 365 Business Voice. This service is classified as SaaS; however, it is also classified as Communication as a Service (CaaS).

Messaging Collaboration services provide numerous functionalities that empower organizations to work together effectively on problem-solving, product development, and various business-related activities. Many of these applications incorporate messaging features for communication with individuals or groups. Additionally, these services may encompass video conferencing, forums, file sharing capabilities, and project management tools. Cloud service models are characterized by the phrase as a service and are accessed by many types of devices, including web browsers, thin clients, and mobile devices. There are three primary service types. Software as a Service, Infrastructure as a Service, and Platform as a Service are the core service offerings. Many cloud service providers offer more descriptive terms in their marketing and sales offerings, including Communications as a Service, Anything as a Service, Desktop as a Service, and Business Process as a Service. However, all of these newer terms fit into either the SaaS, IaaS, or PaaS service model.

Know the primary cloud delivery models. The four primary cloud delivery models are public, private, community, and hybrid clouds. Know what each one is and its function. It is critical that you understand the way cloud services are delivered in the market today and what they offer.

Collaboration
• Better collaboration remotely because documents are more accessible.
• Cuts down on email traffic and provides a document repository.
• Data can be off premises and outside the firewall, accessible anywhere there is an Internet connection.

Cloud collaboration facilitates concurrent work on documents stored in the cloud, granting access from anywhere with an internet connection. The process typically commences with one user creating a file or document and subsequently sharing access with other team members. Cloud collaboration, categorized under enterprise collaboration, empowers employees to collaborate on documents and various data types, all stored off-premises and beyond the company's firewall. Workers utilize a cloud-based collaboration platform to collaborate, edit, and engage in joint projects. This approach allows two or more individuals to work on a project simultaneously.


A cloud collaboration project begins when one user creates the file or document and then gives access to certain individuals; for example, the project creator may share a link to the project that allows others to view and edit it. Users can make changes to the document at any time, including when employees are viewing and working simultaneously. All changes are saved and synced so every user sees the same version of the project. Cloud collaboration has become more sophisticated as vendors try to attract customers. It is common for a collaboration tool to include a way for project participants to see who else is viewing the document and to communicate with each other while working, such as through comments or a chat function. Tools also often allow users to set up email alerts for when a file is changed. These and other functions help increase worker efficiency and productivity.

Virtual desktop infrastructure (VDI)
• VDI – Virtual Desktop Infrastructure
• Hosting desktop environments on a central server
• Persistent vs. non-persistent VDI

Virtual desktop infrastructure (VDI) entails the hosting of desktop environments on a central server, representing a variant of desktop virtualization. Here, specific desktop images operate within virtual machines (VMs) and are transmitted to end clients via a network. The virtual desktops reside within VMs on a centralized server, with each virtual desktop featuring an operating system image, commonly Microsoft Windows. These VMs are host-based, enabling multiple instances of them to coexist on a single server within the data center. For continual access to the virtualized desktops, end clients must maintain a constant connection to the centrally managed server. The VDI implementation employs a connection broker to locate a virtual desktop within the resource pool for each client to access upon successfully entering the VDI environment. Simultaneously, a hypervisor assumes responsibility for generating, running, and managing the various host machine VMs housing the individual virtual desktop environments.


Figure 27 Virtual Desktop

Directory and identity services
• Directory services – AD DS, AD FS
• Active Directory – can be cloud native or built on top of an existing AD
• Group Policy allows for rapid standardization

AWS Directory Service for Microsoft Active Directory, also referred to as AWS Managed Microsoft Active Directory (AD), empowers your directory-dependent workloads and AWS resources to utilize a managed Active Directory (AD) within AWS. AWS Managed Microsoft AD is constructed upon authentic Microsoft AD and eliminates the necessity for you to synchronize or replicate data from your existing Active Directory to the cloud. You can employ standard AD administrative tools and leverage the inherent AD functionalities, including Group Policy and single sign-on. With AWS Managed Microsoft AD, it's effortless to incorporate Amazon EC2 and Amazon RDS for SQL Server instances into your domain and make use of AWS End User Computing (EUC) services, such as Amazon WorkSpaces, in conjunction with AD users and groups..

Cloud resources
A cloud resource is a broad phrase that refers to any type of cloud system. A resource could be anything from a virtual machine to an email service, a firewall, or a database. Put simply, when something resides in the cloud, it is categorized as a cloud resource.

IaaS
A cloud provider offers the foundational infrastructure for deploying your software solution in an IaaS environment. In this scenario, the cloud provider furnishes the hardware components, such as computing power, networking, and storage, while you retain control over other aspects, including the operating


system and software. Instances of IaaS services include offerings like Azure virtual machines and AWS EC2 instances. The pivotal benefit of an IaaS software solution is the level of control it affords. You have the freedom to select the platform, allocate hardware resources, and configure the system. However, this control can also be seen as a drawback, as it entails the responsibility of maintaining the operating system and software.

PaaS
In a PaaS (Platform as a Service) solution, the cloud provider delivers a platform that enables you to either install or develop software. OpenShift, AWS Elastic Beanstalk, and Google App Engine are all examples of PaaS. The fundamental benefit of using a PaaS solution is that you may deploy a custom software solution without having to manage the platform on which the software operates. With a PaaS solution, as with a SaaS solution, you may still have concerns about data control and potential vendor lock-in. However, because PaaS provides greater control over these areas, the concern isn't as high as it is with the SaaS option.

SaaS
In a SaaS (Software as a Service) solution, the cloud provider hosts and manages the entire application. Examples of SaaS include Salesforce, Dropbox, Gmail, Webex, and DocuSign. One benefit of SaaS systems is that the provider takes care of all software maintenance. However, drawbacks include limited control over your data, the inability to customize the software to your organization's requirements, and the potential for vendor lock-in, making it challenging to switch to an alternative solution.

3.1.2 Provisioning resources
When you, as a cloud customer/consumer, create a resource in a cloud provider's environment, this is referred to as provisioning a resource. The cloud offers a wide range of resource types; compute resources, storage resources, network resources, and applications are all examples. Resources can be provisioned manually or automatically as needed. The remainder of this chapter concentrates on the various kinds of cloud resources that can be provisioned, as well as how they can be provisioned.

Compute
A cloud compute resource is any resource whose primary purpose is to perform some type of calculation. This usually refers to virtual machines and containers, both of which are discussed in further depth later in this chapter. Note that cloud computing is a more general phrase that refers to the utilization of cloud resources to accomplish tasks; the terms "cloud compute resource" and "cloud computing" should not be used interchangeably.


Storage
A storage resource is anything that is utilized to store data in the cloud. There are three kinds of cloud storage resources:

Block storage: This sort of storage resource is generally used by compute resources (virtual machines in particular) to store data, including the compute resource's operating system. If you are used to non-cloud storage devices, think of a block storage resource as a hard disk or a partition. AWS Elastic Block Store (EBS), Azure managed disks, and Google Cloud Persistent Disk are a few examples. Block storage is often raw storage space that requires a file system to be installed on the storage resource before it can be used; this is common during the virtual machine installation procedure.

File storage: This storage resource functions similarly to a network file system (also referred to as network-attached storage, or NAS). This form of storage is used when you need to share files with multiple resources; for example, two virtual machines may require access to the same files. AWS Elastic File System (EFS), Azure Files, and Google Cloud Filestore are all examples of file storage.

Object storage: Object storage allows objects (unstructured data such as emails, videos, pictures, text, or any other type of data) to be saved in the cloud. Rather than using typical file system storage characteristics, object storage organizes data into "groups" (similar to a folder in a file system). Data is often accessed using a URL, similar to how you would access a web page. Object storage is durable and highly available, supports encryption, and can be used in a variety of ways to support backup and archiving functionality. AWS Simple Storage Service (S3), Google Cloud Storage, and IBM Cloud Object Storage are a few examples.

Network
Cloud resources, like systems in your own IT infrastructure, require network settings so that the resource can communicate with other resources and with systems on the Internet. However, provisioning cloud networks differs from what you may be used to in your on-premises IT architecture. The network in a typical on-premises IT architecture consists of physical network connections as well as specific network configurations on each machine within the network. In a cloud environment, the physical network is already in place, having been deployed by the cloud provider. However, because numerous companies may share the same physical network, you must establish your own private network within that physical network (such as a VPC in AWS and GCP, or a VNet in Azure).
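As a purely illustrative sketch (using AWS and the boto3 SDK as one example; the CIDR ranges and region are placeholder assumptions), provisioning such a private network can be as simple as creating a VPC and a subnet inside it:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create a private network (VPC) inside the provider's physical network.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out a subnet for instances to live in.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("VPC:", vpc_id, "Subnet:", subnet["Subnet"]["SubnetId"])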

3.1.3 Application
Applications are another frequent type of cloud resource. Software that is executing in the cloud is referred to as an application resource. Because different scenarios may require more hardware resources (RAM, storage, and so on), the cloud is a good place to host applications, since resources can be allocated on the fly.

Serverless
Serverless applications are event-driven cloud-based systems where application development relies solely on a combination of third-party services, client-side logic, and cloud-hosted remote


procedure calls (Functions as a Service). "Serverless" is a cloud computing execution model in which the cloud provider dynamically handles server allocation and provisioning. Serverless applications run in stateless compute containers triggered by events, are ephemeral (often lasting for just one invocation), and are fully managed by the cloud provider. Pricing is determined by the number of executions rather than pre-purchased compute capacity. So, if you've been planning that project for a while, serverless might be the ideal framework for it. Go ahead and give it a try!
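To make the execution model concrete, the following is a minimal, illustrative Python function handler in the style used by Functions-as-a-Service platforms such as AWS Lambda. The "name" event field is an assumed example, not part of any fixed schema:

import json

def lambda_handler(event, context):
    # The platform invokes this function in response to an event (an HTTP
    # request, a queue message, a file upload, and so on), provisions the
    # compute for the duration of the call, and then tears it down.
    name = event.get("name", "world")  # "name" is an illustrative field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }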

3.1.4 Deploying virtual machines (VMs) and custom images
• Virtual machines – VMs, Amazon EC2, Google Compute Engine
• Custom images – images that you download and customize
• OS templates – publicly available images
• Solution templates – marketplace-based solutions

A VM consists of the following components:
• A virtual machine monitor (VMM)
• The host operating system
• Shared hardware

Most cloud companies supply prebuilt images to make deploying a virtual machine as simple as possible. An image represents the operating system in its raw form. When you use an image to launch a virtual machine, the resulting running operating system is known as an instance. In some scenarios, the cloud vendor's prebuilt images may not meet your exact requirements. In those circumstances, you can use the cloud vendor's capabilities to create your own custom image. Usually, this involves starting from an existing image, setting up the virtual machine, tailoring the operating system (which includes adding customized applications), and then converting the virtual machine into an image that can be used at a later time, as shown in the sketch after the image types below.

Images
Utilize operating system images to generate boot disks for your instances. You can opt for one of the following image types:
1. Public images: These are supplied and managed by Google, open-source communities, and third-party vendors. By default, all Google Cloud projects have access to these images and can utilize them for creating instances.
2. Custom images: These are exclusively available to your Cloud project. You can fashion a custom image from boot disks and other images, and then employ the custom image for generating an instance.
Most public images can be used without any extra charges. However, there are a few premium images that may incur additional costs for your instances. Importing custom images into Compute Engine doesn't impose any extra cost on your instances but does involve an image storage charge while the custom image is retained in your project.
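As an illustration only (here using the AWS SDK for Python, boto3, as one example of this workflow), the sketch below turns an already-configured instance into a reusable custom image. The instance ID and image name are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create a custom image (AMI) from a configured instance.
# The instance ID and names below are placeholders, not real resources.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="web-server-golden-image",
    Description="Base OS plus our customized applications",
)
print("New image ID:", response["ImageId"])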


3.1.5 Templates
When setting up a virtual machine or other cloud resources such as a database, there are typically numerous questions that need to be answered during installation. Templates are files designed to provide automated responses to these questions. Cloud templates are often written in a user-friendly data format; most commonly, they adhere to industry-standard data formats such as JSON, YAML, and XML.

OS templates
A virtual machine is deployed using an OS template. The questions you are asked may differ. For example, in AWS, you might be asked for (a minimal provisioning sketch follows the list):
⮚ The operating system's image type.
⮚ The instance type, which is AWS's way of specifying the hardware characteristics of the virtual machine (including details like RAM, virtual CPUs, and more).
⮚ Additional instance details, such as the number of instances and the virtual network to which the instance(s) should be assigned.
⮚ The tags attached to the instance. In AWS, tags are used to categorize instances based on identifiers that you create.
⮚ The security group with which the instance is affiliated. The security group in AWS serves as a virtual firewall.
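To make those questions concrete, here is a minimal, illustrative boto3 sketch that answers several of them (image, instance type, count, tags, and security group) while launching an instance. Every identifier shown is a hypothetical placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # operating system image
    InstanceType="t3.micro",                    # hardware characteristics (vCPUs, RAM)
    MinCount=1, MaxCount=1,                     # number of instances
    SecurityGroupIds=["sg-0123456789abcdef0"],  # virtual firewall
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "dev"}],
    }],
)
print("Launched:", result["Instances"][0]["InstanceId"])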

Solution templates
A solution template is intended for the cloud deployment of a non-OS resource. This is a broad category that includes resources such as databases, web servers, and virtual networks.

3.1.6 Identity management
• Strength of identity – i.e., certainty of validating the correct user
• Federated identity management systems are widely used with SaaS services
• Many IaaS and SaaS providers offer identity management

Important points to consider are:
● The strength of identity that can be verified/validated helps in risk calculation when interacting with the persona.
● The attributes of a persona, as with identity, have a strength by which an attribute can be verified/validated.
● Identity and attributes should be consumed from multiple sources.
● There will be instances when a transient identity is sufficient.
● There will be instances in which pseudo-anonymity is desirable (such as voting).

Some general uses of federation:
● Widely used with SaaS services


● PaaS depends a lot on the provider
● Many companies design their own federated service
● There are many IaaS providers who can be leveraged

3.1.7 Containers
• What is a container?
• Docker Engine
• Kubernetes
• Configure variables/secrets/storage

A container serves as a standardized software unit that bundles together code and its dependencies, enabling the application to operate consistently and efficiently across various computing environments. In the context of Docker, a container image is a compact, self-sufficient package of software that contains everything essential for running an application: code, runtime components, system tools, libraries, and configurations. Container images become active containers at runtime; in the case of Docker, images are instantiated when they are executed on Docker Engine. Whether deployed on Linux or Windows-based systems, containerized software maintains consistent behavior, unaffected by variations in the underlying infrastructure. Containers provide a protective boundary around software, guaranteeing consistent functionality even when transitioning from one environment, such as development, to another, like staging. Containers that operate on Docker Engine possess the following attributes:

1. Standard: Docker has established the industry benchmark for containers, ensuring their portability across diverse environments.
2. Lightweight: Containers leverage the host machine's OS kernel, eliminating the need for a separate OS for each application. This enhances server efficiency and leads to reduced server and licensing expenses.
3. Secure: Applications within containers benefit from enhanced security, with Docker offering robust default isolation features, setting a high industry standard for safeguarding applications.

Configure variables
When you utilize a template, you can alter it by using variables. When delivering a resource using a template, you use a variable to provide options.

Persistent storage
Remember from the "Storage" section earlier in this chapter that there are three types of cloud storage: block, file, and object storage. When deploying a virtual machine, the operating system is stored on a block storage resource. This storage device is persistent, which means that even if you turn off the virtual


computer, the data remains on the storage device. For additional storage resources, most cloud companies offer nonpersistent storage. AWS, for example, has a feature known as instance storage. This block storage resource is available while the virtual machine is running; the instance store is erased when the virtual machine is shut off, making it a nonpersistent storage solution.
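Tying the container discussion together, the sketch below uses the Docker SDK for Python to run a container with a configuration variable and a persistent volume mounted from the host. The image name, environment variable, container name, and paths are illustrative assumptions:

import docker  # the Docker SDK for Python (pip install docker)

client = docker.from_env()

# Run a container from a public image, passing a configuration variable and
# mounting host storage so data persists beyond the container's lifetime.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    name="demo-web",                                   # placeholder name
    environment={"APP_ENV": "dev"},                    # configure variables
    volumes={"/srv/app-data": {"bind": "/data", "mode": "rw"}},  # persistent storage
)
print("Started container:", container.name)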

3.1.8 Auto-scaling
• Automatic scaling – add more resources to the pool based on load
• Can be based on health checks, CPU utilization, etc.

Auto scaling, sometimes spelled as auto-scaling, and occasionally referred to as automatic scaling, is a cloud computing technique that adjusts the quantity of computational resources within a server farm. This adjustment is typically measured by the number of active servers and is done automatically in response to changes in the farm's workload. It’s easy to get started with AWS Auto Scaling using the AWS Management Console, Command Line Interface (CLI), or SDK. AWS Auto Scaling is available at no additional charge. You are charged solely for the AWS resources necessary to operate your applications, as well as any Amazon CloudWatch monitoring expenses.
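As a hedged illustration of how such a policy can be expressed (using AWS Auto Scaling via boto3 as one example; the group and policy names are placeholders), the sketch below attaches a target-tracking policy that adds or removes instances to keep average CPU near 50 percent:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # region assumed

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",        # placeholder group name
    PolicyName="keep-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)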

3.1.9 Post-deployment validation
• Deployment fails to create resources
• Failed software installation
• Firewall rules configured incorrectly

If a deployment fails, check your instances or deployments to identify the problem. Go to the Deployment Manager to see detailed messages that indicate what caused the deployment to fail. A deployment could fail to complete successfully for any of the following reasons (a simple validation sketch follows the list):
• The deployment manager encounters issues while generating essential resources, such as an instance, disk, or load-balancing forwarding rule.
• The software installation and configuration process encounters errors.
• The deployment fails to configure the firewall rules correctly within the Compute Engine network.
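Basic post-deployment validation can also be automated. The following illustrative Python sketch checks whether a newly deployed web endpoint answers on the expected port and returns HTTP 200; the hostname and port are placeholder values:

import socket
import urllib.request

HOST = "app.example.com"   # placeholder endpoint for the new deployment
PORT = 443

def port_open(host, port, timeout=5.0):
    # A closed port often points at a missing resource or an incorrect firewall rule.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def returns_ok(url, timeout=5.0):
    # A reachable port but a failing HTTP response suggests a software or configuration error.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

if __name__ == "__main__":
    print("Port reachable:", port_open(HOST, PORT))
    print("HTTP 200:", returns_ok(f"https://{HOST}/"))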


3.2 Given a scenario, provision storage in cloud environments.

3.2.1 Types
Block
• SAN – storage area network: a high-speed, redundant network dedicated to storage traffic
• LUN – logical unit number: a unique value that represents a remote storage system such as a hard disk or an array of storage devices
• HBA – host bus adapter: an interface card that connects a server to the SAN
• Zoning – configured together with LUNs so that servers are restricted to specific storage devices

Storage is a core infrastructure component found in any cloud datacenter. To get started, let's investigate common storage systems and how they attach and are accessed, both in the cloud and locally. After we review the basics of storage, we will look at SANs and the technologies that make up the storage area networks found in every large datacenter. In large computing environments, storage systems are enclosed in their own racks in a datacenter and attached to dedicated networks that are separate from the Ethernet LANs we are familiar with. A storage area network (SAN) is an extremely fast, exceptionally redundant network that is exclusively designated for storage traffic. When a server accesses storage over a SAN, the SAN must be completely lossless and highly available. The most common dedicated storage networking technology is Fiber Channel. Typically, you configure a SAN_A and a SAN_B network for redundancy; the two networks operate in parallel to each other and are not interconnected. This architecture offers two paths from the server to the storage arrays, with a primary network and a standby ready to take over should the primary fail.

Block-Level Access
Block-level SAN storage is often found in larger computing environments such as the cloud. With block-level access, a server reads and writes raw data directly to a hard disk. When a SAN is used, block data is transported across the network to the host bus adapter on the server. Protocols such as Fiber Channel, Fiber Channel over Ethernet (FCoE), or IP-based SCSI networks (commonly known as iSCSI) are used to make the block storage visible to the server.

Shared Storage
SANs are networked storage and are shared by many devices accessing the data on the drive arrays inside a storage system. By using various storage networking protocols and access arrangements, the stored data can be protected and still be shared in the cloud, even when multiple clients are accessing the same storage hardware.

HBAs
The host bus adapter (HBA) is an interface card installed inside a server that connects it to the SAN. With a LAN Ethernet network, a NIC is installed to provide the connection to the Ethernet network from the server; an HBA provides a similar connection function for the SAN. When the HBA is configured, the server's operating system accesses the HBA for storage requests. It may not even be aware that there


is no storage attached to the platform and that all the storage is across the SAN, in large storage arrays in the cloud datacenter.

LUN Masking
In a SAN, a logical unit number (LUN) is a unique value used to specify remote storage systems such as a hard disk or an array of storage devices. The remote storage devices are block-level arrays of disks. A LUN mask is employed to partition a SAN port, allowing the storage controller's SAN port to control a server's access to a specific storage area by confining it to designated LUNs. LUN masking is the process of making storage LUNs available for access by some host computers and restricted for others in the SAN. LUN masking is normally configured on the server's HBA or on the SAN switch and defines the LUN of the remote storage system. LUN masking can be thought of as a filter that permits or restricts storage visibility between the servers and the storage controllers.

Zoning
On a SAN, zoning is set up alongside LUNs in a way that restricts servers on the SAN to specific storage devices. In zoning, storage devices are known as logical units. A server can access storage resources only inside the zone in which it is configured. Grouping servers into zones enhances storage security. For example, in a datacenter, all servers that need access to a secure area can be assigned to the same zone that allows access, while all other servers, which are not members of that zone, can be restricted from accessing the storage area. Zoning can isolate or restrict a single server to a specific LUN, or many servers to a single LUN. On a SAN, when servers request a storage resource, they are commonly referred to as the initiators, and the storage devices are called the targets. When zoning is configured, it is a mapping of which initiators can access which targets. The SAN switches or fabrics are where the zoning configurations are applied.

Storage area network (SAN) – Zoning

File
Network attached storage (NAS)
• NAS – network-attached storage: file-level access to data across a network, rather than block-level access as in a SAN
• Data is not stored on a local computer, but over a LAN
• Files are sent over TCP/IP

Network-Attached Storage (NAS)
Network-attached storage (NAS) is file-level access to data across a network. For example, a file server sitting on an Ethernet-based LAN and hosting shared directories would be a type of NAS. In


a NAS configuration, files are sent over the network rather than blocks of data as in storage area networks (SANs). The data is not stored on a local computer, as with direct-attached storage, but over a LAN as network-attached storage. In a NAS arrangement, files are sent over the LAN using a protocol such as TCP/IP. The data structures are dependent on the operating system used. For example, with Microsoft Windows, the Common Internet File System (CIFS) protocol is used. With Linux, Network File System (NFS) is one of several file systems commonly implemented. File-level access is a common solution for sharing files between servers or users on the network.

Object
Tenants
A single-tenant solution is one in which a resource or infrastructure serves only one consumer. This is the most common form of tenancy in small to medium-sized businesses. Everyone in the organization shares the resources, but none of the resources are shared outside of the organization. A multitenant solution is one in which a resource or infrastructure serves numerous clients. These customers could be business units inside a larger firm or distinct entities. The main benefits of a single-tenant solution are that it is more secure and gives the organization more control over the cloud environment. However, a multitenant solution is often more cost effective (due to volume discounts) and allows for greater flexibility in integrating multiple business units or organizations.

Buckets
Files are grouped into folders (sometimes called directories) when using a file-based storage solution. Buckets are the term used to organize objects in object-based storage solutions. In other words, a bucket is the container in which your object data is "stored."

Object level storage tenants
Object storage is a computer data storage architecture that manages data as objects, as opposed to other storage architectures such as file systems, which manage data as a file hierarchy, and block storage, which manages data as blocks within sectors and tracks. Each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier. Object storage can be integrated at various tiers, including the device level (object-storage device), the system level, and the interface level. In each case, the objective of object storage is to provide functionality that is not covered by other storage frameworks, such as interfaces that can be directly customized by applications, a namespace that can span numerous physical hardware instances, and data-management features such as object-level data replication and distribution. Object storage systems enable the storage of vast quantities of unstructured data. Object storage finds applications in scenarios like storing images on Facebook, music files on Spotify, or documents within online collaboration platforms like Dropbox.


One of the design principles of object storage is to abstract some of the lower layers of storage away from the administrators and applications. Thus, data is exposed and managed as objects instead of files or blocks. Objects come with additional descriptive properties that enhance indexing and management. Administrators are relieved from performing lower-level storage tasks such as constructing and managing logical volumes or setting RAID levels for disk failure management. Object storage goes beyond addressing and identifying individual objects solely by file name and path. It introduces a unique identifier within a bucket or across the entire system, enabling much larger namespaces and eliminating name conflicts. In object-based storage, a tenant account allows clients using protocols like Simple Storage Service (S3) or Swift to store and retrieve objects from a Storage GRID Web-scale system. Each tenant account supports a single protocol, determined by the grid administrator during account creation.
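As an illustration of bucket-and-object access (using AWS S3 via boto3 as one example; the bucket and object names are placeholders), the sketch below stores and retrieves an object by key rather than by file system path:

import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # region is an assumption

BUCKET = "example-media-bucket"   # placeholder bucket name
KEY = "training/welcome.txt"      # an object key, not a file path

# Store an object (unstructured data plus user-defined metadata) in the bucket.
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b"hello object storage",
    Metadata={"department": "training"},
)

# Retrieve the same object by its key.
obj = s3.get_object(Bucket=BUCKET, Key=KEY)
print(obj["Body"].read())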

3.2.2 Tiers
Flash – sometimes called Tier 0 storage. Any storage that resides on a solid-state drive (SSD) is referred to as flash storage. These hardware devices are noted for being faster than other types of storage, such as spinning disks. This increased speed, however, comes at a steeper cost.

Hybrid – mixing flash and HDDs. A hybrid tier is one in which you use your own on-premises storage devices but supplement the available capacity with cloud-based storage if they fill up. Because you may buy only what you think you need, this method reduces the amount of money you have to spend on storage devices. If you make a mistake or there is an unexpected surge in demand for storage space, cloud-based storage is available at a "pay only for what you use" fee.

Spinning disks – many applications will stay on this tier, although critical workloads will run on flash. Spinning disks, often known as magnetic drives, are classic hard drives that store data on fast-spinning platters coated with a magnetic substance. Spinning disks are slower than SSD drives, but they are often more cost effective when utilized in a cloud solution, since spinning disks are less expensive for the cloud vendor to purchase. It's worth noting that spinning disks are also referred to as hard disk drives (HDDs).

Long-term – data stored for archives or compliance purposes, at lower cost and lower speed of access. Tape storage and cold storage are two common types of long-term storage. You may have certain data that you don't need on a daily basis, but you still need to keep it safe in case your firm is subjected to an audit or security investigation. Most cloud vendors offer a long-term storage solution that is far less expensive than employing SSDs or spinning disks. However, access to this data will most likely be slow, and it may also require advance notification before the data is made available.

Since the data stored in the cloud or datacenter may have different storage requirements, we should discuss the concept of tiered storage. Data can have different requirements, such as how critical it is. Certain applications may need high read or write speeds for better performance. Some data may need


to be accessed very frequently, whereas other data can be stored and seldom accessed by an application. Different storage tiers can be defined and assigned to best meet the levels of storage the cloud customer may require.

Tier 1 storage is designated for the most critical or frequently accessed data and typically resides on the fastest, most redundant, or highest-quality media in the storage hierarchy. We will delve into storage arrays shortly, but for now, it's essential to emphasize that with Tier 1 storage, employing a configuration that tolerates one or more disk failures without data loss and ensures uninterrupted access is of paramount importance. Tier 1 storage arrays are equipped with the highest levels of manageability and monitoring capabilities, designed to offer utmost reliability.

Tier 2 storage is a step below Tier 1 and is typically reserved for data with less demanding read or write performance needs or data that is accessed infrequently. Tier 2 data can leverage more cost-effective storage devices and may even be accessible over a remote storage network. Instances of Tier 2 data include email storage, file sharing, or web servers where performance remains significant, but more budget-friendly solutions can be applied.

Data that is often at rest and rarely accessed, or backups of Tier 1 and Tier 2 data, can be stored at the Tier 3 level. Examples of Tier 3 media are DVD, tape, or other less expensive media types. There can be many different tiers in a storage design in the cloud. More than three tiers can be implemented based on the complexity and the requirements of the storage systems' performance. As we discussed in our examination of storage tiers, the higher the tier, the more critical the data is and the more the design focuses on faster performance, redundancy, and availability. For example, a Tier 1 data set would generally offer better performance than a Tier 2, which in turn would have higher performance than Tier 3. This is by design; for example, a Tier 3 or higher may only be accessed infrequently and have no need to offer the performance of a Tier 1 data set. By properly classifying the data needs in a multitier cloud storage system, you can realize cost savings by not paying for more than what you require in your cloud design.

3.2.3 Input/output operations per second (IOPS) and read/write
Input/output operations per second, or IOPS, is the primary speed measurement for a storage device. This is a value you must consider when deciding on the underlying storage type for your storage resource. When comparing the HDD volume types offered by a cloud provider, the durability and volume size figures are often the same; the Max IOPS per Volume is what distinguishes these HDD kinds (which, in turn, affects the Max Throughput per Volume). Because of its faster performance, the Throughput Optimized HDD is better suited for scenarios where significant amounts of data must be transported (big data, data warehouses, log processing, and so on).
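The relationship between IOPS, I/O size, and throughput is simple arithmetic; the short illustration below uses example numbers, not any vendor's published specifications:

# Throughput is roughly IOPS multiplied by the I/O (block) size.
iops = 500                 # example: a volume rated at 500 IOPS
io_size_kib = 256          # example: 256 KiB per I/O operation

throughput_mib_per_s = iops * io_size_kib / 1024
print(f"Approximate throughput: {throughput_mib_per_s:.0f} MiB/s")  # about 125 MiB/s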

3.2.4 Protocols
With storage devices and resources, a variety of protocols can be used. Some of these protocols, such as NFS, CIFS, and iSCSI, are intended to allow you to access storage from anywhere on the network. Others, such as FC and NVMe-oF, are intended to enable fast access to SSDs. This section will teach you the fundamentals of each of these protocols.


Network file system (NFS)
Since 1984, Network File System has been utilized as a distributed file system protocol. On UNIX and Linux computers, it is a widely popular method of sharing file systems across the network. NFS operates by sharing a local file system (or a portion of a file system) over the network using NFS server daemons (a daemon is a program that normally functions without needing to interact with or be controlled by a human). A system administrator on the NFS client system uses a method known as mounting to make the NFS shared file system accessible via a directory in the local file system. As an older protocol, NFS has undergone numerous changes; NFSv4, the most recent version, was last updated in 2015. NFS has certain drawbacks when compared to CIFS, even though it is still quite popular and has received numerous contemporary capabilities. For example, tasks such as sharing an NFS resource and accessing the NFS resource on the client side require administrative privileges, whereas regular users can usually share a file system resource via CIFS.

Figure 28 Network file system (NFS)

Common Internet file system (CIFS)
Microsoft's Common Internet File System protocol is a distributed file system protocol. A user or administrator can use this protocol to share a portion of a file system. Regular users can also gain access to the shares by executing activities such as mounting network drives (after giving authentication credentials and provided they have the appropriate permissions). Because software programs on Windows, Linux, Mac OS, and UNIX can share and access file systems, this form of distributed file system is widespread. As a result, the protocol is more adaptable. Samba software is used to share and access shares on Linux and UNIX systems.

Internet small computer system interface (iSCSI)
The Internet Small Computer System Interface is a protocol that facilitates communication between devices on a network. This protocol allows you to share a block device (SSD disk, HDD drive, SAN device, etc.) over the network for usage by a client system. The block device can then be formatted with a local file system and utilized in the same way that a local file system would be (mounted on a Linux or UNIX system or assigned a drive letter on a Microsoft Windows system). There are a few crucial terms for iSCSI that you should be aware of. The device being shared is known as the iSCSI target, and the system accessing the share is known as the iSCSI initiator. You should also be aware that iSCSI carries SCSI commands over standard TCP/IP networks; transporting SCSI commands over Fibre Channel, covered next, is a related but separate approach.

Fiber Channel (FC)
Fiber Channel is a protocol for high-speed data transport between servers and storage devices. It is commonly used to connect systems to SANs and other networked or local storage. FC is a robust and adaptable protocol, so much so that entire books have been written about it. For the CompTIA Cloud+ test, you should be familiar with the following FC fundamentals:

● Optical fiber cables are commonly employed as data transfer media because of their high capacity. However, the protocol can also be used with copper cabling.
● When SCSI commands are transported over FC, the protocol utilized is known as Fibre Channel Protocol (FCP).
● FC can also be used in conjunction with another protocol called Non-Volatile Memory Express (NVMe). This variant is known as NVMe-oF.
● You may be asked an exam question about FC data rates (speed). The following data speeds are currently supported: 1, 2, 4, 8, 16, 32, 64, and 128 gigabits per second.

Non-volatile memory express over fabrics (NVMe-oF)
When SSD devices first hit the market, they were quicker than spinning disks, but they were attached to the motherboard using older data transfer protocols (SAS and SATA). This wasn't a problem at first, but as SSD devices became faster, these older protocols became more frequent bottlenecks. NVMe was developed in response to the demand for a quicker transport protocol. What makes NVMe quicker than earlier protocols? In a nutshell, it adds more "lanes" of input/output (I/O), resulting in better total throughput. This protocol was excellent for local SSD devices, but there was a need for a quicker standard for connecting with network devices. NVMe-oF, or Non-Volatile Memory Express over Fabrics, is the result of this.

For example, with Microsoft Windows, the Common Internet File System (CIFS) protocol is used. With Linux, Network File System (NFS) is one of several file systems commonly implemented. File-level access is a common solution for sharing files between servers or users on the network. When a SAN is used, block data is transported across the network to the host bus adapter on the server. Protocols such as Fiber Channel, Fiber Channel over Ethernet (FCoE), or IP-based SCSI networks (commonly known as iSCSI) are used to make the block storage visible to the server.

Now that you understand block- and file-level storage as well as local and networked storage, let's move on to exploring the communication protocols that are commonly found when connecting a SAN storage system to the servers on the SAN network accessing the data. The Small Computer System Interface (SCSI) protocol offers a standard set of signaling methods that allow for communications between the storage initiator and the storage provider or target. One of the limitations of SCSI is that it supports only relatively short distances from the host to the storage drives due to cable restrictions. To resolve this issue, the Fibre Channel protocol was developed.

FCOE
With Fiber Channel over Ethernet (FCoE), storage data can be carried over the same network as your Ethernet LAN traffic. This is often referred to as a converged fabric. When the switch fabrics of Ethernet and Fiber Channel are combined, many advantages can be realized. For example, this approach reduces the cabling and complexity of running multiple data and storage networks inside the cloud. The Fiber Channel traffic is encapsulated again, this time with an Ethernet header, and connected to the LAN along with all other networked devices in the cloud datacenter. There are special considerations to make sure that no storage traffic is dropped in the LAN.

Fiber Channel is one of the predominant storage protocols in storage area networking. Fiber Channel encapsulates inside its headers the SCSI command set to allow the server and storage controllers to exchange information on disk read, write, and other types of operations between the host operating systems and the remote storage. Just as Ethernet networks have hardware LAN switches that interconnect all devices, SAN switches serve the same function for storage devices. Instead of using the Ethernet protocol in a SAN, the Fiber Channel protocol is used. The frame formats and encapsulation are similar to Ethernet but differ in purpose and operation; the Fiber Channel and Ethernet frame formats are not compatible and will not communicate with each other.

NVMe over Fabrics (NVMe-oF)
NVMe over Fabrics, also known as NVMe-oF and non-volatile memory express over fabrics, is a protocol specification designed to connect hosts to storage across a network fabric using the NVMe protocol. The protocol is intended to facilitate data transfers between a host computer and a target solid-state storage device or system using NVMe message-based commands. Data transfers can be conducted through various methods, including Ethernet, Fiber Channel (FC), or InfiniBand.

NVM Express Inc. is the nonprofit organization that published version 1.0 of the NVMe specification on March 1, 2011. Later, on June 5, 2016, the same organization published version 1.0 of the NVMe-oF specification. NVMe version 1.3 was then released in May 2017. This update added features to enhance security, resource sharing, and solid-state drive (SSD) endurance. The NVM Express organization estimated that 90% of the NVMe-oF protocol is the same as the NVMe protocol, which is designed for local use over a computer's Peripheral Component Interconnect Express (PCIe) bus.

Vendors are actively engaged in building a robust enterprise ecosystem that encompasses the entire NVMe over Fabrics infrastructure. This ecosystem includes the server operating system, server hypervisor, network adapter cards, storage OS, and storage drives. Furthermore, storage area network (SAN) switch providers, including companies such as Cisco Systems Inc. and Mellanox Technologies, are striving to establish 32 gigabits per second (Gbps) FC as the preferred fabric for NVMe flash. Since the initial development of NVMe-oF, there have been multiple implementations of the protocol, such as NVMe-oF using remote direct memory access (RDMA), FC, or Transmission Control Protocol/Internet Protocol (TCP/IP).


3.2.5 Redundant array of inexpensive disks (RAID)
When a hard disk crashes, all the data on the disk is often lost. Certain professionals may be able to recover some of the data, but this can be time-consuming and costly, and the professionals may not be able to recover the data that you truly require. Redundant Array of Inexpensive Disks (RAID) is a technique that was created to solve the problem of hard disk failure. RAID is a technique that has been in use for several decades. In reality, the technology underpinning RAID predates the word itself. The initial idea, now known as RAID 1, was to alleviate the loss of a hard disk by having a second, entirely redundant disk. There are various types of RAID besides RAID 1. The following sections discuss the RAID types covered by the CompTIA Cloud+ test.

RAID 0 •

Disk Striping



No redundancy or error detection

RAID 0 (called RAID Zero) is where you take a block of data to be stored and spread it across two or more disks. This process is called striping: the file is stored across more than one hard drive. You break a file into blocks of data and then stripe the blocks across disks in the system. Although RAID 0 is simple, it provides no redundancy or error detection, so if one of the drives in a RAID 0 array fails, all data is lost. However, since RAID 0 allows for parallel read and write operations, it is very fast and is often found in storage-intense environments such as many database applications where storage speed can often be a bottleneck.

RAID 1 •

Disk Mirroring- minimum of 2 disks, redundancy, and failover



Lose 50 percent of storage space

RAID 1 necessitates a minimum of two disks for operation and delivers data redundancy along with failover capabilities. It duplicates data, performing both read and write operations on each disk simultaneously. In the event of a disk failure, the complete file is still intact on the operational disk. When the IT team replaces the faulty disk, the RAID system seamlessly restores the mirrored data onto the new drive. Additionally, RAID 1 enhances read performance. While it does consume more usable storage capacity, it serves as a cost-effective failover solution for application servers.


RAID 5

Figure 31 RAID 5



Disk Striping with Parity



Block Level, each stripe has own parity block.



Can survive loss of one disk.

This RAID level employs block-level striping and parity. Parity consists of raw binary data, and the RAID system computes these values to generate a parity block, crucial for recovering striped data in the event of a drive failure. Typically, RAID systems with parity features store these parity blocks on the disks within the array. While some RAID systems allocate a dedicated disk for parity calculations, this approach is uncommon. In RAID 5, parity blocks are stored on the striped disks, with each stripe having its dedicated parity block. RAID 5 is designed to tolerate the failure of a single disk within the array.

RAID 5 combines the performance of RAID 0 with the redundancy of RAID 1, but sacrifices some capacity to do it: the equivalent of one disk in the array is consumed by parity (about one-third of the raw capacity in a three-disk array). Striping allows read and write requests to be served by all drives in the array in parallel; however, overall disk performance can suffer from write amplification, since even minor changes to the stripes require multiple steps and recalculations.


RAID 6

Figure 32 RAID 6



RAID 6 operates like RAID 5 with parity and striping



Minimum of 4 disks required



Higher redundancy than RAID 5 and increased read performance

RAID 6 functions similarly to RAID 5, employing distributed parity and striping. The key operational distinction in RAID 6 is that it necessitates a minimum of four disks in the array, and it stores an extra parity block on each disk. This configuration allows for the failure of two disks before rendering the array unavailable. RAID 6 is typically employed in application servers and large storage arrays.

RAID 6 offers greater redundancy compared to RAID 5 and improved read performance. It may, however, encounter similar performance overhead during intensive write operations. The degree of performance impact is contingent on various factors, including the RAID system's architecture (hardware or software), whether it resides in firmware, and whether the system incorporates performance-enhancing software for parity calculations.


RAID 10

Figure 33 RAID 10



Minimum of 4 disks in the array



Stripes across disks and mirrors for redundancy



Most expensive RAID level, with the lowest usable capacity

RAID 10 mandates a minimum of four disks within the array. It employs striping across disks to enhance performance and mirrors data for redundancy. In a four-drive array, the system stripes data to two of the disks, while the remaining two disks mirror the striped disks, each one storing half of the data. RAID 10 is ideal for environments necessitating both robust data security and high performance, particularly applications like high-transaction databases that store sensitive information. It is the costliest among RAID levels, resulting in reduced usable capacity and higher system expenses.
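The trade-offs among these RAID levels can be summarized with a short, generic calculation of usable capacity and fault tolerance. The disk counts and sizes below are hypothetical, and real arrays add controller- and implementation-specific behavior on top of this.

# Illustrative comparison of the RAID levels discussed above.
# Disk counts and disk size below are hypothetical.

def raid_usable_tb(level: str, disks: int, disk_tb: float) -> tuple[float, int]:
    """Return (usable capacity in TB, number of disk failures tolerated)."""
    if level == "0":          # striping only
        return disks * disk_tb, 0
    if level == "1":          # two-disk mirror
        return disk_tb, 1
    if level == "5":          # single distributed parity
        return (disks - 1) * disk_tb, 1
    if level == "6":          # double distributed parity
        return (disks - 2) * disk_tb, 2
    if level == "10":         # striped mirrors
        return (disks // 2) * disk_tb, 1   # at least one; depends on which disks fail
    raise ValueError("unsupported RAID level")

for level in ["0", "1", "5", "6", "10"]:
    n = 2 if level == "1" else 4
    usable, tolerated = raid_usable_tb(level, n, 4.0)
    print(f"RAID {level}: {n} x 4 TB disks -> {usable} TB usable, "
          f"tolerates {tolerated} disk failure(s)")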

3.2.6 Storage system features

Compression
The practice of lowering the size of data using a mathematical technique is known as compression. The method is applied to the data, resulting in compressed data that is stored in the cloud (or on-premises). When the data is required, another process converts it back to its original uncompressed format. Compression reduces the amount of space you utilize in the cloud storage environment, which lowers your costs. In some circumstances, particularly with object storage, the cloud vendor handles compression. In other circumstances, such as data kept in file storage, the client is responsible for compressing the data.


Deduplication
Data deduplication is the process of eliminating redundant data from a storage resource (or between storage resources). A cloud vendor may have tools to accomplish this procedure in some cases, but in many cases, you will need to develop your own solutions to perform deduplication operations. By removing redundant data, you use less cloud storage, which results in lower expenses.

Extensive datasets often contain redundant data, leading to increased data storage expenses. For instance, in user file shares, numerous users might store multiple copies or versions of identical files. Similarly, software development shares frequently include unchanged binaries across different builds. One approach to mitigating data storage costs is by enabling data deduplication for your file system. Data deduplication works to reduce or eliminate redundant data by storing duplicated portions of the dataset just once. Since data deduplication operates as a background process, it has minimal impact on your file system's performance. Moreover, it remains inconspicuous to your users and connected clients. Once activated, data deduplication continuously and autonomously scans and optimizes your file system behind the scenes. The extent of storage savings achievable through data deduplication hinges on the characteristics of your dataset, particularly the degree of duplication present across files. On average, general-purpose file shares typically yield savings of 50–60 percent. Savings within shares can vary, ranging from 30–50 percent for user documents to 70–80 percent for software development datasets.

Thick Provisioning
Thick provisioning represents a form of storage pre-allocation in virtual environments. When opting for thick provisioning, the entire virtual disk storage capacity is allocated in advance on the physical storage infrastructure during the virtual disk's creation. A thick-provisioned virtual disk immediately occupies all the allocated space within the data store, rendering this space inaccessible for utilization by other virtual machines. Thick provisioning encompasses two sub-types:

A lazy zeroed disk - A lazy zeroed disk is a type of disk that fully allocates all its space at the time of creation. However, this allocated space may still contain remnants of old data on the physical storage media. These remnants are not immediately erased or overwritten, so a process known as "zeroing out" is required before new data can be reliably written to these blocks. While creating this type of disk is quicker, its initial write performance can be lower due to the increased number of IOPS (input/output operations per second) necessary for handling new blocks.

An eager zeroed disk - An eager zeroed disk is a type of disk that acquires all the necessary space at the time of creation, and this space is completely wiped of any previous data on the physical storage media. While creating eager zeroed disks takes more time due to the writing of zeroes across the entire disk, they exhibit faster performance during initial write operations. This sub-type of thick-provisioned virtual disk is suitable for clustering features, including fault tolerance support.
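As a toy illustration of the deduplication idea described at the start of this section, the following sketch stores identical blocks only once by indexing them with a content hash. It is not how any particular vendor implements the feature.

# Toy illustration of block-level deduplication: identical blocks are stored
# once and referenced by their content hash. Not any vendor's actual design.
import hashlib

BLOCK_SIZE = 4096
block_store: dict[str, bytes] = {}   # hash -> unique block contents

def write_file(data: bytes) -> list[str]:
    """Store a file as a list of block references, deduplicating as we go."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)   # store only if unseen
        refs.append(digest)
    return refs

# Two "files" that share most of their contents:
file_a = write_file(b"A" * 8192 + b"unique tail A")
file_b = write_file(b"A" * 8192 + b"unique tail B")

# Logical size counts every block reference; physical size counts unique blocks.
logical = (len(file_a) + len(file_b)) * BLOCK_SIZE
physical = sum(len(b) for b in block_store.values())
print(f"logical: {logical} bytes, physical after dedup: {physical} bytes")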


Thin Provisioning
Eager zeroing is more common than lazy zeroing with thick-provisioned virtual disks. Why? When you delete a VMDK, the data on the data store is not totally erased; the blocks are simply marked as available, until the operating system overwrites them. If you create an eager zeroed virtual disk on this data store, the disk area will be totally erased (i.e., zeroed), thus preventing anyone with bad intentions from being able to recover the previous data, even if they use specialized third-party software.

Thin provisioning is another type of storage pre-allocation. A thin-provisioned virtual disk consumes only the space that it needs initially and grows with time according to demand. For instance, if you generate a new 30 GB virtual disk with thin provisioning and transfer 10 GB of files to it, the resulting VMDK file size will be 10 GB. In contrast, opting for a thick-provisioned disk would have resulted in a 30 GB VMDK file. Thin-provisioned virtual disks are quick to create and useful for saving storage space. The performance of a thin-provisioned disk is not higher than that of a lazy zeroed thick-provisioned disk, because for both disk types, zeroes must be written before writing data to a new block.

Note that when you delete your data from a thin-provisioned virtual disk, the disk size is not reduced automatically. This is because the operating system deletes only the indexes from the file table that refer to the file body in the file system; it marks the blocks that belonged to "deleted" files as free and accessible for new data to be written onto. This is why the removal of files appears instant. If it involved a complete deletion process, which required writing zeroes over the blocks previously occupied by the deleted files, it would take approximately as long as copying the files in question.

Replication is the transfer and synchronization of data between multiple datacenters. For disaster recovery purposes and data security, your data must be transferred, or replicated, between datacenters. Remote copies of data have traditionally been implemented with storage backup applications. With the virtualization of servers in the cloud, it is now possible to replicate entire VM instances, including all applications, service packs, and content, to a remote facility. Later in this chapter we will discuss synchronous and asynchronous replication approaches and the advantages of each implementation.

Replication
The practice of ensuring that data is backed up to another storage resource is known as data replication. If your data is lost or becomes unavailable on the original data resource, it will be available on the backed-up resource. Data replication in the cloud can be done on a zonal or regional scale. As with many of the features covered in this section, there are certain circumstances where cloud vendors will offer replication as an automatic function and others where the cloud customer will be responsible. In either instance, replication will increase costs, but it will also avoid data loss.

3.2.7 User quotas ●

Minimum maximum allocation of compute resources CPU, RAM, Storage



Hard limit – maximum value for resource



Soft limit – cloud provider allows for additional resources


Each VM will have a minimum and maximum allocation of compute resources. The virtual machines will have a guaranteed floor of resources such as RAM, CPU, storage, and LAN capacity, as well as a quota assigned for the maximum available that can be either a hard or soft limit, as we discussed. These entitlements, or quotas as they are also called, are defined in the creation of each VM on a physical server and consume the virtualized resources of the physical server. The cloud provider will usually offer many options for each VM at the time of purchase, including the operating system version and its CPU, RAM, storage, and LAN capabilities. The cost of the services is usually based on the configuration of the VM.

Hard Limit, Soft Limit ●

When discussing entitlements, or quotas as they are also referred to, you need to understand the concept of hard and soft limits. We discussed this concept earlier in the chapter and will now step back and review limits as they relate to the virtual machines that are running on the hypervisor.



Since each resource is finite—after all, there are only so many CPU cores and only so much RAM, storage, and LAN resources available—we must assign limits available to each VM. This is done to prevent one virtual machine from monopolizing all the available resources and starving the other VMs running on the same server.



However, it is often desirable to allow a VM to exceed its quota so that it can keep functioning when it runs out of a resource.



A hard limit is just as the term implies: once the maximum value defined for the resource is reached, nothing more is allocated. Take, for example, a VM that has 256 GB of storage allocated to it and that storage reaches its limit. With a hard limit, no additional storage will be allocated. The cloud customer would need to either purchase additional storage or delete files on the volume to free up storage.



With a soft limit, when the storage capacity is reached, the cloud provider will allow for additional storage to be allocated and used. There will generally be an additional charge applied in the form of a fine, penalty, or fee for the additional storage consumed.



Hard and soft limits will, of course, also apply to other resources on the server, such as processing power, RAM, and LAN usage, as illustrated in the sketch below.
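The following is a minimal, generic sketch of how a provider might enforce a hard versus a soft storage quota. The limit, the overage fee, and the class name are hypothetical, not any provider's actual billing model.

# Hypothetical sketch of hard vs. soft storage quotas; the limits and the
# overage fee below are made-up values, not any provider's actual billing.

class StorageQuota:
    def __init__(self, limit_gb: int, hard: bool, overage_fee_per_gb: float = 0.10):
        self.limit_gb = limit_gb
        self.hard = hard
        self.fee = overage_fee_per_gb
        self.used_gb = 0

    def allocate(self, gb: int) -> None:
        if self.used_gb + gb <= self.limit_gb:
            self.used_gb += gb
        elif self.hard:
            # Hard limit: nothing beyond the quota is allocated.
            raise RuntimeError("quota exceeded; free space or buy more storage")
        else:
            # Soft limit: allocation succeeds, but the overage is billed.
            overage = self.used_gb + gb - self.limit_gb
            self.used_gb += gb
            print(f"soft limit exceeded by {overage} GB, extra charge "
                  f"${overage * self.fee:.2f}")

vm_disk = StorageQuota(limit_gb=256, hard=False)
vm_disk.allocate(200)
vm_disk.allocate(100)   # exceeds 256 GB -> allowed, but billed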

3.2.8 Hyperconverged •

IT framework that combines storage computing and networking into a single system



Hypervisor for virtualized computing



SDS- Software Defined Storage



Virtualized Networking



Hyper-converged infrastructure

Hyperconvergence is an IT architecture that integrates storage, computing, and networking into a unified system, aiming to simplify data center operations and enhance scalability. Hyperconverged platforms incorporate a virtualization hypervisor for computing, software-defined storage, and virtualized networking. These platforms are often deployed on standard, readily available servers. By clustering multiple nodes, organizations can create shared pools of compute and storage resources, providing a more convenient infrastructure. Leveraging cost-effective commodity hardware, backed by a single vendor, results in an infrastructure that prioritizes flexibility and streamlined management over the complexities of traditional enterprise storage setups. This makes hyperconvergence an attractive option for IT leaders initiating data center modernization initiatives, as it offers the flexibility reminiscent of public cloud infrastructure while retaining control over on-premises hardware resources.

Identity Management •

Strength of Identity- i.e., certainty of validating the correct user



Federated identity management systems widely used with SaaS services



Many IaaS providers and SaaS providers for Identity Management

Important points to consider are:
● The strength of identity that can be verified/validated helps in risk calculation when interacting with the persona.
● The attributes of a persona, as with identity, have a strength by which an attribute can be verified/validated.
● Identity and attributes should be consumed from multiple sources.
● There will be instances when a transient identity is sufficient.
● There will be instances in which pseudo-anonymity is desirable (such as voting).

Some general uses of federation:



Widely used with SaaS services



PaaS depends a lot on the provider



Many companies design their own federated service



There are many IaaS providers who can be leveraged

3.2.9 Software-defined storage (SDS)
Software-defined storage is a storage technique that is intended to make storage easier to provision and manage by abstracting it from the individual devices underneath. Many cloud companies have many storage devices that are challenging to handle on their own. SDS "collects" these storage units into one (or more) single huge storage collection. The SDS software is invoked when space is required to provision a block storage resource, a file storage resource, or an object storage resource, and it determines where the real data is actually stored.


3.3 Given a scenario, deploy cloud networking solutions. 3.3.1 Services Dynamic host configuration protocol (DHCP) •

DHCP- Dynamic Host Config Protocol – Automatic assignment of IP addresses eliminates the need to statically configure addressing.
• DHCP lease exchange: discover, offer, request, acknowledge.
• Servers listen on UDP port 67; clients listen on UDP port 68.

The Dynamic Host Configuration Protocol, or DHCP, allows for automatic assignment of IP addressing information to devices on a network. This eliminates the need to statically configure addressing information when you connect to a network.

The Dynamic Host Configuration Protocol (DHCP) is a network management protocol utilized on Internet Protocol (IP) networks. With DHCP, a server dynamically allocates IP addresses and various network configuration settings to every device within the network, facilitating their communication with other IP networks. This eliminates the need for network administrators or users to manually assign IP addresses to each network device, as computers can autonomously request IP addresses and related parameters from a DHCP server. In cases where a DHCP server is absent, devices within the network must either be assigned IP addresses manually or resort to self-assigning Automatic Private IP Addressing (APIPA) addresses, which restrict communication to the local subnet.

DHCP can be employed in networks spanning from small home setups to extensive campus and regional ISP networks. Routers or residential gateways can be configured to operate as DHCP servers, with residential network routers typically receiving unique global IP addresses within the ISP network. Within a local network, a DHCP server assigns a local IP address to each device linked to the network.

IETF standards documents

• RFC 2131, Dynamic Host Configuration Protocol
• RFC 2132, DHCP Options and BOOTP Vendor Extensions
• RFC 3046, DHCP Relay Agent Information Option
• RFC 3397, Dynamic Host Configuration Protocol (DHCP) Domain Search Option
• RFC 3942, Reclassifying Dynamic Host Configuration Protocol Version Four (DHCPv4) Options
• RFC 4242, Information Refresh Time Option for Dynamic Host Configuration Protocol for IPv6
• RFC 4361, Node-specific Client Identifiers for Dynamic Host Configuration Protocol Version Four (DHCPv4)
• RFC 4436, Detecting Network Attachment in IPv4 (DNAv4)
• RFC 3442, Classless Static Route Option for Dynamic Host Configuration Protocol (DHCP) version 4
• RFC 3203, DHCP reconfigure extension
• RFC 4388, Dynamic Host Configuration Protocol (DHCP) Leasequery


• RFC 6926, DHCPv4 Bulk Leasequery
• RFC 7724, Active DHCPv4 Lease Query

NTP- NETWORK TIME PROTOCOL •

The Network Time Protocol (NTP) is widely used on the Internet to synchronize computer clocks to national standard time.



NTP Version 4 is the current version.



Available on the public internet



RFC 5905

Log files contain timestamps encoded within log entries. These timestamps are frequently required to determine when a mistake or security breach happened. Incorrect system times will result in incorrect timestamps, which can pose challenges in pinpointing the cause of problems as well as potential legal issues (log files are sometimes used in legal cases but can be disregarded by the court system if the timestamps are not accurate).

Client/server interactions: Some services necessitate that the client and server systems be in sync in terms of system time. If their clocks are not synchronized, the service may fail entirely. For example, while negotiating a secure connection using digital certificates, both the server and the client should have the correct system time.

Searches for files by timestamp: Users and administrators frequently use timestamps to hunt for lost or missing files. If the system time is incorrect, the file timestamps will be incorrect, making it harder to locate files.

Transaction log timestamps: Timestamps are used in several transaction operations. For example, each email sent or received includes a timestamp for these actions. Banking and credit card transactions are another example. It is vital to ensure that these timestamps are as exact as possible for transaction security and reliability.

The Network Time Protocol (NTP) is a widely utilized tool on the Internet for synchronizing computer clocks with national standard time. NTP's architecture, protocol, and algorithms have evolved over more than three decades to reach the NTP Version 4 specification. Reference implementations of NTP are available for various operating systems, including UNIX, VMS, and Windows. NTP time synchronization services are prevalent on the public Internet, boasting several thousand servers scattered across most countries and continents, even reaching locations like Antarctica, outer space, ships, and the sea floor. These servers collectively serve an estimated population of over 25 million computers connected to the global Internet. Years of funded research have led to substantial improvements in the accuracy and reliability of the NTP protocol and its supporting algorithms. In today's Internet, NTP is employed across a diverse range of computing devices, from personal workstations to supercomputers. It offers timing accuracies typically ranging from 0.1 milliseconds on fast LANs and high-performance computers to a few tens of milliseconds in intercontinental Internet connections.


The current NTP Version 4 specification, outlined in RFC-5905, and its corresponding reference implementation represent a noteworthy enhancement over the original NTP Version 3 specification developed in 1992. These enhancements encompass features such as autonomous configuration and autonomous authentication, which facilitate automatic server discovery and secure server authentication using both symmetric key and public key cryptography, as detailed in RFC-5906.
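As a quick illustration, the following sketch queries a public NTP server from Python and reports the local clock offset. It assumes the third-party ntplib package is installed; pool.ntp.org is a public server pool used here only as an example.

# Minimal sketch: query a public NTP server and compare it to the local clock.
# Assumes the third-party "ntplib" package is installed (pip install ntplib).
import ntplib
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=4)

# Offset between the local clock and the NTP server, in seconds.
print(f"clock offset: {response.offset:+.4f} s")
print("server time:", datetime.fromtimestamp(response.tx_time, tz=timezone.utc))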

DNS
The domain name system (DNS) is a database that holds information about internet domain names and their corresponding internet protocol (IP) addresses. It serves as the system that translates the human-readable domain names people use to find websites into the numerical IP addresses that computers use to locate those websites.

• Domain Name Server
• TCP port 53 for zone transfers
• UDP port 53 for queries
• Different record types
• Tools for querying DNS: nslookup, dig, etc.

DNS- Domain Name System- The domain name system (DNS) is a database that stores internet domain names and translates them into internet protocol (IP) addresses. DNS serves the purpose of mapping the user-friendly domain names people use to access websites to the numerical IP addresses that computers use to pinpoint those websites.

Understanding DNS: DNS is commonly expanded as Domain Name System (and sometimes Domain Name Server or Domain Name Services). A DNS client, or resolver, sends a request to a DNS server along the lines of "can you give me the IP address of the hostname blogs.getcertifiedgetahead.com?"


The following are the various types of DNS records:
● An A record is an IPv4 host-to-IP-address mapping. A records are the DNS server equivalent of the hosts file.
● A AAAA record is for an IPv6 host.
● CNAME (canonical name) records create an alias for another name.
● MX (mail exchange) records are used to specify the e-mail server(s) responsible for a domain name. Each MX record points to the name of an e-mail server and holds a preference number for that server. When a domain name is managed by multiple email servers for backup or redundancy, each email server is assigned a separate MX record. These MX records include preference numbers, which dictate the order in which other email servers should utilize these servers. Lower preference numbers take precedence and are used first in the email delivery process.
● NS (name server) records identify the DNS servers responsible (authoritative) for a zone. A zone should contain one NS record for each of its own DNS servers (primary and secondary). This is mostly used for zone transfer purposes (notify messages). NS records share the same name as the zone in which they are found. The primary role of the NS record is to facilitate delegation.
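The following short sketch resolves some of these record types from Python. The A-record lookup uses only the standard library, while the MX lookup assumes the third-party dnspython package; example.com is a placeholder domain.

# Resolve an A record with the standard library, and an MX record with the
# third-party "dnspython" package (pip install dnspython). The domain
# "example.com" is just a placeholder.
import socket
import dns.resolver

# A record lookup (hostname -> IPv4 address) via the stub resolver:
print("A:", socket.gethostbyname("example.com"))

# MX record lookup, printing preference numbers and mail server names:
for record in dns.resolver.resolve("example.com", "MX"):
    print("MX:", record.preference, record.exchange)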


Content delivery network (CDN) •

Highly distributed platform of servers that help minimize delays in loading webpage content



Without a CDN, servers must respond to every user request



Goal of a CDN is to reduce latency, the delay between submitting a request and the webpage loading

A Content Delivery Network (CDN) is a widely distributed network of servers designed to reduce latency in loading web page content by minimizing the physical distance between the server and the user. This enhances the user experience by providing quick access to high-quality content across the globe, eliminating slow loading times.

In the absence of a CDN, content origin servers are responsible for handling each individual end user request. This places a substantial burden on the origin server, leading to increased traffic and potentially overloading it, especially during traffic spikes or prolonged high loads. This scenario can result in origin server failures. CDNs address this issue by responding to end user requests on behalf of the origin server and doing so from locations that are physically and network-wise closer to the end user. This strategic placement of servers reduces the load on content servers and significantly improves the web browsing experience. As a result, both content providers and end users benefit from the enhanced performance facilitated by CDNs.

How does a CDN work?
Over half of the internet's traffic is served by a content delivery network (CDN). The goal of the CDN is to reduce latency – the delay between submitting a request for a web page and the web page fully loading on your device – by reducing the physical distance that the request must travel. To address this challenge, CDNs employ a strategy of storing a cached version of your website content in numerous geographical locations worldwide, often referred to as "points of presence" (PoPs). Each of these PoPs operates its own caching servers and is responsible for delivering content to users in its respective region.

When a user-agent, essentially a device running a web browser, requests content required for rendering web pages, such as HTML, images, CSS, and JavaScript files, the CDN comes into play. In most CDN setups, each content request results in the user being directed to a strategically located CDN server, which responds by providing the cached (previously saved) version of the requested files. If these files cannot be found, the server searches for the content across other servers within the CDN network and then sends the response back to the end user. However, when content is unavailable or outdated, the CDN serves as an intermediary, forwarding the request to the origin server and storing the retrieved content for future requests.

Although the delivery of website content is a prevalent use case for CDNs, they are not limited to this role. In fact, CDNs can deliver a diverse range of content, including high-quality video in 4K and HD, audio streams, software downloads like applications, games, and operating system updates, data records containing sensitive information such as medical and financial data, and much more. Essentially, any form of data that can be digitized can be efficiently distributed through a CDN.

IP address management (IPAM) •

IP Address Management (IPAM) is a comprehensive set of tools designed to facilitate the entire process of planning, deploying, managing, and monitoring your IP address infrastructure. It offers a user-friendly interface for seamless IP address infrastructure management.



IPAM comes equipped with automatic discovery capabilities, allowing it to identify IP address infrastructure servers and Domain Name System (DNS) servers within your network. Furthermore, it provides centralized management of these servers, streamlining the administration process.

As businesses expand, the number of devices connected to their networks can increase significantly, posing challenges for IP address management. While smaller businesses may rely on manual methods like spreadsheets for managing IP addresses and network data, larger enterprises often turn to IP address management (IPAM) software. IPAM software plays a crucial role in tracking IP addresses and handling network data, including translation between DNS and DHCP address information. Essentially, it automates much of the IP address management process and presents it through a user-friendly interface. This automation is vital for maintaining smooth business connectivity and facilitating proactive network configuration. With a wide range of IPAM tools available, choosing the right one for your business can be daunting. Both paid and free IPAM tools exist that suit small businesses and large enterprises alike. One widely used example is SolarWinds IP Address Manager, recognized for its extensive feature set, intuitive integrated interface, and the ability to manage IP, DHCP, and DNS information centrally. Today, there is a variety of IPAM software options available, each with its own advantages and limitations. The choice depends on the specific needs of your business.

3.3.2 Virtual private networks (VPNs) •

Virtualization- to create a virtual version of a device, resource, OS.



Hypervisor • •

Type 1- bare metal hypervisor; runs directly on the physical hardware.
Type 2- hosted hypervisor; runs as software within an operating system on the physical hardware.

Virtualization is to create a virtual version of a device or resource like an operating system, network, or storage device. Virtualization is making headway in three major areas of IT: server virtualization, storage virtualization, and network virtualization. The core advantage of virtualization is that it enables the management of workloads by significantly transforming traditional computing to make it more scalable. Virtualization allows CSPs to provide server resources as a utility rather than a single product.


Hypervisor: A hypervisor (Virtual Machine Monitor/VMM) is a software application that manages multiple operating systems, thus enabling physical devices to share resources among virtual machines (VMs) running on top of the physical hardware. The hypervisor manages the system's processor, memory, and other resources.

1. Bare Metal Hypervisor – A Type 1 hypervisor, or bare metal hypervisor, runs directly on the physical hardware.
2. A Type 2 hypervisor, or hosted hypervisor, runs within an operating system that's running on the physical hardware.

Exploring Remote Access: VPN and VPN concentrators- A VPN can provide a tunnel from the user's ISP connection to a VPN server in the DMZ. Tunneling Protocols- IPsec is a tunneling protocol that provides two components, the Authentication Header (AH) and the Encapsulating Security Payload (ESP) (IP protocol 51 for AH, 50 for ESP), and uses tunnel mode for VPNs with IKE over UDP port 500. TLS is also a tunneling protocol and can carry VPN traffic across NAT.

A basic VPN routes all your internet traffic through an encrypted tunnel to protect your data from hackers and third-party snoops. In cases where your VPN lacks support for split tunneling, you might encounter the following scenarios:

● Inability to access both foreign and local internet services simultaneously.
● Utilization of your entire bandwidth due to routing all data through the VPN.
● Loss of access to devices on your local area network (LAN) while connected to the VPN.

With a split-tunneling VPN, you can:
● Stream foreign movies while simultaneously using web services from local IP addresses.
● Download safely without slowing down your other web activity.
● Access your network printer while surfing the internet securely.

IPSEC- IP Security- end-to-end encryption between two internet-connected computers, commonly used with VPNs.

Full Tunnel- All traffic is encrypted and can be routed through a UTM appliance in the private network.

Site-to-Site VPNs can be used where gateways are set up as permanent VPN servers; IPsec tunneling is the example depicted here, connecting a headquarters (HQ) to a remote office site. There are also always-on VPNs that provide regular connectivity for users and mobile devices.

Tunneling Protocols- IPsec is a tunneling protocol that provides two components, the Authentication Header (AH) and the Encapsulating Security Payload (ESP) (IP protocol 51 for AH, 50 for ESP), and uses tunnel mode for VPNs with IKE over UDP port 500. TLS is also a tunneling protocol and can carry VPN traffic across NAT.

Figure 34 Virtual Private Network

Site-to-site
A site-to-site VPN is used to connect two sites, which can be a headquarters (HQ) office and a remote office, or two HQ offices. This type of network is generally utilized when a business has multiple distant offices (sites) that are separated by a large distance (further apart than a LAN). The cost of a dedicated network connection between the remote offices is too expensive, yet a secure mode of communication is necessary. When a VPN is set up between multiple locations over the Internet, communications are encrypted at a low cost.

Point-to-point
In a point-to-point VPN, a VPN connection is formed between two hosts. An example of this is when you use a VPN service (such as NordVPN or ExpressVPN) to preserve your anonymity when surfing the Internet. Some businesses may also set up a point-to-point VPN to allow a remote employee or customer access to a specific system, such as an internal web server or database.

Point-to-site A VPN connection is formed between a host and a network in a point-to-site VPN. Many businesses utilize this method to allow employees to connect to the company's private network from anywhere. This is also known as remote-access VPN, and the protocols used by these VPNs can be IPsec or SSL/TLS.


IPSec
Internet Protocol Security (IPsec) is a group of protocols meant to allow for encrypted connections between various sites. The creation of a VPN between two hosts is a typical use of IPsec. IPsec is used for VPN connectivity on both a site-to-site and a point-to-site basis, allowing employees to connect to the organization's private network remotely. This is also known as remote-access VPN, and the protocols used by these VPNs can be IPsec or SSL/TLS.

Multiprotocol label switching (MPLS) • • • •

• Designed as an overlay technique to simplify and improve performance.
• MPLS lives in carrier backbones and enterprise networks.
• Common uses: branch offices, campus networks, and organizations that need QoS.
• Sits in between Layer 2 and Layer 3.

Multiprotocol Label Switching (MPLS) is a routing technique in telecommunications networks that directs data from one node to the next based on short path labels rather than long network addresses, thus avoiding complex lookups in a routing table and speeding traffic flows. The beauty of MPLS is that it’s not tied to any underlying technology. It was designed back in the days of ATM and frame relay as an overlay technique designed to simplify and improve performance -- that’s the “multi-protocol” part. MPLS (Multiprotocol Label Switching) is still in use today, especially in carrier backbones and enterprise networks, despite the decline of technologies like ATM and frame relay. It is commonly employed in various scenarios, including branch offices, campus networks, metro Ethernet services, and enterprises requiring quality of service (QoS) for real-time applications. The question often arises: Is MPLS a Layer 2 or Layer 3 service? The answer isn't straightforward as MPLS doesn't neatly fit into the OSI seven-layer model, leading to its occasional classification as Layer 2.5. One of MPLS's key advantages is its ability to separate forwarding mechanisms from the underlying data-link service. Essentially, MPLS can create forwarding tables for various underlying protocols. In practice, MPLS routers establish label-switched paths (LSPs), predefined routes for traffic within an MPLS network, based on criteria within the Forwarding Equivalence Class (FEC). Only after establishing an LSP can MPLS forwarding occur. LSPs are unidirectional, meaning that return traffic takes a different LSP. When an end user sends traffic into an MPLS network, an MPLS label is added by an ingress MPLS router positioned at the network's edge. This MPLS label comprises four sub-parts: 1. The Label: Contains information that MPLS routers use to determine the packet's forwarding destination. 2. Experimental: Used for Quality of Service (QoS) to specify the priority of the labeled packet. 3. Bottom-of-Stack: Indicates whether this router is the final leg of the journey with no further labels to consider, typically signifying an egress router.

© Secbay Press

167 | Page

4. Time-To-Live: Identifies the maximum number of hops the packet can traverse before it's discarded.

Pros and Cons of MPLS:

Pros: ●

Scalability



Improved performance



Enhanced bandwidth utilization



Reduced network congestion



Enhanced end-user experience

MPLS is also inherently secure as it operates as a virtual private network, separate from the public Internet. This isolation makes it secure against denial-of-service attacks, which can affect purely IP-based networks.

Cons: ●

Cost: MPLS is more expensive than routing traffic over the public Internet.



Limited Global Coverage: Finding a global MPLS service provider can be challenging, as it often requires partnerships with other providers, resulting in increased costs.



Legacy Design: MPLS was created for an era where branch offices primarily sent traffic to main headquarters or data centers, not for today's environment where branch offices require direct access to the cloud.

Is MPLS Still Relevant?
In 2013, Gartner posed this thought-provoking question and provided its own answer, predicting that MPLS (Multiprotocol Label Switching) would remain a crucial component of wide area networks (WANs). However, it foresaw most enterprises gradually transitioning to a hybrid environment, incorporating both MPLS networks and the public Internet. MPLS would maintain its importance in connecting specific point-to-point locations, such as large regional offices, retail establishments with point-of-sale systems, regional manufacturing facilities, and multiple data centers. Additionally, it would remain essential for real-time applications. Yet, enterprise WAN architects needed to weigh the risks and rewards associated with MPLS's top-notch but expensive performance against the cheaper but less reliable performance of the Internet. This deliberation brings us to a fascinating new technology: SD-WAN.

MPLS vs. SD-WAN
Amid the buzz, there's a notion that cost-effective, flexible SD-WAN would render MPLS, the seemingly sluggish dinosaur, obsolete. However, both technologies have their roles to play in modern WANs. SD-WAN leverages Software Defined Networking (SDN) principles within the WAN, deploying SD-WAN edge devices to apply rules and policies that determine the best path for traffic. SD-WAN functions as a transport-agnostic overlay capable of routing any traffic type, including MPLS. Its advantage lies in enabling an enterprise WAN-traffic architect to centrally apply policies across all WAN devices.

How does SD-WAN work? By contrast, with MPLS, predetermined routes need to be painstakingly provisioned, and once the fixed circuits are up, making changes is not a point-and-click exercise. But once an MPLS network is deployed, it delivers guaranteed performance for real-time traffic. SD-WAN can route traffic along the most efficient path, but once those IP packets hit the open Internet, there are no performance guarantees. The most sensible strategy going forward will be to offload as much MPLS traffic as possible to the public Internet but continue to use MPLS for time-sensitive applications that require guaranteed delivery. Nobody wants to get caught in the crosshairs when the CEO's monthly videoconference with branch office employees drops off mid-sentence.

More about MPLS: •

MPLS or IPsec VPN: Which is better?



Is SD-WAN better than MPLS?



Internet VPN or MPLS for branch office IP phone communications?

3.3.3 Virtual routing Dynamic and static routing The routing table in dynamic routing is modified in response to network changes. When routes become delayed or unavailable, a dynamic routing system can "self-heal." The routing table is manually handled with static routing. The routing table must be manually edited by an administrator. Because cloud settings change so frequently, virtual routing is frequently dynamic. As clients build new IP networks, virtual routing tables must be automatically updated to handle network changes.

Virtual network interface controller (vNIC) A network interface controller (NIC) is a physical device that allows a host to communicate with other hosts over a network. Most cloud resources are virtual in nature and must share a network interface card (NIC) to communicate on the network. The resource is given a vNIC that is linked to the physical NIC.

Subnetting
To understand subnetting, you must first understand the fundamentals of IPv4 addresses. An IPv4 address is made up of four numbers separated by a dot character, such as 192.168.100.25. Each number represents an octet, which can be written as a binary value: 11000000.10101000.01100100.00011001
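The following sketch uses Python's standard ipaddress module to display the octets of the address above in binary and to split a /24 network into smaller subnets, which is the essence of subnetting.

# Show an IPv4 address octet-by-octet in binary, then split a network into
# smaller subnets, using only the standard library.
import ipaddress

addr = ipaddress.ip_address("192.168.100.25")
print(".".join(f"{octet:08b}" for octet in addr.packed))
# -> 11000000.10101000.01100100.00011001

network = ipaddress.ip_network("192.168.100.0/24")
print(network.netmask)                        # 255.255.255.0
print(list(network.subnets(prefixlen_diff=2)))
# -> four /26 subnets: 192.168.100.0/26, .64/26, .128/26, .192/26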

Virtual Routing



Static Routing – only feasible in small networks



Dynamic- Uses routing protocols, OSPF, EIGRP, BGP

Each VPC network utilizes a scalable and distributed virtual routing mechanism, without the assignment of a physical device to the network. While some routes can be selectively applied, the routing table for a VPC network is defined at the network level. Dynamic Routing Protocols Routing protocols facilitate routers automatically sharing their optimal paths to known networks. Routers leverage this information to determine their own best routes to these destinations. In response to network changes, such as link failures or the addition of new subnets, routers communicate with one another, automatically recalculating the best paths and updating the routing table to reflect these alterations. Routing protocols are more scalable than administrator defined static routes. Moreover, using purely static routes is only feasible in very small environments. Reasons for using Summary Routes: •

Summary routes lead to less memory usage in routers because their routing tables contain fewer routes. In a small network this wouldn't make much difference, but it can make a big difference in large networks.

The reasons for using Dynamic Routing Protocols are the following: ●

Automatic Subnet Advertisement: Dynamic routing protocols enable routers to automatically share information about available subnets with each other. This eliminates the need for administrators to manually input routes on each router, which can be a cumbersome and time-consuming process when using static routing.



Adaptation to Subnet Changes: If a subnet is added or removed from the network, routers equipped with dynamic routing protocols will autonomously detect and accommodate this change by updating their routing tables.



Fault Tolerance: Dynamic routing protocols enhance network fault tolerance. In the event that the best path to a subnet becomes unavailable, routers with dynamic routing capabilities can detect this and calculate an alternative best path if one is available. In contrast, static routes require manual configuration by administrators and are less resilient in case of failures.



In practice, a combination of dynamic routing protocols and static routes is often employed. Dynamic routing protocols handle most of the network information, while static routes may be used selectively, such as for backup purposes or for establishing a static route to the Internet. These static routes are typically injected into the dynamic routing protocol and shared with other routers in the network.

On the edge router, you can propagate the default static route into the routing protocol, and the routing protocol will then carry it through the rest of your network. There is no need to configure a default static route on all of your routers, just the one that is on the edge.

3.3.4 Network appliances Load balancers •

Subnetting- breaking up network into smaller networks.



Virtual Appliances in the Cloud



Load Balancers- manage traffic amongst servers/clients.

Additional Information If definitions are helpful to you, use these vocabulary terms to get you started: ●

Address: A unique numeric identifier assigned to a host or interface within a network.



Subnet: A segment of a network that shares a specific subnet address.



Subnet mask: A 32-bit combination used to determine which portion of an address refers to the subnet and which part pertains to the host.



Interface: A network connection or communication point.

If you've already obtained your valid address from the Internet Network Information Center (InterNIC), you're ready to proceed. However, if you don't intend to connect to the Internet, Cisco strongly recommends using reserved addresses specified in RFC 1918.

Firewalls • • • • • •

Software based, cloud deployed, or network based
Control access between networks
Stateful Packet Inspection
Packet Filters
Next Gen Firewalls
Web Application Firewall

Cloud firewalls are network security solutions that operate in the cloud, offering protection against unauthorized access to private networks. These software-based devices are specifically designed to address the requirements of contemporary businesses and are integrated into online application environments.

A virtual firewall in the cloud that protects your cloud infrastructure and services is an entirely different beast from FWaaS designed to protect your network perimeter and remote or traveling end users. ... They protect your cloud servers against malicious traffic or attacks coming from the outside.

There are three basic types of firewalls that are used by companies to protect their data and devices and to keep destructive elements out of the network, viz. packet filters, stateful inspection firewalls, and proxy server firewalls.
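As a concrete illustration of a cloud firewall rule, the following minimal sketch opens inbound HTTPS on a security group using the AWS API. It assumes an AWS account and the boto3 SDK; the security group ID shown is hypothetical.

# A minimal sketch of a cloud firewall rule: allow inbound HTTPS to a security
# group. Assumes an AWS account and the boto3 SDK; the group ID is hypothetical.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",        # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "allow HTTPS"}],
    }],
)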

3.3.5 Virtual private cloud (VPC) •

VPC networks



Routes and firewall rules



Global Resources



VPCs can communicate with each other.



Cloud VPC



Google API



AWS API

A Virtual Private Cloud (VPC) network is a digital representation of a physical network, deployed within Google's production network infrastructure using Andromeda technology. VPC networks offer the following capabilities: •

Connectivity: They enable connectivity for various Google Cloud products like Compute Engine virtual machine (VM) instances, Google Kubernetes Engine (GKE) clusters, App Engine flexible environment instances, and other services built on Compute Engine VMs.



Load Balancing: VPC networks support native Internal TCP/UDP Load Balancing and proxy systems for Internal HTTP(S) Load Balancing.



Integration: They can connect to on-premises networks using Cloud VPN tunnels and Cloud Interconnect attachments.



Traffic Distribution: VPC networks facilitate the distribution of traffic from Google Cloud external load balancers to backend resources.

Projects can contain multiple VPC networks, and new projects typically start with a default network (an auto mode VPC network) that includes one subnet.

Key Specifications of VPC Networks:

Global Resources: VPC networks, along with their associated routes and firewall rules, are global resources and are not tied to a specific region or zone.



Regional Subnets: Subnets, which define IP address ranges, are regional resources.


Firewall Rules: Traffic to and from instances can be controlled using network firewall rules, which are implemented on the VMs themselves.

Internal Communication: Resources within a VPC network can communicate using internal IPv4 addresses, subject to applicable network firewall rules.



Access to Google APIs: Instances with internal IP addresses can communicate with Google APIs and services.



Security: Network administration can be secured through Identity and Access Management (IAM) roles.



Shared VPC: Organizations can use Shared VPC to maintain a VPC network in a common host project, allowing authorized IAM members from other projects in the same organization to create resources using subnets from the Shared VPC network.



VPC Peering: VPC networks can be connected to other VPC networks in different projects or organizations through VPC Network Peering.



Hybrid Environments: VPC networks support secure connections in hybrid environments using Cloud VPN or Cloud Interconnect.



GRE Traffic: VPC networks support GRE traffic (beta) for services like secure access service edge (SASE) and SD-WAN. However, some exceptions apply, such as traffic on Cloud VPN, Cloud Interconnect, Cloud NAT, and forwarding rules for load balancing and protocol forwarding.



IPv4 Support: VPC networks exclusively support IPv4 unicast traffic and do not handle broadcast, multicast, or IPv6 traffic within the network. VMs in the VPC network can send and receive traffic only to and from IPv4 sources and destinations. However, it is possible to create an IPv6 address for a global load balancer.

Hub and spoke
● Hub - a virtual network that is the central point of connectivity.
● Spokes - virtual networks that peer with the hub.
● Spokes can be used to isolate workloads.
● Traffic flows between the on-premises network and the hub through an ExpressRoute or VPN connection.

This reference architecture illustrates the setup of a hub-and-spoke topology within the Azure cloud environment. In this architecture, the hub represents a virtual network in Azure that serves as a central point for connecting with your on-premises network. The spokes, on the other hand, are virtual networks that establish peering connections with the hub and can be employed to segregate workloads. Traffic between the on-premises datacenter and the hub is facilitated via an ExpressRoute or VPN gateway connection.


Peering

Direct connection between company network and cloud provider



High throughput traffic



Must request egress pricing once peering is enabled.



Direct Peering allows you to establish a direct peering connection between your corporate network and Google's edge network, facilitating the exchange of high-throughput cloud traffic.



This feature is accessible at more than 100 locations spread across 33 countries worldwide. You can find further details about Google's edge locations on Google's peering site.



Once set up, Direct Peering establishes a direct route from your on-premises network to Google services, encompassing various Google Cloud products that may be accessible via one or more public IP addresses. Traffic from Google's network to your on-premises network also follows this direct path, including traffic originating from VPC networks within your projects. Google Cloud customers need to request the activation of direct egress pricing for each of their projects after establishing Direct Peering with Google.

It's important to note that Direct Peering operates independently of Google Cloud. For accessing Google Cloud, unless you require access to Google Workspace applications, it's recommended to use Dedicated Interconnect or Partner Interconnect as the preferred methods.

3.3.6 VLAN/VXLAN/GENEVE
● VXLAN - Virtual Extensible LAN
● VLAN - Virtual LAN (802.1Q)
● GENEVE - Generic Network Virtualization Encapsulation

Virtual Extensible LAN (VXLAN)

VXLAN is a network virtualization technology designed to address scalability challenges encountered in large cloud computing deployments. It employs a VLAN-like encapsulation method to wrap OSI layer 2 Ethernet frames within layer 4 UDP datagrams, typically using port number 4789 as the default IANA-assigned destination UDP port. The endpoints responsible for terminating VXLAN tunnels, whether they are virtual or physical switch ports, are referred to as VXLAN tunnel endpoints (VTEPs).

VXLAN represents an advancement in the development of overlay encapsulation protocols, offering enhanced scalability with support for up to 16 million logical networks and facilitating layer 2 adjacency across IP networks. To propagate broadcast, unknown unicast, and multicast (BUM) traffic, VXLAN employs either multicast or unicast with head-end replication (HER).

Originally conceived by VMware, Arista Networks, and Cisco, VXLAN technology has garnered support from various stakeholders in the industry, including Huawei, Broadcom, Citrix, Pica8, Big Switch Networks, Cumulus Networks, Dell EMC, Ericsson, Mellanox, FreeBSD, OpenBSD, Red Hat, and Juniper Networks. VXLAN's standardization was documented in RFC 7348 by the IETF. It uses a MAC-in-UDP packet encapsulation mode.


Open vSwitch is an example of a software-based virtual network switch that provides support for VXLAN overlay networks. Generic Network Virtualization Encapsulation (Geneve), on the other hand, is another encapsulation protocol established by the IETF to unify the efforts of initiatives like VXLAN and NVGRE, aiming to streamline the proliferation of encapsulation protocols. Like VXLAN, Open vSwitch can also be used to implement Geneve overlay networks.
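To see why the 24-bit VXLAN Network Identifier (VNI) supports roughly 16 million segments, the following sketch assembles the 8-byte VXLAN header described in RFC 7348 using only the Python standard library. It is a teaching aid under that assumption, not a production encapsulation path.

import struct

VXLAN_PORT = 4789          # IANA-assigned destination UDP port
MAX_VNIS = 2 ** 24         # 24-bit VNI field -> 16,777,216 logical networks

def build_vxlan_header(vni: int) -> bytes:
    """Return the 8-byte VXLAN header for a given VXLAN Network Identifier."""
    if not 0 <= vni < MAX_VNIS:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # the I flag set: a valid VNI is present
    # Header layout: flags (1 byte), reserved (3 bytes), VNI (3 bytes), reserved (1 byte)
    return struct.pack("!B3x", flags) + vni.to_bytes(3, "big") + b"\x00"

header = build_vxlan_header(vni=5001)
print(f"{MAX_VNIS:,} possible VNIs, UDP port {VXLAN_PORT}, header bytes: {header.hex()}")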

3.3.7 Single root input/output virtualization (SR-IOV)

Peripheral Component Interconnect Express (PCIe) is a physical system connection used to attach devices such as graphics cards, network cards, sound cards, and other similar devices. SR-IOV can be used in a virtual environment to present a single PCIe device as several virtual devices. This is accomplished by offering virtual functions that allow for distinct configurations for each virtual machine. An SR-IOV network card, for example, could be presented as four distinct network cards, each with its own network port that can be configured differently for four distinct virtual machines.

3.3.8 Software-defined network (SDN)

Historically, networks were made up of physical hardware (routers, switches, firewalls, and so on) that was controlled by proprietary software that performed the necessary functions. The use of proprietary network hardware resulted in a number of constraints, which software-defined networking, or SDN, is intended to eliminate or minimize.

These networking devices' principal function is to forward (or not forward, depending on the device's rules) data packets from one network to another. To carry out these operations, a network administrator must manage the devices, which typically necessitates the use of bespoke proprietary commands that run directly on the network device.

SDN decouples network management activities (the control plane) from network forwarding (the data plane), resulting in a number of benefits, including:
● Centralized network management: SDN enables the use of a single system to manage a group of network devices, as opposed to having administration tied to each individual network device.
● Lower costs: SDN provides open and decoupled systems that can run third-party applications without the need for expensive proprietary interfaces. Integration capabilities can be built using APIs and SDKs. SDN devices can be made available as cloud-based services.
● Agility: One of the issues with legacy network devices is that a change in network structure necessitated the modification of one or more network devices (sometimes by manually logging in to the device and executing a command). SDN facilitates and, in many circumstances, automates these adjustments.
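A minimal sketch of the control-plane/data-plane split follows. The Controller and Switch classes and their method names are hypothetical; the point is simply that one central component pushes forwarding rules to every device instead of an administrator configuring each box by hand.

class Switch:
    """Data plane: forwards traffic according to rules pushed by the controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination network -> egress port

    def install_rule(self, dest_net, port):
        self.flow_table[dest_net] = port

    def forward(self, dest_net):
        port = self.flow_table.get(dest_net, "drop")
        print(f"{self.name}: traffic to {dest_net} -> {port}")

class Controller:
    """Control plane: central point of management for the whole fabric."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, dest_net, port):
        # One call updates every device instead of per-box CLI changes.
        for sw in self.switches:
            sw.install_rule(dest_net, port)

fabric = [Switch("leaf-1"), Switch("leaf-2")]
ctrl = Controller(fabric)
ctrl.push_policy("10.20.0.0/16", "uplink-1")
for sw in fabric:
    sw.forward("10.20.0.0/16")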


3.4 In a given situation, determine the suitable compute resource allocation for a deployment.

3.4.1 Virtualization

● Virtualization - to create a virtual version of a device, resource, or OS.
● Hypervisor:
● Type 1 - bare-metal hypervisor; runs directly on the physical hardware.
● Type 2 - hosted hypervisor; runs within an operating system that is installed on the physical hardware.

Virtualization is the creation of a virtual version of a device or resource, such as an operating system, network, or storage device. Virtualization is making headway in three major areas of IT: network virtualization, storage virtualization, and server virtualization. The core advantage of virtualization is that it enables the management of workloads by significantly transforming traditional computing to make it more scalable. Virtualization allows CSPs to provide server resources as a utility rather than a single product.

Hypervisor (Virtual Machine Monitor, VMM): a software application that manages multiple operating systems, enabling physical devices to share resources among virtual machines (VMs) running on top of the physical hardware. The hypervisor manages the system's processor, memory, and other resources. A Type 1 hypervisor, or bare-metal hypervisor, runs directly on top of the physical hardware. A Type 2 hypervisor, or hosted hypervisor, runs within an operating system that is running on the physical hardware.

Simultaneous multi-threading (SMT)
● Multithreading - the term is ambiguous because not only can multiple threads run on a CPU core, but multiple tasks with different attributes can run on it as well.
● Simultaneous multi-threading is one of the two main implementations of multithreading.
● SMT is energy efficient, even on low-end processors.

Simultaneous Multithreading (SMT) is a technique aimed at enhancing the overall efficiency of superscalar CPUs featuring hardware multithreading. SMT enables the concurrent execution of multiple independent threads of execution, effectively leveraging the resources offered by modern processor architectures.

The term "multithreading" might seem ambiguous, as it not only allows multiple threads to run simultaneously on a single CPU core but also permits multiple tasks with distinct attributes (such as different page tables, task state segments, protection rings, I/O permissions, etc.) to operate independently, all on the same core. Despite sharing a core, these threads and tasks remain completely isolated from each other. Multithreading is conceptually akin to preemptive multitasking but operates at the thread level within contemporary superscalar processors.

Simultaneous multithreading (SMT) is one of the primary implementations of multithreading, with the other form known as temporal multithreading (or super-threading). In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time. In SMT, instructions from multiple threads can be executed concurrently in a single pipeline stage. Achieving this doesn't require significant alterations to the core processor architecture; it primarily necessitates the capability to fetch instructions from multiple threads in a single cycle and a larger register file to store data from multiple threads. The number of concurrent threads supported can vary, with two concurrent threads per CPU core being common, although some processors can accommodate up to eight concurrent threads per core.

Due to the inevitable increase in contention for shared resources, assessing or reaching a consensus on the effectiveness of SMT can be challenging. However, studies measuring the energy efficiency of SMT in conjunction with parallel native and managed workloads on historical 130 nm to 32 nm Intel SMT (hyper-threading) implementations have found that, especially in 45 nm and 32 nm implementations, SMT offers exceptional energy efficiency, even on in-order Atom processors. In modern systems, SMT efficiently capitalizes on concurrency with minimal additional dynamic power consumption. This implies that, even when the performance gains are modest, the power savings can be substantial.

Some researchers have demonstrated that the extra threads in SMT can be utilized proactively to populate a shared resource like a cache, thereby enhancing the performance of another single thread. They argue that this illustrates how SMT doesn't merely boost efficiency but can also serve purposes like providing redundancy for error detection and recovery. Nonetheless, in most contemporary scenarios, SMT primarily focuses on mitigating memory latency, improving efficiency, and increasing the computational throughput achieved per unit of hardware utilized.

Dynamic allocations
● Dynamic memory allocation is a memory management technique where a program can request and return memory while executing.
● In a virtualized environment, available memory on a physical host is pooled and distributed to the VMs that are running on that host.
● Best suited for workloads that have regular and predictable fluctuations in memory demands.

Dynamic memory allocation is a memory management technique in which a program can request and return memory while it is executing. In a virtualized environment, available memory on a physical host is pooled and distributed to virtual machines (VMs) that are running on that host when needed. If a virtual machine isn't using all of the memory it's been allocated, the host may allocate one VM's idle memory to another VM. Dynamic memory allocation is best suited for workloads that have regular and predictable fluctuations in memory demands.

Dynamic Memory is a memory allocation feature for Microsoft Hyper-V that pools the host's memory and sends resources to virtual machines (VMs) that need it. When Hyper-V Dynamic Memory is enabled, the VM declares how much memory it needs and the host provides that quantity plus an extra amount to be used as a buffer. Dynamic Memory, which can be compared with VMware's memory overcommit feature, rebalances the host's memory in one-second intervals and allows VMs to be prioritized in terms of memory usage.

Oversubscription
● Sometimes called overcommitment
● Oversubscription is common for CPU and RAM
● Oversubscribing RAM is the more controversial practice
● CPU and storage are often overcommitted

Overcommitting RAM is regarded as one of the more contentious strategies for resource allocation. While it's common to overcommit CPU and storage resources, a more cautious approach is usually taken when dealing with RAM overcommitment. You can overcommit CPU resources by assigning more vCPUs than you have physical processors. For example, if you have 8 VMs that each require a single CPU, you could assign them to only four physical CPUs, depending on CPU utilization.

VMware has published guidelines for commit ratio calculations:
● 1:1 to 3:1 - no problem
● 3:1 to 5:1 - may cause performance degradation
● 6:1 or greater - is going to create problems

It is common for VMs to be sized conservatively by the application team. The result is that many VMs have too many vCPUs and vRAM. A larger VM does not necessarily mean a better-performing one. Once VMs are in production, it can be difficult to right-size them as it requires downtime and testing.
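A small worked example of the commit-ratio arithmetic discussed above, using the guideline ranges quoted earlier. The host sizes are invented for illustration.

def vcpu_commit_ratio(total_vcpus: int, physical_cores: int) -> float:
    """Ratio of allocated vCPUs to physical cores on a host."""
    return total_vcpus / physical_cores

def assess(ratio: float) -> str:
    # Thresholds follow the guideline ranges quoted above.
    if ratio <= 3:
        return "no problem expected"
    if ratio <= 5:
        return "may cause performance degradation"
    return "likely to create problems"

# Example: 8 single-vCPU VMs placed on a host with 4 physical cores.
ratio = vcpu_commit_ratio(total_vcpus=8, physical_cores=4)
print(f"commit ratio {ratio:.1f}:1 -> {assess(ratio)}")  # 2.0:1 -> no problem expected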

3.4.2 Central processing unit (CPU)/virtual CPU (vCPU)

vCPU –virtual central processing unit



One or more vCPUs assigned to every VM in a cloud environment



Each one is seen as a single physical CPU core by a VM OS


A vCPU stands for virtual central processing unit. One or more vCPUs are assigned to every virtual machine (VM) within a cloud environment. Each vCPU is seen as a single physical CPU core by the VM's operating system.

3.4.4 Graphics processing unit (GPU)
● GPU - graphics processing unit
● Enables customers to meet the specialized compute requirements of certain advanced applications.
● Virtual
● Shared - multiple VMs share the same physical GPU.
● Passthrough - each GPU is dedicated to a single VM; there is no sharing of GPUs among the VMs on the server.

Cloud GPU is a supplementary graphics processing unit (GPU) option that enables customers to meet the specialized compute requirements of certain advanced applications. GPUs accelerate computational tasks such as machine learning and high-performance computing, are offered in a diverse range to cater to various performance levels and budgets, and come with adaptable pricing structures and customizable machine configurations to enhance workload efficiency.

Key features:
● A variety of GPU options: NVIDIA K80, P100, P4, T4, V100, and A100 GPUs offer diverse computing solutions to match your specific cost and performance requirements.
● Adaptable performance: Effectively fine-tune the processor, memory, and high-performance storage, and accommodate up to 8 GPUs per instance to align with your unique workload. With per-second billing, you only incur charges for the duration of your usage.
● Full access to Google Cloud advantages: Execute GPU workloads on the Google Cloud Platform, granting you access to cutting-edge storage, networking, and data analytics technologies.

It's important to note that the pass-through feature for GPUs operates independently, without the need for any third-party software drivers being loaded into the ESXi hypervisor. When using pass-through mode, each GPU device is dedicated to that VM, and there is no sharing of GPUs among the VMs on a server. The main reasons for using the pass-through approach to exposing GPUs on vSphere are:
● You are taking your first steps toward exposing GPUs in virtual machines to move end users away from storing their data and executing workloads on physical workstations.
● There is no need for sharing the GPU among different VMs, because a single application will consume one or more full GPUs.
● You need to replicate a public cloud instance of an application but using a private cloud setup.

However, it's important to note that certain vSphere features such as vMotion, Distributed Resource Scheduling (DRS), and snapshots are not compatible with this GPU configuration for virtual machines.

Virtual
In a cloud computing context, the hypervisor often provides virtual machines with a virtual GPU. (Note that "shared" is a CompTIA exam sub-objective.) The virtual GPU shares the physical GPU when numerous virtual machines use the same physical system.

Pass-through
The hypervisor can be configured to allow pass-through access to the GPU in particular instances. This means the virtual machine has direct access to the GPU. In this case, the virtual machine with pass-through access is the only virtual machine on the system that can make use of the GPU. While this option provides greater power and functionality to the virtual machine, it is also the most expensive.

3.4.5 Clock speed/Instructions per cycle (IPC)

IPC- Instructions Per Cycle



Also called instructions per clock



Processor performance aspect: the number of tasks a CPU can execute in a single clock cycle.



Faster clock speed can complete more cycles in a second.

"In computer architecture, instructions per cycle (IPC), often referred to as instructions per clock, represents a crucial aspect of a processor's performance. It denotes the average number of instructions that a CPU can execute within a single clock cycle. IPC is essentially the reciprocal of cycles per instruction (CPI). IPC, or instructions per cycle/clock (IPC/Clock), provides insights into the CPU's capability during each clock cycle. While clock speed measures how many cycles a CPU can complete in a second, IPC quantifies the number of tasks a CPU can perform in each cycle. For instance, a CPU with a higher clock speed can accomplish more cycles in each second, but a CPU with a greater IPC, even with a lower clock speed, might still be capable of executing more tasks in the same time frame. In essence, the overall speed of a CPU is influenced by a combination of its clock speed, IPC, and the number of cores it possesses."

3.4.6 Hyperconverged
● HCI - a software-defined, unified system
● Combines storage, compute, networking, and management

Hyper-converged infrastructure (HCI) is a unified and software-defined system that consolidates the components of a conventional data center, encompassing storage, computing, networking, and management. This comprehensive solution leverages software alongside x86 servers, effectively supplanting costly, specialized hardware.

Hyper-converged architecture

● An IT framework that combines storage, computing, and networking into a single system
● Multiple nodes can be clustered to create pools of shared resources
● SDS - software-defined storage
● Virtualized networking

Hyper-convergence is an IT framework that combines storage, computing, and networking into a single system to reduce data center complexity and increase scalability. Hyper-converged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking, and they typically run on standard, off-the-shelf servers. Multiple nodes can be clustered together to create pools of shared compute and storage resources, designed for convenient consumption.

The use of commodity hardware, supported by a single vendor, yields an infrastructure that's designed to be more flexible and simpler to manage than traditional enterprise storage infrastructure. For IT leaders who are embarking on data center modernization projects, hyper-convergence can provide the agility of public cloud infrastructure without relinquishing control of hardware on their own premises.

3.4.7 Memory

It is critical to know how much memory (RAM) the system will require while installing a virtual machine in the cloud. The amount of memory assigned to a virtual machine affects the cost of running the virtual machine.

Dynamic allocation

The concept of dynamic allocation of memory, like the concept of dynamic allocation of CPUs, indicates that RAM is delivered to a virtual machine only when it is requested. For example, you can launch a virtual machine based on an instance with 8 GB of RAM, but the hypervisor does not automatically reserve that amount of RAM for the virtual machine. As the operating system demands extra memory, the hypervisor allocates it dynamically, up to the maximum allowed by the instance type.

Ballooning

Consider the following scenario: a virtual machine is currently using 6 GB of RAM. Applications that require this RAM are running; however, after a few hours the processing job is finished, and the operating system no longer requires 2 GB of that RAM. The reason could be that the application terminated or that the application "returned" the RAM to the operating system. In either case, the operating system no longer requires 2 GB of its RAM at the moment. It is important to note that a driver on the virtualized operating system keeps track of how much memory is in use, and the hypervisor can query this driver.

Another virtual machine now demands more memory, but the hypervisor determines that there is no free RAM available. After interrogating the other guest operating systems, the hypervisor identifies the 2 GB that is no longer required in the first virtual machine. The hypervisor reclaims this memory through a process known as ballooning and makes it available to the second operating system. Ballooning can benefit cloud vendors because it allows for oversubscription, but it can cause performance concerns when there are many guest virtual machines that occupy the majority of the RAM allowed for their instance type.

Physical and Virtual Memory

Virtual machines will consume RAM on the host server. The amount of memory required will depend on the number of virtual machines and how they are individually configured. The memory will reside on the motherboard of the server as physical circuit cards. Care must be taken by the cloud provider when implementing the servers so that adequate memory is installed on the server for the VMs that are being hosted. Additional memory should also be installed to accommodate future growth and the needs of the hypervisor itself. Modern server designs can accommodate ever-increasing memory density. In addition to memory sizes, other parameters considered are access speeds and error correction capabilities.

Bursting and Ballooning: How Memory Is Handled

Memory ballooning is a hypervisor feature that permits the hypervisor to recover unutilized memory from a virtual machine (VM) operating atop it and subsequently assign this reclaimed memory for other purposes. This capability enables the hypervisor to effectively optimize the installed RAM within the system by recycling unused memory resources from VMs.

Keep in mind that the hypervisor sits between the VMs and the physical hardware. Also, the operating systems on the virtual machines think they are talking to the actual hardware in the chassis and not to a piece of hypervisor software. The hypervisor can take advantage of this by restricting what hardware resources each VM can access and balancing those resources between the virtual machines running on top of the hypervisor. For example, while a VM may have 4 GB of memory allocated to it by the hypervisor, the VM sees that as real memory on a server. The hypervisor can use the ballooning process to reclaim 2 GB of memory from this VM for use elsewhere, and the VM continues to run with the memory that remains allocated to it.
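The following sketch models ballooning in a deliberately simplified, hypothetical way: the hypervisor checks how much of a guest's allocated memory is idle, inflates the balloon by that amount, and can then grant the reclaimed memory to another VM. Class and method names are invented for illustration.

class GuestVM:
    def __init__(self, name, allocated_gb, in_use_gb):
        self.name = name
        self.allocated_gb = allocated_gb  # what the guest thinks it owns
        self.in_use_gb = in_use_gb        # what its applications actually need
        self.balloon_gb = 0               # memory "inflated" away by the balloon driver

    def idle_memory(self):
        return self.allocated_gb - self.in_use_gb - self.balloon_gb

class Hypervisor:
    def reclaim(self, vm, needed_gb):
        """Inflate the balloon in an idle guest and return the reclaimed amount."""
        reclaimable = min(vm.idle_memory(), needed_gb)
        vm.balloon_gb += reclaimable
        return reclaimable

vm1 = GuestVM("vm1", allocated_gb=8, in_use_gb=6)  # 2 GB sits idle after a job finishes
vm2 = GuestVM("vm2", allocated_gb=8, in_use_gb=8)  # needs more memory

host = Hypervisor()
moved = host.reclaim(vm1, needed_gb=2)
print(f"Reclaimed {moved} GB from {vm1.name}; it can now be granted to {vm2.name}.")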

3.5 Given a scenario, perform cloud migrations.

3.5.1 Physical to virtual (P2V)

● Physical to virtual (P2V) - taking a server running an OS and applications and migrating it to a VM running on top of a hypervisor.
● May require reinstalling the OS, applications, and files from scratch.
● Many companies offer utilities such as VMware vCenter Converter.

When undertaking a migration, you have several options. A physical-to-virtual (P2V) migration means taking a server that is running an operating system and applications and then migrating it to a VM running on top of a hypervisor. A P2V migration may require reinstalling the operating system, application, and data files onto a new VM from scratch. Many companies offer software utilities, such as VMware vCenter Converter and Microsoft's Virtual Machine Manager, that can perform the conversion. Several third-party software companies and cloud providers offer fully automated P2V utilities.

3.5.2 Virtual to virtual (V2V)
● Virtual-to-virtual (V2V) migrations are more straightforward.
● Clone the existing VM and install the image at the hosting center.

Virtual-to-virtual (V2V) migrations are much more straightforward than a P2V migration. Generally, a V2V migration involves cloning the existing VM and installing that image at the cloud provider's hosting center. Because the host operating system is already virtualized, a V2V migration is frequently easier than a P2V migration. There are also certain virtualization standards that can help with this procedure. The Open Virtualization Format (OVF), for example, is a standard that is not connected to any one hypervisor. It establishes a standard for defining (packing) virtual appliances (software that is run in a virtual environment). OVF is supported by many notable hypervisor vendors, including VMware, VirtualBox, Red Hat Enterprise Virtualization, and Oracle VM.

3.5.3 Cloud-to-cloud migrations
● Hybrid cloud - multiple platforms
● Cloud-to-cloud migrations move workloads from one cloud provider (or cloud application) to another cloud provider.
● A great deal of research needs to be done before performing the migration.
● Cloud vendors may have different standards between technologies such as IaaS and PaaS.
● If a cloud provider offers a unique application, other vendors may not provide the same service.

In a hybrid cloud, more than one cloud service is utilized. Given that all the cloud providers' data centers are virtualized operations, is it possible to move workloads between them? The answer is often a definite "maybe" or "it depends"! This requires some background research and engineering to pull off. Where there is commonality between service providers, there are also proprietary implementations that may prevent a simple migration between cloud providers. The higher up the stack you go, from IaaS to PaaS to SaaS, the more difficult it will be to migrate. With IaaS, most of the cloud operations are under your direct control, which gives you the most flexibility to migrate. However, if the cloud provider controls the application, you may not have many options to migrate. There are companies and software applications that specialize in migrations between cloud providers and that make sure all the formats and configurations are compatible between the providers. It's also important to review the financials involved in a migration to make sure you do not have any long-term contractual commitments with the cloud provider you are migrating away from and are not forced to pay for stranded resources.


Vendor lock-in

Easy to migrate to one vendor, but hard to leave due to various dependencies.



May require substantial costs, technical work to move to another provider



Use open standards and build portable applications



Protect Business Critical Workloads

Vendor lock-in refers to a situation where a client is dependent on a specific cloud service provider and has no ability to move between vendors because there are no standardized protocols, APIs, data structures (schema), and/or service models. The challenge of vendor lock-in in cloud computing arises when customers find themselves reliant (or locked-in) on a single cloud provider's technology implementation, making it difficult for them to transition to a different vendor in the future without incurring significant expenses, encountering legal restrictions, or dealing with technical incompatibilities.

PaaS or SaaS migrations

1. Rehost the applications on Infrastructure as a Service (IaaS)
2. Refactor/re-architect the applications for Platform as a Service (PaaS)
3. Revise the applications for IaaS or PaaS
4. Rebuild the applications on PaaS
5. Replace the applications with Software as a Service (SaaS)

It is noteworthy that none of the aforementioned cloud migration strategies is a one-size-fits-all solution.

SaaS, PaaS, or IaaS: Monitoring Is Always King

Even when you give up all backend control with a SaaS migration, you must make sure you're keeping up with end-user experience monitoring. When problems occur, users will come to you to fix them (not your SaaS provider). By consistently overseeing your cloud-based applications, regardless of whether you're utilizing SaaS, PaaS, or IaaS, you can adopt a proactive stance when addressing various IT issues. As our dependence on cloud computing grows, ensuring business performance will increasingly hinge on your capacity to monitor the end-user experience.

To get the various benefits of cloud migration, you need to move all your applications to the cloud. And, to ensure a smooth transition of your applications from your in-house physical servers to the cloud infrastructure, you need to have a well-planned cloud migration strategy in place. When CIOs opt for cloud migration, the cloud architects or the internal IT team of the organization encounter numerous hurdles when it comes to selecting an appropriate strategy for migrating their applications to the cloud. This involves taking into account the specific business needs, as well as the IT goals and objectives of their companies, all while ensuring that the migration process doesn't disrupt company operations and minimizes associated risks.

This article outlines five important cloud migration strategies as mentioned by Richard Watson, Research Director of Gartner, at the Gartner Application Architecture, Development & Integration Summit. It's essential to emphasize that none of the cloud migration strategies outlined by Watson provides a one-size-fits-all solution. Therefore, the cloud architects within a company's in-house IT team must assess and comprehend their cloud migration process from multiple angles and criteria. This approach is crucial for minimizing risks and downtime while harnessing the full potential of the cloud to align with their companies' business objectives and cost constraints.

Here's a summary of the five cloud migration strategies proposed by Watson:

1. Rehost Your Applications on Infrastructure as a Service (IaaS)

For companies seeking a swift transition to the cloud, opting for the Infrastructure as a Service (IaaS) platform represents the most favorable cloud migration strategy. This approach involves the straightforward relocation of applications to an alternative hardware environment by modifying their infrastructure setup. As previously mentioned, the primary advantage of this cloud migration strategy lies in its ability to facilitate a rapid transition to the cloud without necessitating architectural changes. However, it's worth noting that this advantage comes at the cost of missing out on the scalability benefits provided by your cloud computing services provider.

2. Refactor Your Applications for Platform as a Service (PaaS)

Another cloud migration strategy that companies can choose to move their applications to the cloud is refactoring or re-architecting their applications for Platform as a Service (PaaS). This strategy is suitable for those companies who want to reuse the languages, frameworks, and containers they have invested in while leveraging the code which they consider strategic. The disadvantages of this strategy include missing capabilities, transitive risk, and framework lock-in. Hence, while choosing this strategy, companies should be aware of the fact that some of the capabilities their developers depend on could be missing from the PaaS offering, as its market is still in its early stage.

3. Revise Your Applications for IaaS or PaaS

Revising the applications for IaaS or PaaS is suitable for those companies who want to optimize their applications to leverage the cloud features of the provider's infrastructure. The drawback of this strategy is that it not only consumes a lot of time but also requires upfront expenses to first revise (modify or extend) the existing code base of their applications to support legacy modernization requirements and then use the rehost or refactor (re-architect) cloud migration strategy to move the applications to the cloud.

4. Rebuild Your Applications on PaaS

When companies opt to reconstruct their applications on a Platform as a Service (PaaS) infrastructure, they must discard their existing application's code and frameworks and then undertake a process of re-architecting or refactoring to migrate them to the cloud. The advantage of this cloud migration strategy lies in its capacity to enable companies to fully leverage the diverse innovative features of the cloud platform provided by the service provider. Furthermore, this approach aids companies in enhancing their developers' productivity, as they gain access to tools that facilitate customization of application templates, data models, metadata-driven engines, and communities that provide pre-built components. However, the primary drawback of this cloud migration strategy is that it compels companies to potentially switch or even relinquish some or all of their application assets in the event of pricing or technical alterations by the provider or breaches of service level agreements (SLAs), which may be unacceptable to them.

5. Replace Your Application with Software as a Service (SaaS)

The strategy of replacing the applications with Software as a Service (SaaS) is suitable for those companies who want to discard their existing application or a set of their applications to use better commercial software delivered as a service. The advantage of this cloud migration strategy is that it allows companies to avoid investment in mobilizing their development team when the requirements of their business operations change very quickly. Drawbacks associated with this strategy encompass inconsistencies in data semantics, challenges related to data access, and the risk of vendor lock-in.

Important Tip for Choosing a Cloud Migration Strategy

When you decide to migrate your applications to the cloud, make sure that your decision is not taken in a rush to join the cloud race. Instead, you should go for cloud migration only if it helps in the optimization of the following:
1. Your applications' performance and maintenance
2. The cost of running or maintaining your in-house IT infrastructure

Access control lists (ACLs)

An access control list is used to establish who has access to a certain resource in your cloud environment. A network access control list (NACL), for example, can include a list of IP addresses or IP networks that are allowed to access a VPC. NACL rules are linked to network ports (or protocols) and may allow or prevent access. There are several factors to consider while migrating from one cloud to another, including the following:

● Is the new vendor offering an ACL solution that is functionally equivalent to the previous vendor's ACL solution?
● Is it possible to move the present ACL's rules to the new vendor's ACL?
● Is the new vendor's ACL solution limited in any way that could lead to security issues?

Firewalls

Firewalls perform a similar role to ACLs, but at a much greater level of capability. There are also several sorts of firewalls that can be used in the cloud. A web application firewall (WAF), for example, is a Layer 7 firewall that is designed to protect your cloud applications from web-based vulnerabilities. There are also a variety of cloud-based firewall appliances available, many of which may be installed across several cloud provider settings.

Migrating from one cloud vendor to another might be difficult due to the complexity of firewall rules. The large range of firewall solutions also limits the possibility of an automated solution transferring a firewall from one cloud environment to another.

3.5.4 Storage migrations

Because storage migrations sometimes involve enormous volumes of data, much of which may contain proprietary information, transferring storage solutions from on-premises to the cloud presents certain issues. Take the following considerations into account while planning for a storage migration:
● Data integrity: There should be a technique in place to ensure that the original data is not corrupted during the migration process.
● Monitoring: To verify that all data is moved, you must employ a monitoring or logging system.
● Security: For most data it will be vital to ensure that it is encrypted during transport.
● Bandwidth: Even with plenty of internet capacity, transferring vast volumes of data might take a long time. Because this transfer may influence day-to-day business operations, bandwidth throttling or data transfer scheduling may be implemented. Some vendors additionally offer other data transfer methods, such as delivering data via large storage devices or out-of-band transfer paths securely connected to the cloud.
● Cost: Transferring data into a cloud storage system may incur costs depending on the storage option.

Block

Block storage solutions on-premises are those that are stored on disk volumes. When moving this type of data, you should think about the cloud storage option you want to use. Although it's rational to store your data in a cloud block storage solution, you should also contemplate the option of storing it in a file storage solution or even an object storage solution.

In terms of which method would be the simplest, a block-to-block transfer would normally necessitate the least amount of effort. This method also allows you to keep the original storage solution's file system structure (such as NTFS, ext4, and FAT). Maintaining the same structure may be necessary if the data on the current file systems is protected by file system features such as permissions.

A block-to-file migration method is often not much more complicated than a block-to-block approach, although transferring individual files may take longer. Transferring block data to object data, on the other hand, will be more difficult unless you use a tool offered by the cloud vendor. It is crucial to remember that elements such as file system permissions and file system structure (subdirectories or subfolder trees) are frequently lost during these types of migrations.

File

Because file storage solutions are extensively utilized on-premises and are network-based, migrating this data to a cloud solution is often one of the simplest migrations. Many cloud companies also provide tools to help with the migration process.

Normally, a file-to-block transfer is not a realistic or preferred solution. However, a file-to-object transfer is worthwhile to investigate because object storage solutions may provide advantages that file storage does not (ACLs, the capability to share via a URL, and so on).

Object

Object-based solutions on-premises, such as MinIO, Ceph, and LeoFS, can also be transferred to the cloud. The process of transferring this data to the cloud may appear straightforward, but object information, security, and access methods may complicate matters.

Performing a Server Migration

With the cloud being a predominantly virtualized environment, you should understand how to take your existing servers and applications running directly on their own hardware and migrate them to the virtualized world of hypervisors and VMs. In this section, you will learn about this process. We will start by showing you how to gather background information. Then you will learn about creating virtual images and address the issues of storage and how you will migrate the data for the application.

Careful and detailed planning is critical to a successful migration of operating systems off a physical server and onto a hypervisor as a virtual server. To establish what will be required as a virtual server, you must collect information such as CPU, memory, storage, and I/O needs from the existing server. You can do so by establishing a baseline of the server to be migrated. The monitoring and data collection process will vary as to what types of applications the servers are running and the time-of-day CPU, disk, or I/O workload increases that may need to be taken into consideration. Once the baseline requirements have been collected, you can use that valuable information to scale and configure the new virtual server's hardware profile.

When you are migrating from a physical to a virtual server environment, some downtime may be required. You can schedule standard data center maintenance windows for the changes to be implemented. Other changes such as IP addressing, VLAN, and VSAN configurations and Domain Name System (DNS) server settings may need to be changed at the same time that the server is being migrated from a physical to a virtual server. It is common to create an implementation document that covers all the required steps to be taken during the change and how to back those changes out if the change goes bad and needs to be reversed.

Migrating Your Storage Data

A server's data storage volumes can be migrated separately from or as part of a server migration. Storage can be moved from a Fibre Channel SAN to other environments, such as network-attached or direct-attached storage models. Data transferred between storage devices also allows for disaster recovery scenarios. The cloud service provider will work with you to make the transition to the cloud. This will vary based on the cloud provider's storage offerings and infrastructure.

When planning the storage migration, consider the amount of bandwidth and how long it will take to upload the storage data from your internal data center to the cloud. For example, if you are accessing the cloud over the Internet with a relatively slow connection, determine how long it will take to transmit the storage data or whether other options are available to transfer the data.

With the large amount of storage data residing in corporate data centers, it may not be feasible to upload the storage data to your new cloud-based storage solution. With petabyte-scale storage arrays, it could literally take years to transfer the data over a network. Many workarounds have been developed, including appliances that can be shipped to the private data center and directly connected to the storage network for a local transfer of stored data. Then the appliance can be shipped to the cloud service provider and the data directly transferred to the cloud storage arrays. There are options to use these appliances as local data stores in the private data center, with them performing the upload to the cloud as a background process. Each provider has unique offers to mitigate the delays encountered when transferring such large amounts of data, including shipping-container-sized storage systems that are pulled by large semi-tractors between the remote storage sites and the cloud data centers.
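A back-of-the-envelope calculation of transfer time helps decide whether a network upload is feasible or whether a shipped appliance makes more sense. The sketch below assumes decimal terabytes and an 80% effective link utilization; both figures are illustrative assumptions, not provider guarantees.

def transfer_days(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days needed to move data_tb terabytes over a link of link_mbps megabits/second."""
    bits = data_tb * 8 * 10**12             # decimal terabytes -> bits
    usable_bps = link_mbps * 10**6 * efficiency
    return bits / usable_bps / 86_400        # seconds per day

for size_tb in (10, 100, 1000):
    print(f"{size_tb:>5} TB over a 1 Gbps link: ~{transfer_days(size_tb, 1000):.1f} days")

At 1 Gbps, a petabyte-scale transfer runs into the hundreds of days, which is why appliance-based and container-sized transfer options exist.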

3.5.5 Database migrations

Database migration- many cloud providers have Database Migration tools.



Cloud can be an effective migration strategy.



Database replication in the cloud

Making migrations to Cloud SQL simple: now available in preview for MySQL, with limited access for PostgreSQL, and SQL Server coming soon.
● Migrate databases to Cloud SQL from on-premises, Google Compute Engine, and other clouds.
● Replicate data continuously for minimal-downtime migrations.
● Serverless and easy to set up.
● Available at no additional charge.

Commence your migration process effortlessly with a unified, integrated migration interface. Streamline the intricacies of migration to expedite your transition to Cloud SQL, ensuring a swift setup. Enjoy continuous data replication from source to destination for minimal-downtime migration. Serverless migrations eliminate surprises and are highly performant at scale.

Fast track to Cloud SQL: get all the operational benefits of fully managed MySQL or PostgreSQL, plus the enterprise availability, stability, and security that customers trust for their most mission-critical workloads.

Replicas serve as backup data copies that can be stored either in a local or remote location, functioning as alternative data repositories apart from your primary production operations. For instance, with numerous widely used databases, it is possible to generate one or more read replicas that can be situated in local or distant data centers. These replicas receive real-time updates from the master source, and the database can utilize them instead of relying solely on the master replica. This arrangement enables the database to scale and perform exceptionally well in terms of read operations.

Other applications, such as caching and content delivery services, rely on replicas stored locally that were downloaded from a distant source location for enhanced performance. Operating systems, storage management software, applications, and virtualization management systems offer storage replication, where data that is stored in a local volume, LUN, or array is automatically replicated to another system for availability and redundancy purposes. When a file is stored in the primary storage system, the replication software will automatically store the same file in another location, or replica.

Cross-service migrations

Relational Databases



Non-Relational Databases

Database Migration Service supports multiple secure, private connectivity methods to protect your data in transit. Upon completion of the migration process, data is automatically encrypted, and Cloud SQL offers multiple layers of security to fulfill even the most rigorous security standards. Application profiling technologies assist customers in gaining insight into every aspect of the migration process, starting with the initial mapping of legacy systems and extending to performance monitoring in the cloud following the migration's conclusion. The application profiling technology exhibits the following characteristics:

Application instrumentation end-to-end: Allows instrumentation of common and customer applications end-to-end, following business transactions from user device to application, infrastructure, and database. A business transaction is made up of the required services in a customer’s environment that are called upon to fulfill and deliver a response to a user-initiated request.



Profiling Application Performance: Offers performance profiling both before and after migration, encompassing baselines, peaks, and performance trends. Notable application performance metrics include latency at various application processing components and the response time for client API calls to the application.



Discovery of business transactions: Traces business transactions across all applications in the dependency chain for a full view of application usage and performance profiling prior to migration.



Unified monitoring: Provides a single-pane-of-glass view of application performance, from user device to application, infrastructure, and database in real-time.



Cross-environment monitoring: Provides monitoring of components and resources running on-premises and in the cloud at the same time.



Validation during migration: Simulates and measures user experience on AWS during migration by injecting synthetic application traffic or emulating user behaviors and monitoring performance of synthetic applications and detecting issues quickly.



Validation post-migration: Compares application performance and user experience in the source environment that was previously baselined, as well as the AWS environment post-migration.

Let's look at how Datadog's solution provides application profiling functions for workload migration to AWS. Datadog is a hosted software-as-a-service (SaaS) platform for cloud-scale monitoring and analytics. With infrastructure monitoring, application performance monitoring, log management, network performance monitoring, and user-experience testing, Datadog provides a comprehensive view of application performance in a single pane of glass.

Relational

A relational database stores data in tables that are made up of columns and rows. It is comparable to a spreadsheet in many ways, but there are some major distinctions. Each row's data is referred to as a record. A column with a unique value must exist in a relational database. This value is referred to as the key. Structured Query Language (SQL) databases are another name for relational databases. SQL is not a database type, but rather a language for creating, modifying, and querying data in a relational database. The relational component of relational databases is the feature that allows a connection (relation) to be established between distinct tables in the database.

Here are some of the benefits of relational databases:
● They work effectively with structured data.
● Query times are usually quite short.
● SQL functionality is available in a wide range of software solutions.
● The database format (rows and columns) is simple for people to grasp.
● There are numerous relational database management system (RDBMS) solutions available, both on-premises and in the cloud.

Non-relational

Non-relational databases, commonly known as NoSQL databases, store data in a nontraditional way. Non-relational databases are classified into four types:
● Document: This database stores data in the form of documents, which are often represented in a data format such as JSON, XML, or YAML.
● Columnar: Columnar databases are comparable to relational databases, but there are some major differences. In a relational database, for example, each record must have a value for each column. Columnar databases are exempt from this requirement.
● Key-value: In this sort of database, a key is associated with a value.
● Graph: Complex linkages (relationships) between data sets can be built in this form of database.
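To contrast the two models, the sketch below stores a hypothetical customer record first as a row in a relational table (using Python's built-in sqlite3 module) and then as a schema-free JSON document of the kind a document database would hold. The schema and values are invented for illustration.

import json
import sqlite3

# Relational: fixed columns, a primary key, and SQL queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp', 'us-east')")
row = conn.execute("SELECT name FROM customers WHERE region = 'us-east'").fetchone()
print("Relational query result:", row[0])

# Document: schema-free records; each document can carry different fields.
document = {
    "id": 1,
    "name": "Acme Corp",
    "contacts": [{"email": "ops@example.com"}],  # nested data, no extra table needed
}
print("Document record:", json.dumps(document))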


Practice Questions

QUESTION NO: 1 AWS S3 (Simple Storage Service), Google Cloud Storage, and IBM Cloud Object Storage are all examples of ____ storage. A. object B. file C. block D. secured

QUESTION NO: 2 Salesforce, Dropbox, Gmail, WebEx, and DocuSign are some examples of __________. A. SaaS B. IaaS C. STaaS D. PaaS

QUESTION NO: 3 When you deploy a virtual machine or another cloud resource, such as a database, you usually have a lot of questions to answer. A(n) ____ is a file that is used to automatically answer these questions. A. results B. template C. config D. answers


QUESTION NO: 4 Which of the following is not a standard cloud template file format? A. XLS B. XML C. YAML D. JSON

QUESTION NO: 5 Examples of ____ storage include AWS EBS (Elastic Block Storage), Microsoft Azure Blob storage, and Google Cloud Persistent Disks. A. secured B. file C. object D. block

QUESTION NO: 6 A cloud-based service is typically paid for in advance with a _____-based service. A. contract B. static C. subscription D. fixed

QUESTION NO: 7 The term bucket is used to organize data in _____-based storage solutions. A. object B. bit C. block D. file


QUESTION NO: 8 Which of the following are valid cloud computing tenant types? A. Dual-tenant B. Multitenant C. Single-tenant D. Remote-tenant

QUESTION NO: 9 ____ is a protocol that allows network devices to communicate with block storage devices. A. CIFS B. iSCSI C. NFS D. TONE

QUESTION NO: 10 A storage device's primary speed calculation is known as ______. A. SPD B. IOPS C. RWRO D. PEER

QUESTION NO: 11 _____ is a distributed file system protocol that was developed in 1984 and is a popular way for UNIX and Linux systems to share file systems across networks. A. iSCSI B. NFS C. CIFS D. TONE

QUESTION NO: 12 A ____ is a network-connected storage device that provides high-speed access to block storage. A. SNA B. SAN C. NIB D. NAW


QUESTION NO: 13 RAID ____ does not provide redundancy, but rather expands available storage by combining multiple hard disks (or partitions) into a single device. A. 1 B. 5 C. 0 D. 10

QUESTION NO: 14 You use your own on-premises storage devices with _____, but if they become full, the available space is supplemented with cloud-based storage. A. long-term storage B. hybrid storage C. flash storage D. spinning disks

QUESTION NO: 15 Any storage that is placed on an SSD is referred to as _____. A. Flash storage B. Long-term storage C. Spinning disks D. Hybrid storage

QUESTION NO: 16 A SAN is a network-connected storage device that provides high-speed access to ____ storage. A. local B. block C. file D. object

QUESTION NO: 17 Which of the following protocols are compatible with Fibre Channel (FC)? Select 2 answers. A. FibreC B. NVRAM C. NVMe-oF D. FCP


QUESTION NO: 18 A NAS is a network-connected storage device that provides high-speed access to ____ storage. A. local B. file C. object D. block

QUESTION NO: 19 Which stratum number would result in the most accurate clock value for NTP? A. 9 B. 1 C. 0 D. 15

QUESTION NO: 20 What DNS term best describes the process of converting an IP address into a domain name? A. FQDN B. Forward lookup C. DNS forwarder D. Reverse lookup

QUESTION NO: 21 The DNS structure's top level is simply referred to as _________. A. name server B. FQDN C. authoritative name server D. dot

QUESTION NO: 22 Which of the following components is commonly found in a VPC? A. Gateway B. Storage bucket C. Routing table D. Virtual machine

QUESTION NO: 23
Which of the following terms describe IP subnetting methods? Select 2 answers
A. VLSM
B. CIDR
C. AARP
D. DSLM

QUESTION NO: 24
What DNS term best describes the process of converting a domain name into an IP address?
A. Forward lookup
B. DNS forwarder
C. Reverse lookup
D. FQDN

QUESTION NO: 25
A(n) ____ returns results based on data stored locally on the system.
A. dot
B. authoritative name server
C. name server
D. FQDN

Answers to Practice Questions

1. Answer: A Explanation: Object storage is a feature that allows objects (unstructured data such as emails, videos, graphics, text, or any other type of data) to be stored in the cloud. Block storage is typically raw storage space that requires the installation of a file system before it can be used. File storage functions similarly to a network file system (also referred to as a NAS, network-attached storage). Secured is not an acceptable cloud storage type.

2. Answer: A Explanation: The entire application is hosted and maintained by the cloud vendor with a SaaS software solution. Salesforce, Dropbox, Gmail, WebEx, and DocuSign are all examples of SaaS.

3. Answer: B Explanation: When you deploy a virtual machine or another cloud resource, such as a database, you usually have a lot of questions to answer. A template is a file that is used to automatically answer these questions. The other options are invalid terms.

4. Answer: A Explanation: Cloud templates are commonly stored in a data structure. Almost always, the data format is an industry standard. JSON, YAML, and XML are the most common template formats. XLS is a spreadsheet format that is not suitable for templating.

5. Answer: D Explanation: Block storage is typically raw storage space that requires the installation of a file system before it can be used. File storage functions similarly to a network file system (also referred to as a NAS, network-attached storage). Object storage is a feature that allows objects (unstructured data such as emails, videos, graphics, text, or any other type of data) to be stored in the cloud. Secured is not a valid type of cloud storage.

6. Answer: C Explanation: A subscription-based service typically requires an upfront payment (most subscriptions are for one year) to gain access to a cloud-based service. Compare this to a pay-as-you-go model, where you only pay for the service as you use it. The other options are not viable service models.

7. Answer: A Explanation: The term bucket is used to organize objects in object-based storage solutions.

8. Answer: B,C Explanation: A single-tenant solution is one in which a resource or infrastructure serves only one customer. A multitenant solution is one in which a single resource or infrastructure serves multiple customers. The remaining options are not valid tenant types.

9. Answer: B Explanation: iSCSI is a network protocol that allows communication between block devices. NFS is a distributed file system protocol that was developed in 1984 and is a popular way for UNIX and Linux systems to share file systems across networks. Microsoft developed the CIFS distributed file system protocol. TONE is not a network protocol.

10. Answer: B Explanation: The IOPS, or input/output operations per second, is the primary speed calculation of a storage device. This is a value that you must consider when deciding which underlying storage type to use for your storage resource. The other options are invalid storage terms.

11. Answer: B Explanation: NFS is a distributed file system protocol that was developed in 1984 and is a popular way for UNIX and Linux systems to share file systems across networks. Microsoft developed the CIFS distributed file system protocol. iSCSI is a network protocol that allows communication between block devices. TONE is not a network protocol.

12. Answer: B Explanation: A SAN (storage-area network) is a network-connected storage device that provides high-speed access to block storage. The remaining options are not valid storage device terms.

13. Answer: C Explanation: RAID 0 does not provide redundancy, but rather expands available storage by combining multiple hard disks (or partitions) into a single device. RAID 1, also known as mirroring, makes two or more disk drives appear to be a single storage device. RAID 5 stripes data across the physical storage devices and distributes parity data among them. RAID 10, also known as RAID 1+0, combines the benefits of RAID 1 and RAID 0.

14. Answer: B Explanation: You use your own on-premises storage devices with hybrid storage, but if they become full, the available space is supplemented with cloud-based storage. Any storage that is placed on an SSD is referred to as flash storage (solid-state drive). Spinning disks, also known as magnetic drives, are traditional hard drives that store data on rapidly spinning platters coated with a magnetic material. Tape storage or cold storage are two common types of long-term storage.

15. Answer: A Explanation: Any storage that is placed on an SSD is referred to as flash storage (solid-state drive). When you use hybrid storage, you use your own on-premises storage devices, but when they fill up, the available space is supplemented with cloud-based storage. Spinning disks, also known as magnetic drives, are traditional hard drives that store data on rapidly spinning platters coated with a magnetic material. Tape storage or cold storage are two common types of long-term storage.

16. Answer: B Explanation: A SAN (storage-area network) is a network-connected storage device that provides high-speed access to block storage. SAN devices do not support object or file storage. Local is incorrect because a SAN is a network-accessible device.

17. Answer: C,D Explanation: When SCSI traffic is carried over FC, the protocol used is known as FCP (Fibre Channel Protocol). FC can also be used in conjunction with another protocol: NVMe (Non-Volatile Memory Express). This is known as NVMe-oF. The other answers are not Fibre Channel terms.

18. Answer: B Explanation: A network-attached storage (NAS) device provides high-speed access to file storage over a network. NAS devices do not support object or block storage. Local is incorrect because a NAS is a network-accessible device.

19. Answer: C Explanation: NTP servers are classified based on their accuracy. This is accomplished by assigning the server a "stratum" value, a numeric value ranging from 0 to 15, with lower values indicating a more accurate clock.

20. Answer: D Explanation: The process of converting an IP address into a domain name is known as reverse lookup. The process of converting a domain name into an IP address is known as forward lookup. A DNS forwarder is a DNS server that accepts DNS queries from an internal network and forwards them to an external DNS server. An FQDN, or fully qualified domain name, is a host's domain name that begins at the top of the DNS structure.

21. Answer: D Explanation: The "." character represents the top level of the DNS structure, which is simply referred to as "dot." A FQDN, or fully qualified domain name, is a host's domain name that begins at the top of the DNS structure. A name server is a computer system that handles DNS client requests. A name server that is authoritative returns results based on information stored locally on the system.

22. Answer: A,C Explanation: A VPC typically includes subnets, routing tables, gateways, firewalls, and endpoints. While a virtual machine can exist within a VPC network, it is not a VPC component. A storage bucket is not normally associated with a VPC.

23. Answer: A,B Explanation: IPv4 subnetting techniques include VLSM (Variable Length Subnet Mask) and CIDR (Classless Inter-Domain Routing). The other answers are not valid IP subnetting terms.

24. Answer: A Explanation: The process of converting a domain name into an IP address is known as forward lookup. The process of converting an IP address into a domain name is known as reverse lookup. A DNS forwarder is a DNS server that accepts DNS queries from an internal network and forwards them to an external DNS server. An FQDN, or fully qualified domain name, is a host's domain name that begins at the top of the DNS structure.
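For readers who want to see the two lookup directions in action, the following minimal Python sketch uses the standard socket module; the hostname and IP address are placeholder values, and the results depend on the resolver your system is configured to use:

import socket

# Forward lookup: domain name -> IP address
print(socket.gethostbyname("example.com"))

# Reverse lookup: IP address -> domain name
hostname, aliases, addresses = socket.gethostbyaddr("8.8.8.8")
print(hostname)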

25. Answer: B Explanation: A name server that is authoritative returns results based on information stored locally on the system. A FQDN (fully qualified domain name) is a host's domain name that begins at the top of the DNS structure. The "." character represents the very top level of the DNS structure, which is simply referred to as "dot." A name server is a computer system that handles DNS client requests.

DOMAIN - 4 Operations and Support

4.1 In a given scenario, incorporate components into a cloud-based solution.

4.1.1 Logging

• Logging - detailed transaction records generated by all elements in the cloud
• Collectors
• SNMP - Simple Network Management Protocol
• Syslog
• Anytime a user logs into a system or accesses storage, a log can be generated for later evaluation.
• It is important to protect log data by placing the files in a highly secure and backed-up service.

Logging refers to the detailed transaction records generated by all elements in the cloud for the transactions and interactions of a device or system. For example, anytime a user logs into a system or accesses a storage system, a log can be generated and stored for later evaluation; in addition, changes made to the system must be recorded, and all the steps required to maintain compliance must be retained. It is important to protect your log data by placing the files in a highly secure and backed-up storage service that offers encryption and versioning. Each regulatory requirement will specify a detailed listing of the records that must be retained.

Collectors
Logs and events are constantly being generated by the many objects and services in your cloud fleet of devices, so there needs to be some way to collect, store, and analyze this data. In Chapter 6, logging was discussed, and you learned that event information is sent to syslog or logging servers where the data can be analyzed and archived. By integrating cloud logging into a centralized logging collector, monitoring systems can see the complete picture of your remote public and local private cloud operations in one place.

Simple Network Management Protocol (SNMP)
The Simple Network Management Protocol enables devices to send log messages to a centralized collection point. SNMP is made up of two main components: SNMP agents and the SNMP manager. An SNMP agent's job is to gather log messages from devices and send them to the SNMP manager. The SNMP manager keeps these logs in a database, allows you to query them, and may be set up to send alerts to end users.

Syslog
The Syslog protocol allows devices (as well as operating systems such as Linux) to transmit log messages to a Syslog server (that is, a Syslog collector). Queries can be performed on the Syslog server to detect system faults, and the server can also be set up to provide alerts to end users. Note that there is no defined rule that specifies what type of message is linked with a given severity level; this is left to the developers who construct the device's alarm messages. The functionality of Syslog and SNMP is similar, although there are a few important differences:

• SNMP version 3 offers certain security features, including authentication; Syslog lacks security features.
• SNMP can collect messages by polling devices or by having devices deliver messages in response to a trigger (called a trap). With Syslog, the device itself sends the log messages.
• SNMP also has some control capability, such as the ability to perform actions on the device in response to a log message. Syslog is intended to aid in troubleshooting by offering log message monitoring.

Analysis
As you might expect, the volume of logs created on a busy network can be startling, and it is difficult for a human to keep track of all of the log entries. As a result, most firms employ log data analysis tools. A tool that examines log data will often employ a baseline representing normal network device operation. By evaluating log entries against that baseline, the analytic tool can detect whether there is a problem on the network or with a specific network device. If the tool detects a potential problem, an alarm should be sent to a human to examine further. In some circumstances, the tool may also take active action to resolve the issue. There are plenty of utilities available for both SNMP and Syslog. The following are a few examples of popular SNMP analytic tools:

• SolarWinds Network Performance Monitor
• LogicMonitor
• Site24x7 Network Monitor
• Spiceworks

Severity categorization
One feature of Syslog is the ability of devices to specify the severity of a log message. The following severities are acceptable:

• 0 - Emergency
• 1 - Alert
• 2 - Critical
• 3 - Error
• 4 - Warning
• 5 - Notice
• 6 - Informational
• 7 - Debug
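As a hedged illustration of how an application might emit messages at different severities to a Syslog collector, the following minimal Python sketch uses the standard library's SysLogHandler; the collector address is a placeholder you would replace with your own:

import logging
import logging.handlers

# Send log messages to a remote Syslog collector over UDP port 514.
# "syslog.example.com" is a placeholder for your own collector address.
handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
logger = logging.getLogger("cloud-app")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# Python logging levels map onto Syslog severities
# (ERROR -> 3, WARNING -> 4, INFO -> 6, DEBUG -> 7).
logger.error("Database connection failed")      # severity 3 (Error)
logger.warning("Disk usage above 80 percent")   # severity 4 (Warning)
logger.info("User login succeeded")             # severity 6 (Informational)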

Audits
A log audit (also known as an audit trail) is a collection of logs that offers a chronological record of events. This data is extremely helpful in determining which steps may have resulted in a problem or error.

Types
• Access/authentication
• System
• Application

Logging systems can be surprisingly feature-rich given all the capabilities that have been added over the years. Policies, or rules, can be created to forward event information in a fan-out design to systems that may benefit from the data. A policy can be created so that event families, or types, are forwarded, or disseminated, to systems that require that data. For example, a big data application may be interested in knowing which geographical locations around the world access which specific files on your website at specific dates and times. A dissemination policy can be created to forward the relevant logging information to the big data cluster for analysis.

Regulatory and Law Changes
Of course, there are always going to be changes in the legal and regulatory landscape. This is an important area to keep track of, as changes can have a direct and long-lasting impact on your operations. Also, if you are a regulated entity, you may be audited for compliance, and a failed audit could have a devastating effect on your operations. Depending on the changes, there may be no effect on your operations, the cloud provider may need to make changes in their infrastructure, or you may be responsible for implementing the change. It is important to coordinate and track changes of this type as required for business continuity.

Automation
Log automation is the process of performing an action automatically based on a certain log item. For instance, if someone tries to log in as the administrator on one of your systems and fails, you may want to automate the process of sending a text alert to your IT personnel.
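The following is a minimal sketch of this idea in Python. The log path, the pattern, and the send_alert() function are hypothetical placeholders; a real deployment would call your organization's SMS or notification service instead of printing:

import re

LOG_FILE = "/var/log/auth.log"  # placeholder path; varies by platform
PATTERN = re.compile(r"Failed password for (?:invalid user )?(admin|root)")

def send_alert(message: str) -> None:
    # In a real deployment this might call an SMS gateway, e-mail API,
    # or notification service instead of printing.
    print(f"ALERT: {message}")

with open(LOG_FILE) as log:
    for line in log:
        if PATTERN.search(line):
            send_alert(f"Failed admin login detected: {line.strip()}")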

Trending
In terms of logging, trending is the process of determining what is going on in your cloud environment by examining logs over time. For example, if you are afraid that you may run out of space in a cloud storage resource, you can look at daily reports of space use over a period (such as the previous 30 days) to see if you need to upgrade to a larger storage device.

Anomalies
Once you determine what is considered to be your baseline during normal operations, you can use that as a reference to determine what is considered to be an anomaly, that is, a system that reports metrics that are either above or below your expectations. By trending the data, you are able to spot potential issues in your deployment, and the data can be used in analytics systems to predict future requirements and capacity needs. Being able to identify anomalies using cloud monitoring systems has many benefits. For example, you can use notification systems to alert systems and users of a critical issue, and automation and orchestration systems can act on reported anomalies and correct the issue, sometimes before they have an impact on services.
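One simple way to put this into practice is to compare new metric samples against a baseline computed from historical data. The sketch below uses made-up CPU utilization numbers and a basic standard-deviation test; production anomaly detection is usually more sophisticated:

from statistics import mean, stdev

def find_anomalies(samples, threshold=2.0):
    """Flag samples that deviate from the baseline mean by more than
    `threshold` standard deviations."""
    baseline = mean(samples)
    spread = stdev(samples)
    return [s for s in samples if abs(s - baseline) > threshold * spread]

# Example: daily CPU utilization percentages collected by monitoring
cpu_usage = [41, 39, 43, 40, 42, 44, 38, 91, 40, 42]
print(find_anomalies(cpu_usage))   # [91] stands out against the baseline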

4.1.2 Monitoring
The distinction between logging and monitoring can be blurred at times since the instrument used for logging frequently also serves as a monitoring tool. Consider logging to be the recording of what occurs, while monitoring gives an understanding of how your environment is functioning and can provide you with warnings depending on criteria that you establish. This part will go over the many monitoring elements that you should be aware of, such as the meaning of baselines, thresholds, and tagging.

Baselines
A baseline is a well-established standard. Baselines are used to determine whether or not a resource has an issue. Baselines are frequently formed by evaluating data from logs over time. Future activity is tracked to see if a resource is operating within the parameters of the baseline.

Thresholds
Thresholds are used to identify when an alert should be generated because a resource attribute has reached a maximum or minimum value. For example, the speed of a web server running on a virtual machine may be critical to ensuring that the web server can react to client requests. Setting thresholds for critical indicators such as CPU utilization, free memory, and network throughput can help to monitor this performance. If any of these thresholds are exceeded by the web server, an alarm will be issued, or an auto-scaling event (launching another web server) will be triggered. Another example may be a cost threshold for a cloud resource, with an alert delivered to your email if it exceeds a predetermined limit.
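As a small illustration of threshold checking on a single host, the sketch below uses the third-party psutil package; the threshold values are hypothetical and would normally come from your baseline and SLA:

import psutil  # third-party package: pip install psutil

CPU_THRESHOLD = 80.0      # percent; hypothetical value
MEMORY_THRESHOLD = 90.0   # percent; hypothetical value

def check_thresholds():
    cpu = psutil.cpu_percent(interval=1)
    memory = psutil.virtual_memory().percent
    alerts = []
    if cpu > CPU_THRESHOLD:
        alerts.append(f"CPU utilization {cpu}% exceeds {CPU_THRESHOLD}%")
    if memory > MEMORY_THRESHOLD:
        alerts.append(f"Memory utilization {memory}% exceeds {MEMORY_THRESHOLD}%")
    return alerts

for alert in check_thresholds():
    print("ALERT:", alert)   # in practice, send to your notification service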

Tagging
Tagging is a feature that applies metadata to cloud resources. This metadata usually consists of key-value pairs. For instance, a key may be "department," and the value might be "sales." There are numerous benefits to tagging resources. For example, you could use tags to group several virtual machines in order to perform actions on all of them. If you wanted to disable a group of virtual machines used to test new database software, you could establish a tag called "purpose" with the value "database test" for all of these resources. Then, to shut down all of these virtual machines, you could have the cloud management tool shut down all virtual machines whose "purpose" tag has a value of "database test."

Tagging can also be used to track resources. You might want to define a threshold rule that applies to all web servers in the organization. By applying the same tag value to all of these web servers, you can establish a monitoring rule that applies to any resource with that tag value. This would also include any new resources created after the monitoring rule is applied, assuming they have the correct tag value.
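As one possible illustration (assuming AWS and the boto3 SDK, with placeholder instance IDs and credentials already configured), tags can be applied and then used to act on every matching resource:

import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2")

# Apply a "purpose" tag to two (hypothetical) instance IDs.
ec2.create_tags(
    Resources=["i-0123456789abcdef0", "i-0fedcba9876543210"],
    Tags=[{"Key": "purpose", "Value": "database test"}],
)

# Find every instance carrying that tag so a bulk action (such as
# stopping them) can be applied to the whole group.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:purpose", "Values": ["database test"]}]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)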

Log scrubbing
Logs may contain sensitive information in various instances. This is one of the reasons why you should keep your log data secure. But wouldn't it be preferable if sensitive information were never preserved in logs in the first place? When it comes to what is initially stored in a log entry, you may not have a choice. Remember that log information might come from a variety of sources, including apps and resources that may not allow you to tailor what is logged. If you can control what is delivered to your logs, preventing sensitive material from entering the logs in the first place is the ideal course of action. Log scrubbing is the next best option if this strategy is not possible. Log scrubbing is the process of searching for and removing sensitive information from logs using a tool. Assume a log entry contains a customer's Social Security number (which, hopefully, will never happen!). A log scrubbing tool might be used to look for a number that follows the following pattern (N indicates a single digit): NNN-NN-NNNN. If this value is found, the log scrubbing tool can either erase it or replace it with another value, such as "data removed."
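A minimal sketch of such a scrubber, using the NNN-NN-NNNN pattern described above, might look like the following Python snippet:

import re

# Pattern for a value shaped like a US Social Security number: NNN-NN-NNNN.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(line: str) -> str:
    """Replace any SSN-shaped value with a placeholder before the
    entry is stored or forwarded."""
    return SSN_PATTERN.sub("[data removed]", line)

print(scrub("2024-05-01 order placed by customer SSN 123-45-6789"))
# -> 2024-05-01 order placed by customer SSN [data removed]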

Performance monitoring
Performance monitoring is intended to evaluate whether your cloud resources are responding at an acceptable level or standard as defined by an SLA. Cloud resources are frequently dispersed, which can influence performance. Additional cloud characteristics, such as network security, hardware resources, and application configuration settings, may also have an impact on performance.

Application
Performance monitoring is frequently associated with a set of technologies known as Application Performance Monitoring (APM). These APM tools can help you identify how well the cloud resource is responding and where a bottleneck may be causing poor performance.

Infrastructure components
Infrastructure components are a large category of resources that might include the network itself, hardware components (such as CPUs and memory), facilities (such as power), and software. These are the kinds of components that often have an impact on performance and are the major resources evaluated by performance monitoring software.

Resource utilization
Consider CPUs, memory, network bandwidth, and disk space to be resources in the context of resource utilization. The more resources your cloud infrastructure consumes, the greater your cloud environment's costs. Monitoring resource use is intended to detect which elements of your cloud infrastructure are utilizing these resources. The purpose is to assist you in making better decisions about how to configure your cloud environment to effectively optimize the available resources.

Availability
As you might expect, many of the components of your cloud architecture must be online nearly all the time. While 100 percent would be ideal, there is nearly always the possibility of brief periods of downtime. The goal is to limit this downtime to a minimum. You can use monitoring tools to check for the availability of a cloud resource actively or passively. The findings should be compared to the minimum uptimes specified in the cloud vendor's SLA.

SLA-defined uptime requirements
An SLA is intended to safeguard both the cloud provider and the customer by explicitly specifying the levels of service that the cloud vendor will offer within the restrictions that have been specified. The SLA typically outlines the minimum standards that the customer should expect for issues such as the ones listed below:

• Availability
• Speed
• Responsiveness

In addition, the SLA usually specifies what the cloud vendor is responsible for and what the customer is liable for. An SLA may also address data ownership, disaster recovery, and specifics about the hardware used by the cloud vendor.

Verification of continuous monitoring activities
Consider the following scenario: you have chosen to have a company monitor the alarm system that you have installed in your home. You place your trust in this organization to keep an eye on the status of your alarm to keep you and your home secure. Assume your alarm goes off one night and... nothing occurs. There was no reaction from the alarm monitoring business, no police were dispatched to your home, and there was no response at all. Obviously, you want to prevent this problem, so you choose a reliable firm to monitor your alarm and ensure that it performs routine checks to guarantee that the alarm monitoring system is functioning properly. The same principle applies in the cloud: continuous monitoring activities should themselves be verified regularly to confirm that they are still running and reporting correctly.

Service management tool integration
Service management, often known as information technology service management (ITSM), is the process through which a company offers information technology (IT) services to its clients (which also include employees). When a company uses the cloud to provide services, the monitoring solutions that are used must be integrated with the service management tools in order to provide rapid reaction times. Assume an organization offers a service to its clients that is based on a cloud-based database. Obviously, if the database is no longer available, this service is rendered useless. The tool used to monitor database availability must be configured to "speak to" the ITSM software in order to provide IT support with accurate, up-to-date information on the service's availability.

4.1.3 Alerting
When a threshold is reached or an error occurs in your cloud environment, one of the actions that monitoring tools might take is to issue an alert. This section will go through some of the most important aspects of alerting.

Common messaging methods
Where you send an alert is just as essential as deciding which notifications to send. The following are some of the most prevalent alert message mechanisms seen in cloud environments:

• SMS (that is, a text message to your mobile device)
• Email
• A messaging service, like AWS Simple Notification Service (SNS), PagerDuty, or Datadog
• A server via an API call
• A storage container, like a bucket
• A messaging tool, like Microsoft Teams or Slack

Enable/disable alerts
In some instances, you should disable alerts. For example, if an issue causes many notifications, you may wish to turn off the alert system until the problem is rectified. Disabling alerts also prevents false warnings when a cloud service or resource is in maintenance mode (such as updating software or resolving problems). It is extremely likely that software in maintenance mode will generate alerts as it is being worked on.

Appropriate responses
Some alarms may only necessitate extra investigation and observation, whilst others may necessitate quick action. Creating a good response strategy is just as critical as deciding which alerts to produce and where to send them. Creating a policy for categorizing and conveying alerts is part of building a response plan. More information can be found in the next section.

Policies for categorizing and communicating alerts
Not all notifications are equal in importance. For example, an alert showing that a critical database is no longer available should be prioritized over an alert suggesting that the same database may require additional storage space in the near future.

You should create a system for categorizing alerts. This policy can be based on the alert's level of importance or other factors, such as the following:

• Event correlation - intelligent applications can look at reported events and identify the relationships between them
• A single event can trigger a flood of alert messages
• More complex networks generate more log messages
• Enable/disable alerts
• Common messaging methods - email, API calls, etc.

Event Correlation Event correlation is the method, or process, that makes sense out of a large number of reported events from different sources and identifies the relationships between the events. Intelligent management applications have the ability to look at all the reported events from many different sources and correlate, or determine, how they are related to each other. A single event can trigger a flood of alert messages that need to be sorted through to determine what was the trigger and which of the other events were just in response to the main event. The larger and more complex the network becomes, the more log messages are generated and the greater the need there is to analyze the data with intelligent applications. For example, assume that someone in IT operations configured a network access control list that blocked an application server from reaching a backend database on your e-commerce site. This would probably trigger a lot of alarms that are outside of your baseline metrics! You would be getting nonresponse alerts from the web servers trying to access the database, the application servers would report timeouts from the database server, and the network would show deny operations based on the Network Access Control List (NACL) that was defined. The database may even report that there had been zero table read or write operations and so on. All of these alarms are written to the logging server and need to be analyzed and reviewed to determine what is happening so the root cause can be identified and the service restored. Using event correlation, all the events in the logging system will be scanned and processed to identify the root cause of the application server not being able to reach the database. Most of the alarms are generated not because they are the fault but, rather, were affected by the original fault event. In this case, the incorrectly inserted access control list statement that blocked the communications was identified by the correlation application as the root issue, and all of the other alarms were because of this incorrect database deny rule. As you can see from the example, event correlation is a valuable tool to have! It can be time-consuming and error prone to manually sort through hundreds or thousands of log messages trying to find that one event that caused the issue. You learned about notification systems that process the alerts and distribute them to users and systems using such channels as text, e-mail, and API calls. Triggers can be identified in many different devices and applications including data collected by the monitoring systems. When the alarm threshold is triggered, or reached, the event can be sent, or published, to a notification service that you access as a service from a cloud provider. This alerting method is referred to as a push service as it sends the events to the subscribed endpoints as they occur. Many different devices and services can be subscribers to the events queue that the notification system © Secbay Press

210 | Page

Secbay Press publishes. The push notifications support a large number of options such as texting; e-mail; messages to Apple, Google, Amazon, or Windows operating systems; service queues; Application Programmable Interfaces (API) calls; the ability to run a script on a server; and many other operations. The notification services offer a single event to many different devices that can act upon the received information.

4.2 Given a scenario, maintain efficient operation of a cloud environment.

4.2.1 Confirm completion of backups

• Backups are a critical piece of your overall IT strategy and will usually be required for regulatory compliance and corporate policy.
• Backups may be performed by the cloud provider as part of its service offerings and as a managed service.
• Cloud customers may also want to implement their own backup strategy to meet their specific needs.

In this section, you will become familiar with the different types of backups and what the best use cases are for each one. Backups are a critical piece of your overall IT strategy and will usually be required for regulatory compliance and corporate policy. Backups may be performed by the cloud provider as part of its service offerings and as a managed service. Cloud customers may also want to implement their own backup strategy to meet their specific needs.

4.2.2 Life-cycle management

Roadmaps
Roadmaps in lifecycle management give a timeframe for the product's implementation from start to finish. Roadmaps are also used to align products with corporate goals and to give an easy approach to visualize project lifecycle management. Because the visibility of the process varies from person to person, each product may have many roadmaps. Executives, for example, may just need to see the "big picture," whereas implementers must see every detail. Customers may have a different roadmap since they are more concerned with when features will be introduced. Roadmaps are frequently classified into one of the following categories:

• Features roadmap: describes when features will be added to the product.
• Release roadmap: describes when each version of the product will be released.
• Portfolio roadmap: a collection of product lifecycles and how they are related to one another.
• Strategy roadmap: defines the overall high-level actions that must take place to meet the goals of the product during its lifecycle.

Old/current/new versions
The process of managing a product's lifespan must include how different versions of the product will be maintained and supported. Assume the most recent release of a product is version 3.2. How is version 3.1 handled in this case? Does the organization still support it? Is the product still getting regular patches? What actions are required to transition clients from 3.1 to 3.2? And, of course, you must ask the same (and more) questions for version 3.0 and any older version. You must also make judgments for new versions involving new features and the migration procedure from previous releases. How does the company inform customers about upcoming new features? What happens if a new feature isn't ready by the planned release date? Will certain new version features be included in earlier versions that are still supported?

Upgrading and migrating systems
Upgrading a system is the process of improving an existing system to add new features or improve performance. For example, you may add more RAM to an existing machine to increase the amount of memory available to an operating system. The act of relocating a resource from one physical location to another is referred to as system migration. Instead of simply increasing the amount of RAM in a system, you may opt to migrate the operating system to a new hardware platform equipped with a greater amount of RAM than the original. In cloud computing, the same ideas apply. In the cloud, operating systems are housed in virtual machines (VMs), which include underlying virtualized hardware components. In certain circumstances, you may be able to upgrade an existing virtual machine (for example, by adding more virtual memory), while in others, migrating to a new virtual machine may be a better alternative.

Deprecations or end of life
Most products will eventually cease to serve a useful purpose or will no longer correspond with the organization's strategic or business needs. In most cases, an organization will choose one of two approaches:

• Specify the product's end of life: This approach entails announcing when the product will no longer be supported and should no longer be used. It is also known as sunsetting a product.
• Deprecate the product: When an organization takes this approach, it signals that the product should no longer be used and is unlikely to be supported in the future. Typically, a deprecated product is replaced with a newer product, but the business is unwilling to push customers to switch. Consider deprecated to imply "it is still available, but the developer no longer recommends that you utilize it."

Upgrades
Upgrades are a natural part of an application life cycle, and there are many different types of upgrades and patching that can occur. It is important to remember that software will most likely undergo many upgrade cycles over its lifetime, and each upgrade will need to be managed through the change management process.

Migrations
When a new application is brought online, there may be a need to migrate, or move, data from an older application or data store. Migrations need to be project-managed and controlled as part of both the life cycle and the change process. If the applications are significantly different, the format of the user data may need to be modified as part of the migration process.

Feature Additions and Deletions
An application will undergo upgrades and modifications over its lifetime. Some will be to resolve issues through the patching process, and others will be to add new features as they are released. While less common, features can also be removed to save money by not paying for unused license fees.

Replacements
Over time applications will be replaced, and project management must take place to assist in replacing the existing application with a new or different approach. This is a normal part of the lifecycle process.

Retirements
Applications will, at some point, be removed after having their data migrated to a new system or replaced. This is the end of the complete life cycle, and it will most likely start over with the deployment of a replacement application.

4.2.3 Change management

• Process of managing all aspects of the ongoing changes, upgrades, repairs, and reconfigurations
• Standardized procedures to follow
• Change request process
• Change coordination
• CAB/CARB - Change Advisory Board/Change Review Board

Change management is the process of managing all aspects of the ongoing changes, upgrades, repairs, and reconfigurations. Change management involves planning and managing changes to minimize any disruptions of service. Change management outlines policies and procedures and provides a standardized process to follow, including recording the change, planning for the change, testing, documentation, approvals, evaluation and validation, instructions for backing out the change if needed, and post-change review if desired.

A change management procedure usually includes the name of the requester, what the change is going to be, and the reason or justification for making the change. Other areas include a description of the expected result of making the change and what risks are involved. You must also outline what resources will be needed and coordinate the activities of the various groups involved in the change. A list of individuals responsible for the various aspects of the change, including the design, configuration, deployment, and validation steps, must be prepared. There also needs to be an investigation into other changes that are taking place to make sure no conflicts exist between those changes and yours. Also, if one change requires another change to take place before it can be implemented, the change sequences will have to be coordinated.

4.2.4 Asset management
Asset management encompasses all the activities necessary to ensure that an organization's assets, including physical, financial, and information-based assets, are managed in a systematic manner. Because management is aware of present assets and how they can be applied to current and future company needs, the firm can make sound business decisions. Asset management entails a number of processes, including:

• Considering the need for future assets
• Purchasing assets
• Managing the assets
• Maintaining the assets
• Keeping the assets safe
• Disposing of assets when they are no longer required

Configuration management database (CMDB)

• The configuration management database (CMDB) is a family of applications designed to track inventory and configurations of your deployment.
• The CMDB can automate your change management steps in an archived online database.
• Many cloud providers offer applications that back up and inventory cloud resources.

Document Action and Back-Out Plans
All change request documents must include a detailed plan of the steps to be taken, if required, to back out the changes. The plan can include all CLI commands or web GUI input and the expected output. If verification fails, the same change request document must outline specifically how to back out the change and verify that it has been removed. Remember, maintenance windows can be short, and your systems must be fully operational at the end of the allocated maintenance time. There will be no time to do any research; you must have everything documented before beginning the change.

The configuration management database (CMDB) is a family of applications designed to track inventory and configurations of your deployment. The CMDB can automate your change management steps in an archived online database. Many cloud providers offer applications that automatically back up and inventory your cloud resources. They can also allow you to add fields such as change documentation and rollback, or back-out, plans to the database. If an actual CMDB system is not feasible, or is overkill for the size of your operations, a simple spreadsheet may be all that is required. Using a standard spreadsheet, you can record and track the actions taken, and add columns as desired to add detail to your documentation. The key point is to use whatever application works best for your operations but to always track and document your changes.

Adding and Removing Cloud Resources
As your cloud deployment continues to evolve and adapt over time, you'll find the need to add and remove resources to optimize and upkeep your cloud environment. This section will provide insights into effectively managing your resources.

4.2.5 Patching

• All software has bugs that need to be patched.
• Security patches come out on Patch Tuesday, zero-days, media articles, etc.
• Microsoft Security Response Center - resource for patching
• NIST CVE database

Applying Security Patches
All software has bugs that must be patched to prevent bad things from happening. When new bugs are discovered, the details are often published or leaked to the media. Individuals with malicious intent can utilize this information to create exploits, capitalizing on these vulnerabilities, given that many systems are outdated and lacking in protection. It is important to keep on top of the security landscape and have a planned approach to patch management. This section will focus on securing your cloud fleet of servers and services as it relates to keeping your systems up to date. As part of a shared security model, the cloud provider will secure the infrastructure, and the cloud customer will be required to address security concerns based on the service model they are operating under. A complete patch management approach must be developed, implemented, and adhered to in order to prevent a security breach.

Cloud Element Security
AWS Systems Manager Patch Manager automates the procedure of patching managed instances with both security-related and other types of updates. AWS does not conduct testing on patches for Windows Server or Linux prior to their availability in Patch Manager. Furthermore, Patch Manager does not offer support for upgrading major operating system versions, such as transitioning from Windows Server 2016 to Windows Server 2019 or upgrading from SUSE Linux Enterprise Server (SLES) 12.0 to SLES 15.0.
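As a hedged sketch of how a patching run might be initiated programmatically (assuming AWS, the boto3 SDK, configured credentials, and a hypothetical PatchGroup tag value), the AWS-RunPatchBaseline document can be invoked through Systems Manager:

import boto3  # AWS SDK for Python

ssm = boto3.client("ssm")

# Ask Patch Manager to scan (or install) patches on instances that carry
# a hypothetical "PatchGroup" tag value of "web-servers".
response = ssm.send_command(
    DocumentName="AWS-RunPatchBaseline",
    Targets=[{"Key": "tag:PatchGroup", "Values": ["web-servers"]}],
    Parameters={"Operation": ["Scan"]},   # use ["Install"] to apply patches
)
print(response["Command"]["CommandId"])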

Features or enhancements
While many patches are aimed at addressing a problem with a product, others are designed to introduce new features or improve the software. These improvements are normally added during a major product upgrade; however, in some situations, a client may request a new feature, which must be implemented before the next upgrade cycle.

Fixes for broken or critical infrastructure or applications
One of the most typical reasons for deploying a patch is to address a problem with the product. This is sometimes referred to as a bug fix, and it is intended to be a temporary patch until the next product update.

Scope of cloud elements to be patched
Because cloud resources are frequently interconnected, changes to one resource frequently necessitate patches to others. The scope of the patched cloud elements covers all cloud resources that must be patched to effectively deploy modifications:

• Hypervisors
• VMs
• Virtual appliances
• Networking components
• Applications
• Storage components
• Firmware
• Software
• OS

Hypervisors
As you now know, a hypervisor is the virtualization software running directly on top of the server hardware that enables the hardware to be virtualized into pooled resources that are, in turn, allocated to virtual machines. This is critical software that is often hidden from sight but at the core of a virtualized data center. Since there are many virtual machines running on each hypervisor, and given that the hypervisor is a core component of the server, most, if not all, operations on the hypervisor will require that it be taken offline when patching or upgrading takes place. To avoid disrupting the functioning of all virtual machines operating atop the hypervisor, it is essential to migrate each VM to another server before initiating any patching activities on the hypervisor. Once all active services have been migrated off the server, patching can be performed on the hypervisor while it is offline. Hypervisor code will require periodic maintenance to address security vulnerabilities, fix feature deficiencies, and add functionality.

Virtual Machines
VMs will require patch management as well; this can be accomplished with automation software that tracks patches and that can download, implement, and validate the installation of the patched VMs automatically. Also, snapshots and cloned images can be used to create a master copy of a virtual machine with all the required patches installed and tested. These cloned images can be used as the master template when updating VMs.

Virtual Appliances
Just as VMs require a patch management process to ensure they are kept up to date and secure from exploits, virtual appliances need to be patched as well. A virtual appliance is often a specialized application running on a version of Linux, so it has its own set of deficiencies and vulnerabilities that require ongoing patch management. A virtual appliance may be a managed cloud provider service, and as such, it will be the provider's responsibility to keep all the managed appliances up to date. However, in the cloud marketplaces, many appliance vendors' offerings (including firewalls, load balancers, and IDSs/IPSs, for example) are licensed to you, so it is your responsibility to perform the ongoing maintenance of these applications. Most offer support agreements that allow access to vendor support forums, technical support teams, and software updates. There are many different virtual appliance offerings from many different companies, and each vendor will have its own security and patch management offerings based on its own products.

Network Systems
Network systems such as firewalls, IDSs/IPSs, load balancers, routers, and DNS servers are offered as managed services by most cloud providers and are also available for you to install and manage yourself. The entity with control of the system will have responsibility for keeping the devices up to date with the current security packages. It is imperative that if you are responsible for any of the network elements, you keep them at current revision levels. This is required because most network elements sit at the core of your cloud fleet of servers, and most data passes over the network. If the network can be compromised, then a malicious hacker may be able to use the network to attack all of your systems.

Applications
The application or development teams will generally be responsible for patching and maintaining systems at the application level. If you are purchasing a Software as a Service solution from your cloud provider, then the provider will assume responsibility for maintaining and ensuring that the applications are current and secure. However, if the application was developed internally or you are running an IaaS or PaaS service, then you will be responsible for application maintenance. Many applications will check automatically with a central repository to make sure they are current, and if they are not, they can download and either install the update or prompt you that there is an update available and allow you to manage the process. Automation software such as Puppet or Chef can also patch applications if configured to do so. Automated scripts can also be run on the servers to check for and install application updates.

Storage Systems
SAN Fibre Channel switches, storage controller heads, host bus adapters, and RAID controller software all undergo a revision cycle similar to any other software-based system. Due to their critical role and the requirement for continuous uptime, storage systems are designed with high levels of redundancy. This redundancy permits some flexibility when it comes to updates, as one component of the storage system can be temporarily taken offline for maintenance while the backup system takes over full operational responsibilities.

Clusters
Clusters are groups of computers interconnected by a local area network and tightly coupled together. Clusters can be configured in many different topologies depending on the use case and the different solutions they are designed for. However, all clusters are designed for high availability, which can allow for installing patches with zero downtime. Depending on the capabilities of the cluster software, most individual components in a cluster can be taken offline without the need to take the whole cluster down. This allows individual systems to be upgraded while the cluster is live. Another common approach is to upload the patch to a master or controller node in a cluster and have the cluster management software perform the upgrades internally.

Patching Methodologies

• Install the most current updates
• Reduce attack footprint
• Test the patches before implementing
• Patch management - baseline policies
• n-1 - one release older than the most recent release
• Rollback - have a plan to undo the change if something goes wrong

This section discusses the different types, or methodologies, of patching. Patching is an important requirement for ongoing cloud operational support. By making sure all of your systems have current patches installed, you reduce your attack footprint and protect yourself by addressing known bugs that may affect your operations. There are different approaches to take when patching your systems, and there are different types of patches for different requirements that you will explore in this section.

Patch management can be a rather complex topic area. It is suggested you work to keep all of your related systems at the same version, revision, or patch level. This will allow you to have a known baseline to operate on and to troubleshoot. There are applications that keep track of version levels, while some network administrators prefer to use a basic spreadsheet and manually track software version levels that are installed on platforms either on-premises or in the cloud.

Given that there are often a large number of patches released every week or month, it can become a chore to track them and decide which ones demand immediate attention to solve a critical issue and which ones are minor or cosmetic and can be safely ignored. You must also know the proper order in which to install these patches. It is helpful to classify the risk and to understand, under a cloud-based shared responsibility model, who is responsible for patching certain systems and who is not. Systems may sit behind a firewall or inside of a security group that may protect them from an outside attack even if the patch has not been installed. The likelihood of an attack should be assessed, including how critical the system is. Is there a bigger operational risk in installing the patch than in leaving the system as it currently is? It usually comes down to how severe the threat is, the potential impact it may have on your operations, the probability you could be exploited through your unpatched systems, and the cost in time and money to recover if you were attacked because you chose not to install the patch.

The typical suggestion is to uphold a policy for software and hardware updates wherein you aim to maintain an N to N-1 strategy. In this context, "N" represents the most recent service pack, patch, major update, maintenance release, driver, firmware version, and so on. Ensuring that your software remains current is especially crucial when dealing with cloud or hybrid environments. Risk-averse organizations often refrain from utilizing the very latest software release and instead adopt an "N-1" approach, which means running a release that is one version behind the most recent one.

Production Systems
Implementing patches within the production environment carries inherent risks, and there are additional measures to contemplate to ensure the success of a patching operation. Production updates will need to follow the change management process, and a plan should be developed that outlines the upgrade steps, the validation and testing process, and the plan if you need to remove or roll back the patch after experiencing undesirable results. It is advisable to conduct testing of all patches in a non-production environment before implementing them in a production rollout. Especially for a large number of servers, automation is recommended to ensure consistency in the patching process. Various automation tools like Puppet, Chef, OpenStack, or Ansible can be employed for this purpose. Following the patching, thorough testing should be carried out to verify the successful application of patches and the system's proper functioning. In case any issues arise, having a rollback plan in place is essential to revert the system to its original state. There is a risk to patching production servers, but there are several types of updates to be aware of, each carrying a different level of risk:

• Hotfix - a software update to fix an immediate problem, released using a quick release procedure
• Patch - an update that fixes a known bug or issue; generally synonymous with security updates
• Version update - involves the procedure of substituting an existing software product with a more recent iteration of the same product
• Rollback - the process of returning software to a previous state

In this section, you will learn about the various types of updates that are available and the problems they are intended to solve. During your career in cloud computing, you will find that these terms are often used interchangeably. This section will allow you to understand what the terms really are and how they should be used. Out in the field, your mileage will vary widely!

Hotfix
A hotfix is a software update type that is intended to fix an immediate and specific problem with a quick release procedure. A hotfix may be customer-specific and not released to the public or available to everyone. Many times a hotfix is a bug fix that has been made quickly and did not follow the normal quality assurance or formal release procedures, since the intention is rapid deployment. Because of the urgent nature of supplying a bug fix to a critical issue, a hotfix has a higher risk factor than other approaches to patching software. Testing the patch is usually quick, using a subset of the full QA test suites. Installing a hotfix may introduce new and unknown issues into your environment.

The risk of installing a hotfix to correct a bug and introducing new problems must be weighed against not taking action and living with the bug until all testing has been completed.

Patch
A patch is an update that fixes a known bug or issue. The patch is a piece of software that is intended to update an application, operating system, or any other software-based system to fix or improve its operation. Generally, patches are synonymous with fixes for security vulnerabilities or any other type of operational bug. Patches do not offer feature enhancements but rather fixes or performance enhancements. Patches can be thought of as bug fixes or "improving usability and performance" or "removing an undocumented feature"!

Version Update
A version update involves substituting a software product with a more recent iteration of the same product. These updates can introduce new features, bring the system to a current state, and include a cumulative collection of prior patches to enhance the product. Upgrading, on the other hand, entails the replacement of the current, older version of the product with a newer one. Often a version update includes major feature enhancements and capabilities, and as such, there may be a charge associated with the upgrade. Think of going between versions of operating systems from Linux, Apple, or Microsoft.

Rollback
A rollback is the process of returning software to a previous state. If a software update failed, did not correct the issue as expected, or introduced new issues that require you to downgrade the system to its original state, then a rollback should be performed. The process of performing a rollback is dependent on each system. If it is a VM, then a snapshot of the current VM can be taken prior to installing the patch, and if there is a need to perform a rollback, the snapshot can be made active to restore the original installation. Some applications and operating systems have scripts or utilities that allow the patch to be rolled back as needed. In other cases, it may be necessary to re-install the original software package to overwrite the patch. The term rollback is also common when working with databases and refers to an operation that can return the database to a known previous state to maintain database integrity. If there is a severe database issue, a rollback can be performed to a point prior to the issue being present.

4.2.6 Impacts of process improvements on systems Process improvement approaches are used by firms to grow, fulfill new demand, and keep up with competition. Process improvement is a business approach in which an organization's business processes are evaluated and analyzed in order to uncover changes that will result in greater performance, enhanced customer experience, lower costs, and other benefits to the firm.


Process improvement can have an impact on cloud systems in a variety of ways, including:

● Policy changes that can affect the accessibility, security, and performance of cloud resources.
● Changes in how an organization's cloud resources are utilized. This can be a modest modification (such as switching from one software product to another) or a large one (such as changing cloud vendors).

4.2.7 Upgrade methods

Rolling upgrades
The practice of delivering frequent software updates is known as rolling upgrades (also called continuous delivery). There are no specific release milestones with this update approach (although it is common for rolling upgrades to happen nightly); rather, a new upgrade is issued when the developer is ready. One advantage of a rolling upgrade is that new features reach clients more quickly. Rolling updates, on the other hand, may be more prone to bugs. Most developers who offer rolling upgrades also offer regular "release point" upgrades. For example, the popular Firefox web browser offers both a nightly rolling release and regular stable releases.

Blue-green
When you use a blue-green deployment, you have two identical environments (production and staging). Within your organization, the production environment is active and in use. The staging environment is employed in the final stages of delivering a new version of the solution. This means that modifications made in your QA environment are transferred to the staging environment, and some final tests are run there. When all tests pass, the staging environment becomes the production environment, and the former production environment is now treated as the staging environment. If the solution continues to function successfully, updates are then applied to the new staging environment so that the two are once again identical. This is known as a blue-green deployment because one environment is generally designated blue and the other green. Note that either environment might be the production or staging environment at any one time. The benefits of this strategy include more seamless updates, less downtime, and the ability to quickly roll back a deployment to a previously working environment. The drawbacks are the increased expense and effort required to manage two environments.

Canary
You may have heard of workers bringing a canary into mines to detect whether the air contained unsafe amounts of harmful gases. The assumption was that the canary, with its faster respiratory rate, would react to the presence of harmful gases before the humans did. A canary deployment is based on a similar notion. New features are distributed to a select group of beta testers (pilot users) to see whether they have any detrimental effect on the product. The features are supplied in a very controlled manner in the new beta versions and are often spaced out over numerous beta releases. This sort of build gives developers insight into whether new features have created an issue, as well as time to resolve the issues before the product ships in a stable release.

Active-passive
An active-passive upgrade is analogous to a blue-green upgrade. An active-passive upgrade deploys the upgrade to the active environment while leaving the passive environment unchanged. If problems appear in the active environment as a result of the patching, the passive environment temporarily takes over as the active environment. When the difficulties are resolved in the original active environment, the original passive environment is reclassified as passive, and the original active environment becomes active again. If the active environment appears to be functioning properly after testing and a specified amount of time, the patch is then applied to the passive environment. This differs from a blue-green upgrade, in which the two environments "flip-flop" between the production and staging roles with each release.

Development/QA/production/DR
You may have numerous similar environments that receive modifications at different times. For example, rather than relying on a single web server deployment, you would most likely want several installations, each with a distinct function. Among these deployments are the following:

Development environment: This is where you write and test new software code.

Quality assurance (QA) environment: This environment should be as close to your production environment as possible. It is used to test upgrade operations before they are deployed in a production environment.

Production environment: Customers use the program in the production environment.

Disaster recovery (DR) environment: The DR environment is substantially similar to the production environment. If the production environment is unavailable for any reason, the DR environment should be able to assume the production environment's role.

Development Systems
Implementing patches in the development system enables complete integration of system updates before undergoing quality assurance (QA) testing and deploying your applications or services. The DevOps team should assess these patches and incorporate them into their product as soon as feasible. Ideally, this process contributes to a more polished product release.

Quality Assurance
Updates should be validated by the software vendor, the cloud company, and your operations staff to determine how the patch will perform in production. It is important to establish a quality assurance test bed that allows you to test the patch in your specific environment to identify dependencies, interoperability issues, unknowns, and, most of all, whether the update fixes the problem you are experiencing! For the software vendor that developed the patch, it is critical to perform a full suite of QA tests on the package. This verifies that the fix does in fact address the issue it was intended to fix. QA tests are also done to make sure that the fix does not interfere with other processes and that no memory or buffer issues appear in the patched version of the software.

Performing Rolling Updates
Rolling updates refer to the ongoing delivery of software updates or patches for operating systems and applications. Synonyms for rolling updates include rolling releases and continuous delivery. These updates typically involve frequent but minor changes. In the Linux world, rolling updates are prevalent and are often distributed as remotely accessible packages via the Internet. The process of a rolling update unfolds sequentially, with one system being patched at a time.

4.2.8 Dashboard and reporting

Tagging
Tags are pieces of metadata that can be attached to resources. For example, you may have a "department" tag that can be set to values such as "sales," "development," or "production." Tags are useful for reporting since they allow you to define queries such as "show me all the virtual machines that have a value of 'sales' for the tag named 'department.'" This information is crucial when determining which department, project, or individual is in charge of a resource (and how much that resource is costing your organization).
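As an illustration, the sketch below uses Python and boto3 to list EC2 instances that carry a particular department tag. The tag key, tag value, and region are example values only; your environment may use different names entirely.

import boto3

# Minimal sketch: list EC2 instances whose "department" tag equals "sales".
# The tag key/value and region are example values, not prescribed names.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instances(
    Filters=[{"Name": "tag:department", "Values": ["sales"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])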

Costs
Using the cloud can result in lower operational costs for your organization, but if you're not careful about how your resources are used (and how many resources there are), a cloud environment can end up costing your firm a lot of money. There have been horror stories of organizations that were unaware of active resources and were surprised with a hefty fee when their end-of-month bill arrived. Reviewing a dashboard that shows the costs incurred so far each day can significantly reduce the likelihood of a surprise bill at the end of the month. Most cloud companies also make it simple to set up notifications if your current expenditures reach a certain threshold (a sketch of such a notification appears after the chargeback and showback discussion below).

Chargebacks
Consider a huge corporation with multiple departments, each of which is financially liable for its own resource use in your cloud architecture. Although you could create separate cloud accounts for each department and have the cloud vendor bill each department separately, there are several reasons why this is a bad idea.

Higher overall expenses: Most cloud vendors cut unit costs as your usage volume increases. In other words, a single account with 1,000 virtual machines (or several accounts aggregated together for billing purposes) would pay less overall than 15 individual accounts that together run those 1,000 virtual machines.


Less visibility: Even though each department is financially accountable for cloud resources, your IT staff must still be able to observe what is happening across the organization in terms of cloud consumption.

Less control: IT departments strive to provide consistent controls (such as availability and security) throughout the enterprise. Having numerous cloud accounts makes this extremely tough.

Showbacks
First, review the preceding "Chargebacks" section to grasp the concept. Assume an IT department wants the benefits of a single billing account that includes numerous department cloud accounts, but corporate policy prohibits chargebacks. In this instance, the IT department will almost certainly use a showback, which displays resource consumption and expenses for each department but does not result in a charge to the department.
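As promised above, here is a minimal, hedged sketch of a spending notification using boto3 and the CloudWatch EstimatedCharges billing metric. It assumes billing alerts are enabled for the account; the threshold and the SNS topic ARN are placeholders.

import boto3

# Sketch: alarm when estimated monthly charges exceed $500 (USD).
# Billing metrics live in us-east-1 and must be enabled in the account.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-500",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=500.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder ARN
)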

Elasticity usage
In cloud computing, elasticity refers to the capacity to expand (scale up) or reduce (scale down) resource capability based on demand. Consider an application that accepts POS data (credit card charges), validates the data, and then sends the data to a database. The store from which this data is derived is open from 8 a.m. to 8 p.m. daily, with a spike in sales from 5 p.m. to 7 p.m. In addition, the store runs holiday deals and a monthly sale on the 15th of each month. As a result, during the hours of 8 p.m. to 8 a.m., this application uses little to no compute capacity. It requires regular compute capacity from 8 a.m. to 5 p.m., and additional compute capacity from 5 p.m. to 8 p.m., on holidays, and on the 15th of each month. Of course, this is a broad generalization, and the required capacity may vary widely depending on new product lines, when customers are paid, and other variables. The optimal approach may thus be to schedule compute capacity around sales hours and the parameters described above, but also to monitor server activity and have the compute capacity automatically scale up or down as needed. You can also examine the data over time to see how frequently the capacity scales (the elasticity usage) to better predict normal versus high-volume capacity requirements.
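Building on the store example, the hedged Python sketch below schedules extra capacity for the evening rush using an Auto Scaling scheduled action. The group name, capacities, and cron expression are placeholder values.

import boto3

# Minimal sketch: add capacity every day for the 5-8 p.m. rush.
# Group name, capacities, and schedule are example values.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="pos-ingest-asg",
    ScheduledActionName="evening-rush-scale-up",
    Recurrence="0 17 * * *",     # 5:00 p.m. daily (UTC in this sketch)
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)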

Connectivity
Assume you have a vital cloud resource, such as a database, that must always be available. What would you do if the resource stopped responding? You could rely on user feedback, but that isn't the best approach because user feedback can take time to reach the right individuals. Ideally, you would use a dashboard or an alerting system to monitor connectivity to essential systems.


Latency
Network latency is the amount of time it takes for a network packet to travel from sender to recipient. Large latency values have a significant influence on service performance as well as user experience. Because of the consequences of latency, it is critical to have an easy way to detect latency issues. Some cloud providers enable you to monitor latency and display it on dashboards.

Capacity
Capacity refers to the quantity of a certain resource that is available, such as network bandwidth, storage space, or computational power. Capacity monitoring is one of the more prevalent elements found on dashboards.

Incidents
Incidents are another element frequently encountered on dashboards. Incidents are similar to alerts in that they notify you when a noteworthy event has occurred.

Health A health dashboard will tell you if there are any concerns with resources or services. Cloud companies, such as AWS, generally provide a health dashboard that shows how the vendor's services are performing.

Overall utilization Dashboards display overall use as a "big picture" view. This data provides an overview of how a resource or service is used and how to plan for future utilization patterns based on existing trends.

Availability
As you might expect, many of the components of your cloud architecture must be online nearly all of the time. While 100 percent would be ideal, there is nearly always the possibility of brief periods of downtime. The objective is to keep this downtime to a minimum. You can use monitoring tools to actively or passively check the availability of a cloud resource. The findings should be compared to the minimum uptimes specified in the cloud vendor's SLA.

Using the Cloud Dashboard
Cloud dashboards are incredibly useful and informative. It is common to display dashboards in operations centers or overhead in office environments to give an easy-to-read overview of operations. Dashboards are usually graphical and color-coded for quick notification of potential issues. Dashboards are offered by the cloud providers, by your internal monitoring and management applications, and by any outside monitoring services you have contracted with. They allow you to define what you want to display and in what format. Dashboards are highly customizable and rather easy to configure. Once a dashboard has been defined, users with access rights can easily access it with a standard web browser.


Tagging
Tagging increases security and makes for efficient searches of the data. Many metadata tags can be applied to data, usually in a type-value format where a tag type is defined and then a value is given for the type. For example, a type could be public, and the value could be a country such as Germany, so the pair would be public: Germany. Public cloud vendors offer classification for accessing data and allow you to assign metadata based on your needs.

Cloud management includes accounting mechanisms for measuring consumption, billing, and generating management reports. In this section, you will learn about these nontechnical but important aspects of cloud operational management.

Company Policy Reporting
Companies publish and manage IT policies; these policies cover a wide range of subjects including, but not limited to, how cloud services are consumed and accounted for. To effectively measure compliance, you will need to collect the required data and be able to process the information into effective reports. Cloud providers are aware of policy reporting and offer services to assist you in collecting and presenting reports. These services are cloud-based and can be remarkably customizable. They are presented in a graphical format in a web browser dashboard, and the reports can also be exported to Excel or PDF format.

Elasticity Usage
As you learned earlier in this chapter, one of the great benefits cloud computing offers is elasticity and the flexibility automation provides in adding and removing capacity. Elasticity events often incur charges and are also important to monitor to ensure that your cloud operations are scaled correctly. Management applications can generate usage reports on a large number of events, including elasticity. Metrics such as the event time and duration are recorded, as are details of the added capacity and the utilization metrics collected after the scale-up or scale-down events occurred. Comparisons to performance targets and SLAs are also important reports to generate to meet regulatory compliance needs. As with the other reporting types, you can generate the reports in formats that meet your needs.

Connectivity
Corporate management and various departments in your company will always be interested in how their operations are being accessed and who is connecting to their cloud presence. Reports and graphical presentations can be created to show connections over time, location, new or returning visitors, what was performed (did they buy anything on your e-commerce site?), and how long they were visiting. This is valuable data for sales, marketing, and accounting.

Latency
Network delays and slowdowns can have an extremely negative effect on cloud operations. If an e-commerce site has high latency, many potential customers may give up and leave, causing lost revenue. Companies that rely on cloud-based services such as e-mail, databases, or any of the other many benefits of cloud computing will suffer productivity declines as employees sit and stare at a screen that is not responding. Latency in the network can come from many different sources; individual cloud bottlenecks all add up to latency, and the end result is frustrated employees and customers. Metrics, benchmarks, SLAs, and proactive maintenance all come together to keep latency low and performance high. Naturally, companies want to track and report on how the cloud is performing, and latency is the big picture of performance. Reports can show latency globally and drill down to individual services such as a web server's response and database read-write latency.

Capacity and Utilization
Capacity and utilization reporting can include a wide range of metrics covering storage, CPU, RAM, network, and so on. These reports are helpful in managing usage trends and changing requirements. Accounting will be interested to see that the capacity purchased is being used effectively. As with the other measurements, capacity reports are customizable and offered in a variety of formats.

Incident and Health Reports
Tracking support services and impairments will give you insight into the overall reliability of operations, and the collected data can be compared to your SLA to ensure compliance. Incidents can be defined as needed by either your company or the cloud provider. Incident and health reports encompass elements such as trouble tickets opened, support interactions, and any events that result in the deterioration of your services.

Uptime and Downtime Reporting
A critical, bottom-line metric of any SLA is downtime. If you cannot access your cloud deployment, that is a critical event and must be tracked. Both the cloud provider and your operations center should track downtime and identify the root cause of the outage. These reports can be analyzed to ensure SLA metrics are being met and to determine whether you have to change your architecture to design for higher reliability and less downtime.

4.3 Given a scenario, optimize cloud environments.

4.3.1 Rightsizing
The goal of rightsizing is to guarantee that your instances are using the appropriate amount of resources. For example, if you equip a virtual machine with a large quantity of RAM but the virtual machine only ever utilizes a small portion of that RAM, you will incur a greater cost than necessary (wasted money). In contrast, if you provision a virtual machine with less RAM than it requires, programs on the virtual machine may not run successfully. In both cases, the amount of memory provisioned is not matched to what is actually consumed.


Auto-scaling
When scaling occurs automatically, this is referred to as auto-scaling. For example, you may set up a system in which, when a resource hits a certain threshold, such as a certain number of client connections, a second resource is automatically launched to handle new demands.

Horizontal scaling
Horizontal scaling occurs when additional instances, such as extra web servers, are spun up to manage rising demand. With horizontal scaling, more resources are provided on the fly to satisfy demand. This method is less disruptive than vertical scaling because existing resources continue running while new ones are added.

Vertical scaling
Vertical scaling does not add extra instances; rather, the existing resource is beefed up to meet the demand. In cloud computing, this often means the resource is given additional hardware, such as more vCPUs, RAM, or storage capacity. This can be a time-consuming process because the instance may need to be restarted for the new resource configuration to take effect.

Cloud bursting
Consider the following scenario: your company has already made significant investments in its own private cloud infrastructure. This means you're utilizing on-premises resources to provide cloud-based functionality. However, there is some concern in your organization that the resources available in your private cloud will not be able to meet demand during peak times. Cloud bursting addresses this by allowing workloads to "burst" from the private cloud into a public cloud when demand exceeds local capacity, so the organization pays for public cloud resources only while the peak lasts.

4.3.2 Compute
Previous chapters covered the compute and storage aspects in detail; refer back to those chapters for further information. The following is a brief review of the key compute components.

CPUs The central processing unit (CPU) is a chip found on the main circuit board. It is inserted into a particular area known as a socket. As the name implies, its role is to handle instructions issued by the operating system or programs running on the OS. The CPU is the computer's or server's brain. The CPU determines how quickly computational processes are executed. You won't have direct control over the CPU unless you have access to a dedicated server in a cloud environment. Instead, the hypervisor will communicate with the CPU and present your virtual system with a vCPU.

GPUs
Some applications demand a significant amount of computing power in order to render visuals. You could, for example, create an application that takes an architect's blueprint and produces full renderings of the building. Video processing can also consume a significant amount of processing resources. A GPU is a processor that is specifically built to handle graphics rendering data. This processor may not be required for a cloud-based virtual machine in some cases because you may not have any applications that render such data. In other circumstances, though, this may be an important requirement. As a result, several cloud providers now supply instance classes with substantial GPU processing capacity. AWS's p2, p3, and p4 instance classes, as well as Azure's NC-Series instance classes, are examples.

Memory
It is critical to know how much memory (RAM) the system will require when provisioning a virtual machine in the cloud. The cost of operating a virtual machine is influenced by the amount of memory allocated to it. Memory in the context of cloud computing refers to the volatile, temporary storage capacity that a virtual machine (VM) or an instance in the cloud environment uses for various purposes. This memory is often referred to as RAM (Random Access Memory). Here are some key aspects of memory in the cloud:

1. RAM for Virtual Machines: Just like physical computers, virtual machines in the cloud require memory to function efficiently. RAM is used to store data that is actively being processed or used by running applications and the operating system. The more RAM a VM has, the more data it can hold in memory, reducing the need for frequent access to slower storage devices such as hard drives or SSDs, which can improve overall performance.

2. Resource Allocation: Cloud providers allow users to select the amount of memory (RAM) they want when configuring virtual machines. The choice of memory allocation depends on the specific requirements of the applications and workloads running on the VM. Allocating too little memory can lead to poor performance, while allocating too much can result in inefficient resource usage and increased costs.

3. Instance Types: Cloud providers offer various instance types with different combinations of CPU, memory, and storage resources. Users can choose an instance type that aligns with their application's memory requirements. For example, an application that performs heavy data processing might benefit from an instance with more memory.

4. Elasticity and Scalability: Cloud environments provide the flexibility to scale memory resources up or down based on workload demands. You can resize instances or add more instances as needed to accommodate changes in resource requirements. This scalability ensures that you pay only for the resources you use, avoiding overprovisioning.

5. Data Persistence: It's important to note that memory in a VM is volatile, meaning data stored in RAM is lost when the VM is powered off or restarted. For data persistence, cloud instances typically use separate storage solutions such as block storage, object storage, or databases.

6. Performance Tuning: Properly configuring memory resources is critical for optimizing the performance of cloud-based applications. Monitoring and adjusting memory allocations based on application behavior and resource usage is an ongoing process to maintain optimal performance.

In summary, memory in the cloud, specifically RAM, is a crucial resource for running applications efficiently. It plays a pivotal role in determining the performance and responsiveness of virtual machines and instances, making it an essential consideration when designing and managing cloud-based solutions.


Containers
In many respects, a container is like a virtual machine. It appears to have access to an operating system (unless it is a serverless solution, which consumes fewer hardware resources than either virtual machines or containers), yet it has a very small footprint. Because a container shares most of the operating system with other containers, it can be significantly smaller than a full virtual machine. A container can encapsulate various types of applications, whether a web server, a mail server, or any other software, including applications custom-built by your organization. Choosing a container offers several advantages over a virtual machine. One notable benefit is cost efficiency, as containers are generally more economical to deploy in a cloud environment. This cost-effectiveness stems from their streamlined resource utilization and efficient allocation, allowing you to maximize your cloud infrastructure investment. Additionally, containers boast rapid launch times when compared to virtual machines. They can be initiated swiftly, enabling quicker deployment of applications and services, which is especially advantageous in dynamic and fast-paced operational environments. Scalability is another key advantage of containers. Containers can be effortlessly scaled up or down in response to changing workloads, making it easier to accommodate varying levels of demand and ensuring optimal resource utilization. This scalability empowers your organization to adapt swiftly to evolving business needs without the complexities associated with traditional virtual machines. In summary, containers offer a cost-effective, fast-launching, and highly scalable solution for deploying a wide array of applications, making them a valuable choice for modern cloud-based environments.

4.3.3 Storage

Tiers
Tiers are multiple levels of storage that serve different functions. Typically, these levels are determined by the hardware used by the storage resource to hold the data. For the CompTIA Cloud+ certification, you should be familiar with the four storage tiers: flash, hybrid, spinning disks, and long-term.

IOPS
Input/output operations per second, or IOPS, is the primary speed measurement for a storage device. This is a value you must consider when deciding which underlying storage type to employ for your storage resource.

Deduplication
Data deduplication is the process of eliminating redundant data from a storage resource (or between storage resources). A cloud vendor may have tools to accomplish this procedure in some cases, but in many cases you will need to develop your own solutions to perform deduplication operations. By removing redundant data, you use less cloud storage, which results in lower expenses.
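As a simple illustration of the idea, the Python sketch below finds duplicate files by hashing their contents. Real deduplication systems usually work at the block level, and the directory path here is only an example.

import hashlib
from pathlib import Path

# Minimal sketch: find duplicate files under a directory by hashing their
# contents with SHA-256. Identical content produces the same digest.
def find_duplicates(root):
    seen = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            seen.setdefault(digest, []).append(path)
    return {d: paths for d, paths in seen.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, paths in find_duplicates("/var/backups").items():
        print(digest[:12], [str(p) for p in paths])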


Compression
Compression is the practice of reducing the size of data using a mathematical technique. The method is applied to the data, resulting in compressed data that is stored in the cloud (or on-premises). When the data is required, another process converts it back to its original uncompressed format. Compression reduces the amount of space you use in the cloud storage environment, which lowers your costs. In some circumstances, particularly with object storage, the cloud vendor handles compression. In other circumstances, such as data kept in file storage, the client is responsible for compressing the data.
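As a small illustration, the Python sketch below compresses a file with gzip before it is stored and decompresses it again when needed; the file names are examples only.

import gzip
import shutil

# Minimal sketch: gzip-compress a file before uploading it to storage,
# and decompress it when it is needed again. File names are placeholders.
def compress(src, dest):
    with open(src, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

def decompress(src, dest):
    with gzip.open(src, "rb") as f_in, open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

if __name__ == "__main__":
    compress("sales-export.csv", "sales-export.csv.gz")
    decompress("sales-export.csv.gz", "sales-export-restored.csv")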

4.3.4 Network

Bandwidth
The topic of optimizing network bandwidth is extensive. Bandwidth is a numerical measurement that reflects the amount of data that can be moved through a network in a given amount of time. This value is often expressed in bits per second (or megabits per second or gigabits per second). Bandwidth in cloud computing can be associated with a virtual private cloud (VPC) or with a specific resource.

Network interface controllers (NICs)
Physical network interface controllers contain a plethora of options that can be set to enhance how NICs transfer data across the network. While some parameters are shared by all NICs, vendors' NICs may also include settings that are unique to them. Consider the following examples of popular optimization parameters:

● Jumbo packets: These increase the maximum size of the network payload. Increasing the payload size improves performance in networks that transport large amounts of data.
● Receive buffers: These buffers use memory to store incoming network packets, reducing the likelihood of packet drops (which could result in packets being resent on the network, increasing the load on the network).

While it is true that you can change NIC settings to boost speed, there are a few things you should keep in mind:

● In some circumstances, such as jumbo packets, the NICs for all network devices, including the router, must be configured. The maximum transmission unit (MTU) value, for example, would need to be changed end to end.
● Changing NIC parameters can have the opposite of the intended effect. Jumbo packets, for example, may increase bandwidth consumption in a network where packet payloads are small. It is critical to use monitoring tools and analytics to determine the impact of changing these settings.
● A change to a parameter designed to optimize network utilization may have a negative impact on another aspect of the network (such as security).


In a cloud environment, you may not have access to NIC parameters because the cloud vendor assigns a virtual NIC (vNIC) to your instances rather than a physical NIC. Some vNICs do expose optimization parameters, although they may deviate from the industry standard.

Latency
Network latency refers to the time it takes for a network packet to travel from its source to its destination. Elevated latency values can have a substantial impact on both service performance and user satisfaction. Network delay comprises several components:

● Transmission delay: The time it takes for a packet to be sent onto the network from the sender's side.
● Processing delay: A pause on the receiver's end. When the packet arrives, it must be processed (checked for errors, matched to the destination port, and so on) before it is handed to the system.
● Queuing delay: After processing, the packet is placed in a queue and held there until the system is ready to use it. A long queue can cause latency to increase.
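As a rough illustration, the hedged Python sketch below measures the time to open a TCP connection to a host, which approximates the round-trip latency to that endpoint; the hostname and port are placeholders.

import socket
import time

# Minimal sketch: estimate latency by timing a TCP connection handshake.
# "db.example.internal" and port 5432 are placeholder values.
def connect_latency_ms(host, port, timeout=3.0):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"latency: {connect_latency_ms('db.example.internal', 5432):.1f} ms")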

SDN
Because latency might originate on the sender side, the receiver side, or any system or router in between, minimizing network latency can be a difficult endeavor. Software-defined networking (SDN) separates the network's control plane from its data plane so that traffic paths can be programmed and optimized centrally, which can help reduce latency. The following are other popular approaches used in cloud computing to reduce latency:

● Use edge computing: Place compute and content closer to users at the network edge so that traffic travels a shorter distance.
● Use Multiprotocol Label Switching (MPLS): Using MPLS allows for more optimal routing.
● Use a directly connected network: Most cloud vendors offer a directly connected network (at a cost, of course), which connects private networks directly to the cloud infrastructure. This dedicated connection reduces latency naturally, and it may also be tuned to reduce latency even further.

4.3.5 Placement The placement of cloud resources can have a significant impact on how well your cloud architecture is optimized. This section will teach you about the four components of resource placement: geographical placement, cluster placement, redundancy, and colocation.

Geographical
The geographic placement of your cloud resources is crucial since resources that are close to your users will have lower latency and provide a better user experience. This topic is discussed thoroughly elsewhere in this guide.

Cluster placement A cluster is two or more resources that perform the same function in a cloud environment. These resources operate in tandem, providing various advantages, including the following:


● Redundancy: If one resource becomes unavailable, the client can still be served by another resource or resources.
● High availability: A single resource may become overburdened by client demands. The cluster of resources makes the service more accessible.
● Resilience: Because a cluster has numerous resources, it can be built to recover from a failure faster than a single resource.

Redundancy
Redundancy means that if one resource becomes unavailable, the client can still be served by another resource or resources. Redundancy is typically achieved by provisioning more resources than are strictly required and placing them so that a single failure does not take them all offline.

Colocation
If your company decides to implement a private cloud architecture, one problem will be offering resources in various geographic areas. Larger firms may have offices with data centers in many regions of the world, but smaller organizations are less likely to have readily available distant data centers. Colocation is a type of data center in which a vendor provides an organization with space, power, cooling, and physical security for the organization's own equipment. The facility is frequently shared with other firms, lowering total costs without requiring the organization to pay construction expenses up front.

4.3.6 Device drivers and firmware Although the CompTIA Cloud+ certification exam focuses heavily on public cloud solutions, some topics are more relevant to private cloud infrastructures. Customers rarely have control over the device drivers or firmware used by physical hardware in a public cloud, for example (exceptions can include when customers lease the entire physical system for their use). Device drivers and firmware become more important in a private cloud environment, where the company employing the private cloud has control over the physical systems. This section will provide a brief overview of device drivers and firmware ideas. The actual optimization strategies employed will vary substantially depending on which device drivers (and devices) and firmware are used. The CompTIA test focuses on the distinctions between generic, vendor, and open-source device drivers and firmware.

Generic
Generic device drivers and firmware are not written for one vendor's specific hardware; instead, they are designed to work with a broad class of similar devices and are often bundled with the operating system. They typically work out of the box and are easy to maintain, but they may not expose every feature or performance optimization that the hardware supports.

Vendor
Original equipment manufacturer (OEM) device drivers and firmware, also known as vendor device drivers and firmware, are software programs created by the vendor that manufactures the hardware device. These are often more suitable than generic software since they provide capabilities specific to the hardware and are more closely tied to the hardware's operation. There are, however, several reasons why a generic device driver or firmware might be preferred over a vendor-supplied one:

● The vendor may lock or disable a device function that can be unlocked with generic software.
● Vendor software is typically closed source, which means that the source code cannot be viewed. In that case, consider using an open-source generic driver. The following section goes into greater detail about open source.

Open source
Software authors write code (called source code), which is then transformed during the compilation process into code that is executed by the hardware. Any software that allows you to access the original source code is considered open source. Closed-source software is any software that does not allow you to view the source code. Linux distributions, for example, are built from open-source code but are offered under different models: Red Hat Enterprise Linux is a commercial, subscription-based distribution, whereas openSUSE is a freely available community distribution. Open-source software has several advantages, including:

● If you can see the code, you can be fully aware of the actions that the software takes.
● You can find suspicious code that may cause problems or raise security concerns.
● In many cases, the license under which the open-source code is released allows you to adapt it to meet your needs. Note that this is a licensing feature rather than a requirement of open-source software.


4.4 In a given situation, employ appropriate automation and orchestration methods.

4.4.1 Infrastructure as code
Consider the following scenario: your management instructs you to deploy some virtual machines (VMs) on your organization's cloud infrastructure so that the developers can test the code they are producing. Initially, the developers will want 100 virtual machines, each with comparable components, but some components in each virtual machine must be unique (such as user accounts, passwords, installed software, and storage space size). You are also advised that as the project proceeds, the developers will require additional virtual machines, all of which will be similar to one another but with minor differences. You begin to realize that simply creating these virtual machines may become a full-time job. You are concerned that, in addition to the time it will take to manually configure each virtual machine, you will make mistakes along the way. Infrastructure as Code (IaC) addresses this problem by describing the desired infrastructure in machine-readable configuration files that a tool can apply repeatedly and consistently.

Infrastructure components and their integration
There are two main approaches to IaC: declarative and imperative. The configuration file in a declarative IaC system contains a collection of data that defines the components of a resource. For example, if you were writing a declarative IaC configuration file for an AWS S3 bucket created via AWS CloudFormation, values such as the kind of resource and the bucket name would look like this:
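The snippet below is a minimal, hedged sketch of such a declarative definition, expressed in Python as a dictionary that mirrors the JSON form of a CloudFormation template. The logical ID, bucket name, and stack name are placeholder values, and the boto3 call at the end simply submits the template.

import json
import boto3

# Declarative definition of an S3 bucket, mirroring a CloudFormation template.
# "ExampleBucket" and the bucket name are placeholder values.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",            # the kind of resource
            "Properties": {"BucketName": "example-dev-artifacts"},
        }
    },
}

# Submit the template as a CloudFormation stack.
cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(
    StackName="example-iac-demo",
    TemplateBody=json.dumps(template),
)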

4.4.2 Continuous integration/continuous deployment (CI/CD)
Many software providers follow a classic software release strategy in which software is released at regular intervals. For example, a business may have an annual release cycle, with each year bringing a new major release of the software. Minor releases or bug-fix releases may occur throughout the year, depending on any vulnerabilities or security holes detected in the software. The benefit of this technique is that the organization usually has time to evaluate the software before it is released, resulting in a more stable program or product. The disadvantage of this release approach is that new features can take up to a year to be published, which may allow competitors to acquire more market share if their software is released more frequently. Continuous integration is a process used by developers (typically software developers). A software project uses a version control application with a main branch of development, and the main branch is what is released to clients. Developers do not work on the main branch directly, but rather on their own branches. When using the continuous integration process, developers merge changes in their branch back into the main branch as frequently as possible (not quite continuously, but that is the goal), and the changes are validated using some automated testing procedure. As a result, the primary release branch is constantly validated and updated.

4.4.3 Version control
Managing different versions of source code is one of the most difficult problems that engineers face. Sometimes all you need to do is "revert" to a previous version of code (or roll back to a previous stable release). Manually maintaining these versions can be inconvenient, error-prone, and time-consuming. The fact that numerous programmers collaborate on a single piece of source code exacerbates the problem. A huge program may have tens of thousands of lines of code, with different programmers in charge of different sections of the code. The difficult chore of maintaining several versions of source code can be handled by version control software. Git is the most widely used version control system today. Git uses repositories to store the various versions of files generated by developers. A "central" repository typically contains all of the modifications committed to the main project. Furthermore, each developer has a local copy of this central repository on his or her own workstation. Here are some significant terms in version control:

Clone: A clone operation is used to generate a local repository from a central repository. As a result of this operation, the entire central repository is duplicated on the local system (including all branches).

Branch: The main branch contains the project's most current release version. Typically, each developer creates a distinct branch that is a duplicate of the main branch (or another branch) at the time the developer's branch is formed. Additional branches may be created for special purposes, such as bug patches or the development of a new feature.

Checkout: A developer uses an operation called a checkout to switch to (or create) a branch. This procedure creates or activates a branch with the name specified by the developer. A new branch is often a copy of the main branch at the time it was generated; however, branches can be created from any other branch.

Merge: A merge is the process of integrating changes made to a file in one branch with another branch's version of the same file. A merge may be conducted automatically in some situations, but if the changes between the two versions of the file are complex, a human may be required to perform the merge manually.

4.4.4 Configuration management
To better understand configuration management, recall the topic of Infrastructure as Code. The goal of IaC is to deploy an instance, such as a virtual machine or storage device, using computer-readable configuration files and an IaC tool. Configuration management is a slightly different method in which existing instances are updated, frequently using a computer-readable configuration file and a configuration management tool. Configuration management can be used, for example, to create a user account, install software, or change network settings on a virtual machine. Chef, Puppet, Ansible, and SaltStack are examples of popular configuration management software. The majority of big cloud companies also offer their own configuration management solutions.

Playbook The playbook is the configuration file used to configure the cloud resource in configuration management. As with IaC, some configuration management technologies, such as Chef and Ansible, use a procedural playbook, which offers instructions on how to complete the tasks required to configure the instance. Other configuration management technologies, such as Puppet and SaltStack, use a declarative language in which the desired end result is described.
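To make the declarative idea concrete, here is a minimal, hedged sketch in Python of an idempotent "ensure this user exists" task of the kind a configuration management tool would run. The username is a placeholder, and the sketch assumes a Linux host with the useradd command and sufficient privileges.

import pwd
import subprocess

# Declarative-style task: describe the desired end state ("user exists")
# and act only if the system is not already in that state.
def ensure_user(username):
    try:
        pwd.getpwnam(username)            # already present: nothing to do
        print(f"user {username} already exists")
    except KeyError:
        subprocess.run(["useradd", "--create-home", username], check=True)
        print(f"created user {username}")

if __name__ == "__main__":
    ensure_user("appsvc")                 # placeholder username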


4.4.5 Containers
Consider containers as a substitute for virtual machines. While virtual machines are an excellent solution in many circumstances, if you only need to execute a single application, a serverless approach may be preferable. In many respects, a container is like a virtual machine. It appears to have access to an operating system (unless it is a serverless solution, which consumes fewer hardware resources than either virtual machines or containers), yet it has a very small footprint. Because a container shares most of the operating system with other containers, it can be significantly smaller than a full virtual machine. Containers serve as versatile encapsulation units capable of containing various applications, ranging from web servers and mail servers to custom applications developed by your organization. Opting for containers over virtual machines offers several distinct advantages:

1. Cost-Efficiency: Containers are generally more cost-effective when deployed in a cloud environment. Their efficient resource utilization and lightweight nature make them an economical choice. You can achieve more with less, optimizing your cloud infrastructure investment.

2. Rapid Deployment: Containers can be launched swiftly, outpacing the deployment speed of traditional virtual machines. This rapid launch capability translates to faster provisioning of applications and services, which is invaluable in dynamic and responsive operational settings.

3. Scalability: Containers excel in terms of scalability. They can be effortlessly scaled up or down in response to shifting workloads, enabling you to meet fluctuating demand levels efficiently. This scalability empowers your organization to adapt promptly to evolving business requirements without the complexity associated with traditional virtual machines.

4. Resource Isolation: Containers offer a degree of isolation between the applications running within them, ensuring that changes or issues in one container do not adversely affect others. This isolation enhances security and reliability.

5. Portability: Containers are highly portable, allowing you to develop applications in one environment and run them seamlessly in various cloud or on-premises environments. This portability promotes consistency and flexibility in application deployment.

6. Resource Efficiency: Containers share the host operating system's kernel, which reduces overhead and minimizes resource wastage. This efficient resource utilization translates into improved density and cost savings.

7. Version Control: Containers enable version control and easy rollback to previous application states, simplifying maintenance and updates.

8. DevOps Integration: Containers align well with DevOps practices, facilitating a continuous integration and continuous delivery (CI/CD) pipeline. They allow for streamlined development, testing, and deployment processes.

In summary, containers offer a compelling choice for deploying a wide array of applications in cloud environments. Their cost-effectiveness, rapid launch capabilities, scalability, resource efficiency, and suitability for modern DevOps practices make them an attractive option for organizations seeking agility, efficiency, and flexibility in their cloud deployments.

4.4.6 Automation activities Automation is the process of carrying out actions in order to accomplish a desired result without the need for human intervention. Automation is a critical component of a successful cloud computing deployment since numerous jobs must be completed in a timely manner, which is impossible if they must constantly be performed by a human.

Routine operations
Many different sorts of routine operations can be automated in the cloud. You could, for example, automate the process of creating resources as needed, such as storage devices. Alternatively, you could automate the collection of metric data and have it delivered to a software application that will analyze it. Other routine operations that could be automated include the following:

● Backing up data for a vital system
● Changing firewall rules automatically
● Notifying users or customers about system updates or changes

Updates Consider the following scenario: your cloud infrastructure has dozens (or perhaps hundreds) of Linux virtual machines. Updates to each of those operating systems are required on a regular basis. You might use an automation approach to push updates to all operating systems without requiring any human input.

Scaling
Scalability in cloud computing refers to a resource's capacity to adapt to changing needs. This is an important feature, and it is frequently cited as one of the strongest arguments for migrating on-premises systems to the cloud. Scaling can be accomplished through a variety of approaches, including auto-scaling, horizontal scaling, vertical scaling, and cloud bursting.

Shutdowns Having computational devices running, such as virtual machines and containers, incurs costs for the enterprise. If you are aware of compute devices that do not need to be running during specified times (such as weekends), you can automate their shutdown to save the organization money.
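As a hedged illustration, the Python sketch below stops any running EC2 instances that carry an "auto-stop" tag. The tag key/value and region are placeholders, and a scheduler (such as a cron job or a scheduled function) would be expected to invoke it at the desired times.

import boto3

# Minimal sketch: stop running instances tagged auto-stop=true.
# Tag name/value and region are example values.
ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:auto-stop", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"] for r in reservations for inst in r["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("stopping:", instance_ids)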

Restarts There are instances when it is necessary to restart a compute device. For example, if you install a new kernel on a Linux-based virtual machine, the operating system will require a reboot before the new kernel is activated. If you've automated the process of upgrading your organization's Linux virtual machines, you'll probably want to automate the restart process as well, especially if the updates contain a new kernel version.


Create internal APIs
An application programming interface (API) is a means for two applications to communicate with one another. A client application sends an API request over the network, and the server responds by performing activities based on the client request. Many of the systems with which you interact use APIs to complete tasks (even if you are not aware that APIs are being used). On the server side, APIs can be generated automatically with the help of automation tools. This functionality is critical when new servers are created automatically, such as during a scaling operation.
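For illustration, the hedged sketch below uses Flask (a common Python web framework) to expose a tiny internal API endpoint of the kind a client application could call; the route and payload are invented for the example.

from flask import Flask, jsonify

# Minimal internal API: one endpoint that reports service status.
app = Flask(__name__)

@app.route("/api/status")
def status():
    # In a real service this would be gathered from monitoring data.
    return jsonify({"service": "inventory", "healthy": True})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)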

4.4.7 Secure scripting One of the difficulties of working with automation is that you will frequently need to write scripts (programs) to execute automation, configuration, and orchestration activities. These scripts could be standard language scripts, such as Python, or scripts customized to a cloud vendor.

No hardcoded passwords
Although automation scripts do not require humans to run them, they are likely to be executed by users who did not write the script, either for testing or when they want to start a process manually. For example, if you have an automation script that creates a virtual machine, users may want to run that script to create their own virtual machine. In most cases, these users will need to be able to read the script in order to execute it. As a result, no passwords should ever be included in the script. A script may use such passwords to gain elevated access (see "Use of individual service accounts" below), and if a user can read the password, that user may be able to gain this elevated status manually and perform unauthorized actions.

Use of individual service accounts When the script is run, it should use a service account (or service role) to carry out its tasks. This service account would have access to resources that an ordinary user, such as the one running the script, would not have.

Password vaults
Because you don't want to store passwords within scripts, you can utilize a password vault instead. A password vault is a cloud-based service that stores passwords (and other secrets, such as keys) and makes them available to scripts or other processes as needed. The script can be written to request the secret from the password vault at run time, authenticate with it, and then perform the necessary activities.
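As a hedged sketch, the Python snippet below retrieves a database password from AWS Secrets Manager at run time instead of hardcoding it. The secret name is a placeholder, and the calling identity is assumed to have permission to read that secret.

import boto3

# Fetch a secret at run time rather than embedding it in the script.
# "prod/db/password" is a placeholder secret name.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
db_password = secrets.get_secret_value(SecretId="prod/db/password")["SecretString"]

# Use db_password to connect to the database here; never print or log it.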

4.4.8 Orchestration sequencing
Remember that Infrastructure as Code is the process of deploying an instance using a configuration file. This technique works great for deploying a single instance (or a group of similar instances) that is unrelated to other instances, but what if you need to deploy an entire system of instances that are related to one another? Assume you need to deploy a virtual machine that runs data-generating software. This data is then sent to a database, and the results are made available through a website. This implies that three instances must be deployed: a virtual machine, a database server, and a web server. In fact, for the entire system to function properly, you may need to deploy these instances in a precise order. Orchestration sequencing refers to the process of deploying instances that are related to one another in a specified order. You can use orchestration to deploy the database server first, then gather information about it (such as its IP address), and use that information to deploy the virtual machine so that the VM transmits data to the database. The orchestration process can then deploy the web server, and the web server can connect to the database to display the information.
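The following is a minimal, hedged sketch of that ordering in Python. The deploy_* helper functions are hypothetical stand-ins for whatever IaC or SDK calls your environment uses; the point is simply that each step waits for the previous one and passes along the information it produced.

# Hypothetical helpers: each would call your IaC tool or cloud SDK and
# block until the resource is ready. They are placeholders, not real APIs.
def deploy_database():
    ...  # provision the database, wait until available
    return "10.0.1.25"          # return its address for later steps

def deploy_app_vm(db_address):
    ...  # provision the VM, configure it to send data to db_address
    return "i-0abc123"

def deploy_web_server(db_address):
    ...  # provision the web server, point it at the same database
    return "web-prod-1"

def orchestrate():
    db_address = deploy_database()          # step 1: database first
    deploy_app_vm(db_address)               # step 2: VM that feeds the database
    deploy_web_server(db_address)           # step 3: web tier that reads from it

if __name__ == "__main__":
    orchestrate()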

4.5 Given a scenario, perform appropriate backup and restore operations.

4.5.1 Backup types
When developing a backup strategy, it is critical to understand the various backup types. Some types offer faster backup times but take longer to restore. Other methods take longer to back up but provide a simpler and faster restore process. In this section, you'll learn about the various backup types and their benefits and drawbacks.

Incremental
All files that have changed since the last incremental or full backup are included in an incremental backup. Incremental backups have several advantages:

● Each backup is faster than a full backup and, in most situations, faster than a differential backup.
● Because incremental backups are typically smaller than full and differential backups, they require less storage space.

However, there are certain drawbacks:

● It can take some time to restore from incremental backups. The full backup must be restored first, followed by each incremental backup up to the incident that caused the data loss.
● It can be difficult to locate a specific file in an incremental backup scheme, since the file may exist in multiple backup sets.

Differential
A differential backup saves any files that have been added or changed since the last full backup. Unlike an incremental backup, a differential backup is not limited to changes since the previous differential or incremental backup; it always captures everything that has changed since the last full backup. Differential backups have various benefits:

● Because all the source data is contained in two backups (full and differential), it is easier to retrieve files than with incremental backups.
● Restoring data from a full backup plus a differential backup is faster than restoring from a chain of incremental backups.

However, there are certain drawbacks:

● When compared to incremental backups, differential backups can be time-consuming.
● Differential backups typically take up more storage space than incremental backups.
● A differential backup will consume more bandwidth than an incremental backup if conducted remotely.

Full Everything from the source is backed up with a full backup. Files that haven't changed since prior backups may fall into this category. Full backups offer various benefits: ● ●

Images all the source data is contained in a single backup, making it easy to discover. Images Restoring from a full backup is faster than other kinds of backup.

However, there are certain drawbacks:
● A full backup might be a time-consuming process.
● Full backups typically require significantly more storage space than alternative approaches.
● If the backup is done remotely, a full backup will consume significantly more bandwidth than other techniques.
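To make the distinction concrete, here is a small, hedged Python sketch that selects which files each backup type would copy, based only on file modification times. The directory path and timestamp variables are illustrative; real backup tools track archive bits, change journals, or catalogs rather than raw mtimes.

import time
from pathlib import Path

def files_to_back_up(root, backup_type, last_full, last_any):
    """Return the files a given backup type would copy.

    last_full -- epoch time of the last full backup
    last_any  -- epoch time of the last backup of any kind (full or incremental)
    """
    selected = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        mtime = path.stat().st_mtime
        if backup_type == "full":
            selected.append(path)                    # everything, every time
        elif backup_type == "differential" and mtime > last_full:
            selected.append(path)                    # changed since the last full
        elif backup_type == "incremental" and mtime > last_any:
            selected.append(path)                    # changed since the last backup
    return selected

if __name__ == "__main__":
    now = time.time()
    week_ago, day_ago = now - 7 * 86400, now - 86400   # illustrative timestamps
    for kind in ("full", "differential", "incremental"):
        count = len(files_to_back_up(".", kind, week_ago, day_ago))
        print(f"{kind}: {count} file(s) would be copied")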

Snapshot
Many backup tools back up a file system in the following manner:
• Record the metadata for the files that are being backed up.
• Record the metadata for the directories that are being backed up.
• Back up the directories (which are really the lists of files that belong in each directory).
• Back up the contents of the files.
When live (mounted) file systems are backed up, this technique has a flaw: modifications to the file system can occur between backing up the metadata and backing up the file data. It is better to unmount the file system before backing it up, but on production systems this is not always practical. As an alternative, you can use a snapshot. A snapshot is a "frozen image" of the file system as it existed at the moment the snapshot was created. By backing up the frozen image, you ensure a consistent (error-free) backup.
Storage snapshots are a point-in-time copy of a storage volume or image that can be used as a backup to shorten recovery time objectives (RTOs) and recovery point objectives (RPOs). There are several variations of snapshots; the two most prominent are copy-on-write and redirect-on-write snapshots. The copy-on-write snapshot is based on reserved storage capacity


that is required to store the snapshot of your data. When the snapshot is performed, it includes only metadata that contains information about where the original data is stored, instead of storing the actual data. The primary advantage is that copy-on-write snapshots are immediate and have a minimal impact on storage operations. The snapshot will then track the storage volume by monitoring the storage blocks as changes are performed. When these blocks of storage data change, the original blocks of data are moved to the reserved snapshot storage area. This is done prior to the block of data being overwritten. The snapshot data is in sync with the time that the snapshot was taken. A copy-on-write snapshot requires an original copy of the data and stores changes to the reserved storage. The advantage of the copy-on-write method is that it is compact and uses minimal backup storage space. However, there will be a negative effect on storage write performance, as the original volume must store the backup data to the copy-on-write backup storage space prior to accepting the write request.
The redirect-on-write snapshot process takes a different approach. When a storage write operation takes place, the redirect-on-write snapshot will not store the new data over the existing block. Instead, it will create a pointer that tells the snapshot image where the original block is located on the original storage volume. These pointers allow a system accessing the snapshot to reference the data at the point in time the snapshot was taken, and this approach uses fewer storage I/O operations than a copy-on-write snapshot. This allows for a more efficient storage system with less overhead and a faster response.
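The copy-on-write behavior can be illustrated with a short, simplified Python sketch: the snapshot holds only references until a block is about to be overwritten, at which point the original block is copied into the reserved snapshot area. Block representation, data types, and the class name here are purely illustrative.

class CopyOnWriteSnapshot:
    """Toy model of a copy-on-write snapshot over a block-based volume."""

    def __init__(self, volume):
        self.volume = volume          # live volume: list of data blocks
        self.preserved = {}           # reserved area: original blocks saved on first write

    def write(self, index, data):
        # Before overwriting a block for the first time, copy the original
        # into the snapshot's reserved storage (this is the extra write that
        # slows down copy-on-write snapshots).
        if index not in self.preserved:
            self.preserved[index] = self.volume[index]
        self.volume[index] = data

    def read_snapshot(self, index):
        # Snapshot reads return the preserved block if it changed,
        # otherwise the unchanged block from the live volume.
        return self.preserved.get(index, self.volume[index])

if __name__ == "__main__":
    vol = ["A", "B", "C"]
    snap = CopyOnWriteSnapshot(vol)
    snap.write(1, "B2")                       # live data changes after the snapshot
    print(vol)                                        # ['A', 'B2', 'C']
    print([snap.read_snapshot(i) for i in range(3)])  # ['A', 'B', 'C'] -- point in time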

4.5.2 Backup objects
This section covers the different types of data that you should consider backing up. The method used to back up data is determined by the type of data; the data object is another term for the type of data.

Application-level backup
Many applications provide data that must be backed up individually. A web server, for example, contains web page files that may need to be backed up more frequently than the rest of the operating system. Most backup programs allow you to back up individual directories (folders) or files. In most circumstances, applications store all of their data in a single directory structure, but if the files are dispersed, you must configure the backup software to include all file and directory locations for the application data.

File system backup
A file-system backup is one of the most popular forms of backup. This type of backup saves not only the data (files), but also the data's structure and metadata. The data structure is the directory or folder structure of the data. The metadata of a file is anything other than the file's content or name. Metadata examples include the following:
• Ownership of the file
• Type of file
• Permissions
• Date/timestamps
• Location of the data blocks

Database dumps
A database dump is created when a utility such as mysqldump is used to duplicate a database, either to produce a backup or to populate a second instance of the database. It is critical to understand that a database dump is specific to the relational database management system (RDBMS), so a database dump from one RDBMS generally cannot be restored to another. A migration tool is required instead. A database dump can also be used to back up only the database's schema or metadata. For example, you might wish to construct a test database that contains the structure of an existing database but not its contents.
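As a hedged illustration (assuming mysqldump is installed and reachable), the following Python sketch produces a full logical dump and a schema-only dump. The host, user, and database names are placeholders; the --no-data flag is the standard mysqldump option for dumping structure without table contents.

import subprocess

# Hypothetical connection details -- replace with your own.
HOST, USER, DB = "db.example.internal", "backup_user", "ordersdb"

def dump_database(outfile, schema_only=False):
    """Run mysqldump and write the dump to a local file."""
    cmd = ["mysqldump", "-h", HOST, "-u", USER, "-p", DB]   # -p prompts for the password
    if schema_only:
        cmd.append("--no-data")          # structure only, no table contents
    with open(outfile, "w") as fh:
        subprocess.run(cmd, stdout=fh, check=True)

if __name__ == "__main__":
    dump_database("ordersdb-full.sql")                       # full logical backup
    dump_database("ordersdb-schema.sql", schema_only=True)   # schema/metadata only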

Configuration files
Configuration files on an operating system are used to configure either applications or operating system functions. On Linux-based systems, for example, the /etc/passwd and /etc/shadow files are used to configure user accounts. While these files will be backed up as part of file-system backups, they frequently require more frequent backups. An operating system firewall, for example, may generate dynamic rules whenever the system detects a hacking attempt. These rules may be added at any time during the day. If you rely on a file-system backup that runs once a week, say on Sunday evening, several of these rules may be lost if the file system needs to be restored in the middle of the week.

4.5.3 Backup targets
Another key factor to consider when developing a backup strategy is where you direct the backup data. This section will go over the benefits and drawbacks of backing up to tape, disk, or object storage.

Tape
For decades, tape devices have been used to store data (in fact, magnetic tape was invented in 1928 to record sound). Tapes are an excellent backup medium since they are both inexpensive and dependable. They are also an excellent medium for long-term storage. However, there are several drawbacks to using tapes. They are slower than other types of devices, such as disk drives, so the backup process is often slower. They also hold less usable storage per tape, which means that many tapes may be necessary to accomplish a backup or restore. In many modern cloud-based data centers that utilize tape devices, tapes in the drive are swapped by a robotic arm; if this capability isn't available, the manual procedure of replacing tapes is difficult and labor-intensive.


Disk
Disk backup devices are typically compared to tape backup devices. When comparing the two, keep in mind that disk devices are frequently far faster than tape devices and can nearly always hold significantly more data. Disk devices, particularly SSD drives, are also more expensive than tape devices. Disk drives are excellent at swiftly locating and recovering specific pieces of data. On disk backups, the verification procedure (where the backup data is validated to be error-free) is faster and more efficient. Disk backups are also more scalable, albeit at a greater expense.

Object
Because objects are often saved on disk devices, they share some of the same benefits and drawbacks as disk backup devices. Object-based backups, on the other hand, have certain distinct advantages. One advantage is that object storage typically allows for the adoption of a retention schedule. Most object storage solutions enable old data to be transferred to less expensive storage alternatives and, eventually, totally erased.
While many organizations have shifted to disk or cloud storage for their primary data storage needs, tape remains a prevalent choice for backup and recovery purposes. The enduring popularity of tape is attributed to its durability and portability, allowing it to securely store vast volumes of data offline and ensure the long-term stability of archives. AWS Tape Gateway presents an innovative solution that enables the transition from physical tape usage on-premises to virtual tape utilization within the AWS cloud, all without necessitating alterations to existing backup workflows. Tape Gateway seamlessly integrates with leading backup applications and maintains cached virtual tapes on-premises for rapid data access with minimal latency. It further enhances security by encrypting data during transmission between the gateway and AWS, while also applying data compression techniques. The gateway effectively manages the transition of virtual tapes between Amazon S3, Amazon S3 Glacier, or Amazon S3 Glacier Deep Archive, optimizing storage costs in the process. Managing tape media, associated costs, third-party offsite contracts, and grappling with the exponential growth of data pose considerable challenges for organizations relying on tape backup. AWS Storage Gateway, through its Tape Gateway configuration, provides an alternative to physical backup tapes that seamlessly aligns with existing backup procedures. This solution combines the local performance advantages of disk storage, the cost-effectiveness and scalability of the AWS cloud backend, and minimal disruption to current systems, delivering a comprehensive and efficient backup strategy.
Creating Object-Level Backups
Utilize object-level backups to perform backups of individual schemas or tables within your database. Object-level backups prove particularly valuable in scenarios involving multi-tenanted database environments. For instance, consider an international airport's database, which is designed to represent various airlines within its schemas. Within these schemas, tables may contain diverse types of information related to each airline, such as ARRIVALS, DEPARTURES, and PASSENGER data. In such a setup, creating object-level backups for specific schemas


allows you to restore data at the granularity of individual airline tenants or other critical data segments. To create one or more object-level backups, you can establish a configuration file that outlines the backup destination, the name of the object-level backup, and a list of objects to be included in the backup. You have the flexibility to use parameters like includeObjects and excludeObjects, along with wildcard functionality, to precisely specify the objects you want to include in the backup. For more detailed guidance on specifying the objects for inclusion, you can refer to the documentation on Including and Excluding Objects Using Wildcards.

4.5.4 Backup and restore policies
Random, unplanned backups serve no genuine business purpose. It is critical to have a well-planned backup strategy in place, as well as policies about how and where backups are stored. This section covers these subjects as well as the restore policies that you should develop for your company.

Retention
The concept of retention is to specify how long backups should be kept before being removed. While holding backup data indefinitely may appear to be a good idea, storing large volumes of data is expensive. How long should data be kept? There is no single rule that applies to all situations. When developing a retention policy, keep the following elements in mind:
• What is the importance of the data to the organization?
• When does old data no longer serve any purpose?
• What are the costs of maintaining old data?
• Are there any compliance regulations that must be followed when storing old data?

You should begin by categorizing data. These categories will include data that is subject to compliance rules, data that is confidential or proprietary in nature, and data that is essential for business purposes. The actual categories you design are entirely up to you (save for compliance regulations and any restrictions established by your organization's board of directors). After categorizing the data, you must consider the following:
• How long will the data be kept?
• How and where will it be kept?
• How will data be destroyed when it reaches the end of its useful life?
• How quickly can the data be recovered?
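As an illustrative sketch (not a drop-in tool), the following Python snippet applies per-category retention periods to a list of backups and flags the ones that are past their retention window. The category names and retention lengths are placeholders that a real policy would define.

from datetime import datetime, timedelta

# Hypothetical retention periods per data category.
RETENTION = {
    "compliance":  timedelta(days=7 * 365),   # keep 7 years
    "business":    timedelta(days=365),       # keep 1 year
    "noncritical": timedelta(days=90),        # keep 90 days
}

def expired_backups(backups, now=None):
    """Return backups older than their category's retention period."""
    now = now or datetime.utcnow()
    return [b for b in backups
            if now - b["created"] > RETENTION[b["category"]]]

if __name__ == "__main__":
    sample = [
        {"name": "fs-2023-01.tar", "category": "noncritical",
         "created": datetime(2023, 1, 1)},
        {"name": "ledger-2020.dump", "category": "compliance",
         "created": datetime(2020, 6, 1)},
    ]
    for backup in expired_backups(sample):
        print("eligible for deletion:", backup["name"])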

Schedules
A backup schedule specifies how frequently and what type of backup you will perform. Consider, for example, a weekly backup strategy for a file system in which a full backup runs every Sunday and incremental backups run on the other days. The frequency with which you plan backups will be determined by factors such as how frequently the data changes. Such a weekly schedule, for example, would fail if substantial modifications were made to the file system every day. A daily backup or backups conducted


throughout the day would make more sense in that instance.
Scheduling
The backup window is the time available for the backup operation to run while the source storage system is either offline or lightly used. Backup applications allow you to define when the backup operation takes place and what type of backup is to be performed. As an illustration, you can set up a comprehensive backup plan in which a full backup is scheduled every Sunday from 1 a.m. to 5:30 a.m., while incremental backups on the other days of the week occur between 1 a.m. and 2 a.m. This scheduling strategy must consider both the time required for the backup process to finish and factors such as data volume, data transport across a network, and the write speed of the storage media.
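A minimal sketch of that kind of schedule logic in Python is shown below; the window times and weekday choice mirror the example above and are assumptions, not requirements.

from datetime import datetime, time

FULL_BACKUP_DAY = 6  # Sunday (Monday is 0 in Python's weekday numbering)

def planned_backup(now: datetime):
    """Return (backup_type, window) for the example weekly schedule."""
    if now.weekday() == FULL_BACKUP_DAY:
        return "full", (time(1, 0), time(5, 30))        # Sunday 1:00-5:30 a.m.
    return "incremental", (time(1, 0), time(2, 0))      # other days 1:00-2:00 a.m.

if __name__ == "__main__":
    kind, (start, end) = planned_backup(datetime(2024, 3, 10))  # a Sunday
    print(f"{kind} backup scheduled between {start} and {end}")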

Location
Keep in mind that you want to keep your backups secure. This entails ensuring that the data is both secure against theft and protected from damage. As a result, storing backups in a secure location is a vital component of a backup strategy. In addition to security, you should consider where you will store your backups in relation to where the original data is stored. Assume you are backing up servers in your data center and you opt to store your backups in the server room. If a fire breaks out in the server room, you risk losing not just your original data, but also all backup data. As a result, many firms keep backups offsite, such as in the cloud.

SLAs
If you choose a cloud vendor to manage your backups, you should be aware of the backup service-level agreement (SLA). The SLA should include provisions for the uptime percentage of backup storage, which can vary depending on whether geo-redundant storage is employed. The published SLA for Azure Backup is one example of a cloud storage service SLA.

Recovery time objective (RTO)
Assume the data on an instance has become corrupted, and you have been alerted that you must recover this data from a backup. How long should the recovery process take from the time you become aware of the loss? That question defines the recovery time objective. After the recovery is complete, you must document how the recovery process went, including whether or not it took longer than the RTO. This information is especially relevant for any business-critical data or data that is subject to regulatory compliance oversight.

Recovery Point Objective
The Recovery Point Objective (RPO) signifies the specific restore point that you aim to recover to in the event of an outage. In essence, the RPO represents the extent of data that could potentially be lost when resuming operations following a disaster. As an example, if you have a database storing sales from your online e-commerce site and it is set to create a backup every two hours, then the RPO would be a maximum of two hours. If a disaster struck and as part of the recovery process you


switched over to a backup site, the RPO would define how fresh the data would be at the new site. If the site is used for financial or other critical transactions, such as healthcare, the RPO would need to be close to zero since losing any data could be catastrophic. However, if you had an outage on a noncritical system where losing recent data would have no impact, then the recovery point could be longer and not have any lasting negative effect on the business. When performing business continuity planning, the RPO plays a critical role in the design of the cloud computing architecture.
Recovery Time Objective
The recovery time objective (RTO) is the amount of time a system can be offline during a disaster; it is the amount of time it takes to get operations back up and running after a failure. When planning for business continuity, the cost of the downtime must be taken into consideration. Ask yourself what the impact to business operations will be if the cloud operations, or a section of them, were to fail. If the site is an active e-commerce operation during the busy holiday selling season, the outage can cost the company a large amount of lost revenue for each minute the site cannot be reached by customers. In this case, the RTO would need to be a short amount of time. However, if the failure is a noncritical system that has no impact on operations, the RTO can be days or even weeks to bring the systems back online. As you can see, the RTO value will be a critical metric for your cloud business continuity design and operations.
When working with a business impact analysis (BIA), experts often attempt to predict the possibility of failure. For example, what is the likelihood that a hard disk within a RAID configuration will fail? The following two terms are often used to predict potential failures.
The Mean Time Between Failures (MTBF) serves as a crucial metric for assessing the reliability of a system, and it typically represents the average duration between system failures. This metric offers valuable insights into a system's ability to operate consistently and without interruptions. A higher MTBF value indicates greater reliability and a longer interval between failures, signifying that the system can perform reliably over an extended period. MTBF calculations involve gathering historical data on system failures and analyzing the time intervals between these failures. By doing so, organizations can gain a deeper understanding of their system's performance, identify areas that require improvement, and implement measures to enhance reliability. A high MTBF is particularly important in critical systems where downtime can lead to significant disruptions, financial losses, or safety hazards. Industries such as aviation, healthcare, manufacturing, and data centers prioritize high MTBF values to ensure continuous and dependable operations. Furthermore, MTBF is often used in conjunction with other metrics like Mean Time to Repair (MTTR) and availability to create a comprehensive picture of a system's reliability and resilience. By regularly monitoring and optimizing MTBF, organizations can minimize downtime, improve customer satisfaction, and enhance overall operational efficiency.
Mean time to recover (MTTR)
The Mean Time to Recover (MTTR) is a crucial metric that quantifies the average time it takes to restore a system following a failure. Essentially, MTTR is synonymous with the Mean Time to


Repair, and both terms convey the same fundamental concept. Organizations that engage in maintenance contracts often stipulate the MTTR as a key component of the contract agreement. In such arrangements, the supplier commits to, on average, restoring a failed system within the specified MTTR timeframe. It's important to note that while MTTR sets an average restoration time, it doesn't guarantee that every restoration will occur precisely within this timeframe. Variations may occur, with some incidents taking slightly longer to resolve and others being resolved more quickly, ultimately balancing out to the average defined by the MTTR. MTTR plays a critical role in ensuring system reliability and minimizing downtime, which is particularly significant for businesses that rely heavily on their systems' continuous operation. By setting and monitoring MTTR benchmarks, organizations can proactively manage and enhance their system recovery processes, improving overall operational efficiency and customer satisfaction.
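Since MTBF and MTTR are usually combined into an availability figure, here is a short, hedged Python example of the common calculation (availability = MTBF / (MTBF + MTTR)); the sample numbers are made up purely for illustration.

def mtbf(total_uptime_hours, failure_count):
    """Mean Time Between Failures: average uptime between failures."""
    return total_uptime_hours / failure_count

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the system is expected to be operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

if __name__ == "__main__":
    # Illustrative figures: 4,380 hours of uptime over six months with 3 failures,
    # each taking about 2 hours to repair on average.
    m = mtbf(4380, 3)           # 1,460 hours between failures
    a = availability(m, 2)      # roughly 99.86% availability
    print(f"MTBF = {m:.0f} h, availability = {a:.4%}")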

3-2-1 rule
• 3-2-1 backup: a time-honored and tested strategy
• Three different copies of the data (production data + 2 backups)
• Two different forms of media (for example, disk and tape)
• One copy offsite

The 3-2-1 backup rule is a widely recognized and venerable backup strategy that has stood the test of time. It provides a structured approach to data protection and disaster recovery. The core principle of the 3-2-1 backup strategy is straightforward: ensure that you maintain three copies of your data. This includes your primary or production data and two backup copies. These copies should reside on two distinct types of media, typically involving disk storage and tape backup. Additionally, one of these copies should be stored off-site, serving as a crucial safeguard for disaster recovery scenarios. This rule is grounded in several key benefits and considerations:
• Data Redundancy: By having three copies of your data, you significantly reduce the risk of data loss due to hardware failures, data corruption, or accidental deletions. Redundancy enhances data availability and minimizes downtime.
• Media Diversity: Storing data on two different types of media, such as fast-access disk storage and reliable tape backups, offers a balance between speed and durability. Disk storage provides quick access to data, while tapes offer long-term retention and offline protection.
• Off-Site Backup: Keeping one copy of your data off-site is a crucial aspect of disaster recovery. It safeguards your data in the event of catastrophic incidents like fires, floods, or physical damage to your primary site.
• Compliance and Best Practices: The 3-2-1 backup rule aligns with industry best practices and compliance requirements in various sectors. It is considered a robust strategy for data protection.
• Scalability: This strategy can scale to accommodate growing data volumes. As your data needs increase, you can expand your backup infrastructure while maintaining the 3-2-1 principle.


• Data Integrity: Regularly testing and verifying your backup copies ensures data integrity and reliability. Scheduled backups and verification processes are integral to the success of the strategy.

In summary, the 3-2-1 backup rule is a well-established and effective approach to data backup and disaster recovery. It provides a solid framework for organizations to safeguard their data, ensure business continuity, and adhere to best practices in data protection.
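As a rough sketch, the 3-2-1 rule can be expressed as a simple check over a backup inventory; the field names used here (media, offsite) are invented for the example and are not part of any standard.

def satisfies_321(copies):
    """Check a list of data copies against the 3-2-1 rule.

    Each copy is a dict such as:
    {"name": "prod", "media": "disk", "offsite": False}
    """
    enough_copies = len(copies) >= 3
    media_types   = {c["media"] for c in copies}
    two_media     = len(media_types) >= 2
    one_offsite   = any(c["offsite"] for c in copies)
    return enough_copies and two_media and one_offsite

if __name__ == "__main__":
    inventory = [
        {"name": "production volume", "media": "disk", "offsite": False},
        {"name": "local backup",      "media": "disk", "offsite": False},
        {"name": "tape set offsite",  "media": "tape", "offsite": True},
    ]
    print("3-2-1 compliant:", satisfies_321(inventory))   # True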

4.5.5 Restoration methods
A restoration procedure is the process through which data is restored. This section covers several restoration methods to consider when establishing a backup strategy.

In place
With an in-place restoration, the data is restored in its entirety, back to its original location. This sort of restoration is typical when the loss of a whole file system or database has a negative impact on business operations and the restoration must be completed as soon as feasible. However, there are several disadvantages to executing an in-place restoration. A corrupted backup or an out-of-date backup may cause further issues following the restoration. If the in-place approach is used to partially recover data, it may end up overwriting valid and up-to-date data. In these circumstances, it makes sense to consider restoring to an alternate location, which is described in the following section.

Alternate location
With an alternate location restoration, the data is restored to a different location from where the original data was stored. This method allows you to examine the data to check that it is complete, not corrupted, and has timestamps recent enough for it to be regarded as good data. After the data has been verified, it can be copied back to its original location. The most significant advantage of this technique is data verification. However, the process can take time, which may cause further problems because the data will be unavailable for business purposes for an extended length of time. If speed is critical, the in-place technique described in the prior section should be considered.

Restore files
In some circumstances, restoring everything might not be necessary. For example, if a user deletes a file by mistake, only that file must be retrieved, not the entire backup. As a result, instead of restoring the full backup, you should have a mechanism available for recovering individual files.

Snapshot
Backups may be performed by the cloud provider as part of its service offerings and as a managed service. Cloud customers may also want to implement their own backup strategy to meet their specific needs.


There are many different types and variations of backups to meet the use cases and objectives you may need to meet. I will cover the most common backups that are objectives for the Cloud+ exam.
Image Backups
Image backups are comprehensive duplicates of entire hard drive volumes, capturing everything from the operating system and software applications to user data and system configurations. This technology goes by various names, including disaster backup, cloning, ghosting, image backups, and block-level backups. Image backups play a pivotal role in ensuring data availability, disaster recovery preparedness, and system maintenance. Here's a more detailed exploration of image backups:
1. Complete System Snapshot: An image backup captures the entirety of a hard drive volume, making it an exact replica of the source. This includes the operating system, installed applications, user files, and system settings. It essentially freezes the system's state at the time of the backup.
2. Disaster Preparedness: Image backups are a crucial component of disaster recovery planning. In the event of VM loss or catastrophic failures, these backups enable organizations to swiftly recover entire systems, minimizing downtime and data loss. This is particularly essential for mission-critical systems and applications.
3. Rollback Capabilities: Image backups serve as a reliable mechanism for rolling back system states to a known-good configuration. This is beneficial when updates or changes have caused issues, or when a server fails to boot or becomes corrupted. Restoring from an image backup can quickly revert the system to a stable state.
4. Bare-Metal Recovery: Image backups facilitate bare-metal recovery, allowing you to restore an entire system to new or replacement hardware. This is advantageous when dealing with hardware failures or migrations to different server infrastructure.
5. Block-Level Backups: Image backups operate at the block level, capturing changes made at the block or sector level on the source drive. This granularity ensures efficiency and minimizes storage requirements compared to traditional file-level backups.
6. Incremental and Differential Backups: Image backup solutions often offer options for incremental and differential backups. These techniques enable the capture of changes made since the last backup, reducing backup times and storage needs.
7. Snapshot Consistency: Advanced image backup tools ensure the consistency of snapshots to avoid data corruption or incomplete backups.
8. Management and Automation: Modern image backup solutions provide management interfaces and automation features, allowing for scheduled backups, retention policies, and monitoring of backup health.
In summary, image backups are a versatile and critical component of data protection and system recovery strategies. They provide a comprehensive solution for safeguarding entire systems, preparing for disasters, and maintaining system integrity, making them invaluable in both routine


maintenance and disaster recovery scenarios.
Backup Targets
The backup target is defined as the endpoint or storage system where the backup data is to be stored. Backup targets can store data as a live copy to be used by your applications, as with replicas; they can be a local storage array that performs the function of a backup data store; or they can be a secondary system that supports a copy of your live data. Remote backups address the need for geographic separation of your backup data from your production systems.
Replicas
Replicas serve as essential redundancy and data protection measures by creating backup copies of critical data. These copies can be stored either locally within the same data center or remotely at geographically distinct locations, offering an additional layer of security and availability. Replicas are designed to act as alternative data stores separate from your primary production operations, and they play a pivotal role in enhancing data resilience and disaster recovery preparedness. Expanding on this concept, the significance of data replicas includes:
1. Data Resilience: Replicas are instrumental in safeguarding data against various types of threats, including hardware failures, data corruption, and accidental deletions. By maintaining duplicate copies, organizations reduce the risk of data loss and maintain continuous access to critical information.
2. Disaster Recovery: Replicas, especially those stored in remote or off-site locations, play a central role in disaster recovery strategies. In the event of a catastrophic event such as a fire, flood, or cyberattack, these off-site replicas ensure that data remains intact and can be rapidly restored, minimizing downtime and business disruption.
3. High Availability: Local replicas can be used to enhance high availability within a data center. By distributing data across redundant storage systems or servers, organizations can ensure uninterrupted access to data even in the event of hardware failures or maintenance activities.
4. Load Balancing: Replicas can be strategically employed to balance the load on data systems. By directing read operations to replicas, organizations can offload traffic from the primary data store, optimizing performance and responsiveness.
5. Geographic Redundancy: Replicas stored in geographically diverse locations offer protection against regional disasters and provide disaster recovery capabilities at a larger scale. This geographic redundancy is particularly crucial for organizations with global operations.
6. Data Consistency: Advanced replica mechanisms often provide options for maintaining data consistency between the primary data store and replicas. This ensures that all copies of data remain synchronized and up to date.
7. Scalability: As data volumes grow, replicating data becomes a scalable solution for accommodating increased storage needs. Organizations can add additional replicas as required


to support data growth.
In summary, data replicas are indispensable components of modern data management and disaster recovery strategies. They deliver data resilience, enable rapid recovery from disasters, enhance high availability, and support load balancing efforts. By strategically implementing and managing replicas, organizations can ensure data integrity and availability in the face of various challenges and disruptions.
For example, with many popular databases, you can create one or more read replicas that can be stored in local or remote data centers. These replicas are updated in real time from the master, and the database can access them instead of the master replica. This allows the database to scale and operate at very high read rates. Other applications such as caching and content delivery services rely on replicas stored locally that were downloaded from a distant source location for enhanced performance. Operating systems, storage management software, applications, and virtualization management systems offer storage replication, where data that is stored in a local volume, LUN, or array is automatically replicated to another system for availability and redundancy purposes. When a file is stored in the primary storage system, the replication software will automatically store the same file in another location, or replica.
Local Backups
Local backups are just as they sound; they are data in a local data center that is stored on its local primary storage array, and when a backup operation is performed, the data is backed up locally. The advantage of local backups is speed. High-speed Fibre Channel or Ethernet networks can be taken advantage of for fast backup operations. The disadvantage is that if something happens to that data center, all of your data could be compromised since the backup and primary data are all in the same location.
File Backups
File backups are what people normally think of when backing up a server or personal system. File backups are the storage of folders and files that you select with your backup software to copy to another storage location for later access if needed. File-level backups generally have smaller space requirements, are more efficient, and have more options during restoration than complete image backups.
Remote Backups
Remote backups are a preferred approach since they have the advantage of geographical separation. Many corporate and most regulatory requirements will specify that the backup data be located at a separate data center from the origin data center and that the two locations be geographically some distance apart from each other. Many cloud providers interconnect their data centers into regions and availability zones using high-speed, directly connected fiber networks that allow large backup sets to traverse the data network between the data centers and make remote backups feasible.


4.6 Given a scenario, perform disaster recovery tasks.
4.6.1 Failovers
• Failover: designed to avoid single points of failure
• Example: Netflix Chaos Monkey
• Can be manual or automatic
• Sequential or load balanced
• Can set up multiple server pools

Eliminating Single Points of Failure (SPoF) is among the most challenging yet vital tasks when designing your cloud architecture. This principle entails ensuring that every mission-critical service within your infrastructure can withstand the failure of any individual server or component. Some organizations, exemplified by Netflix, have adopted an extreme approach to this concept, introducing a tool known as "Chaos Monkey." Chaos Monkey is purposefully designed to introduce controlled chaos by randomly disrupting servers within an environment. Its primary objective is to compel developers and architects to create code and architectural solutions with server failures as a fundamental consideration. By subjecting their systems to the unpredictability of server outages, organizations like Netflix aim to foster resilience, redundancy, and fault tolerance in their cloud-based applications and services. More information is available at https://netflix.github.io/chaosmonkey/.
Expanding on this concept, the significance of eliminating single points of failure includes:
• Enhanced Reliability: Eliminating SPoFs is a cornerstone of building highly reliable systems. It ensures that a failure in any single component does not result in system-wide outages or disruptions, thereby enhancing service availability.
• Fault Tolerance: Architecting systems to withstand individual failures promotes fault tolerance. In the event of hardware failures, software glitches, or unexpected issues, the system can continue to function, reducing the impact on end-users.
• Scalability: Cloud architectures designed without SPoFs are typically more scalable. This scalability arises from the ability to distribute workloads across multiple servers or instances, allowing systems to handle increased traffic or data loads.
• Resilience: The intentional introduction of chaos, as exemplified by Chaos Monkey, fosters resilience. It encourages the development of self-healing mechanisms and automated recovery procedures, ensuring system stability in dynamic cloud environments.
• Continuous Availability: For organizations providing services around the clock, eliminating SPoFs is essential for maintaining continuous availability. It minimizes downtime and ensures that customers can access services without interruption.
• Security: SPoFs can also be exploited by malicious actors. By removing single points of failure, organizations can enhance their security posture by reducing vulnerabilities and potential attack vectors.


In conclusion, the practice of eliminating single points of failure is fundamental to building robust, reliable, and scalable cloud architectures. Embracing chaos engineering principles, as seen with Chaos Monkey, encourages a proactive approach to fault tolerance and resilience, ultimately benefiting both system performance and end-user satisfaction.

While the concept of eliminating Single Points of Failure (SPoFs) is commendable, it's worth acknowledging that many companies may face resource constraints that hinder them from implementing and maintaining such a system. Typically, the initial step towards SPoF mitigation involves the implementation of failover IPs on critical services. The complexity of this task can vary significantly depending on the specific service or component. For instance, setting up failover IPs for a load balancer can be relatively straightforward, while doing the same for a database can pose more significant challenges.



Let's concentrate on a practical and straightforward example of IP failover within a cloud context. The objective is to establish two virtual machines (VMs) configured with a floating public IP that can seamlessly transition between them. This setup ensures that if one VM experiences an outage, the other one will automatically take over (a simplified sketch of this logic appears after the list below).



It's essential to note that while IP failover offers advantages such as minimal DNS changes and propagation times, it may incur slightly higher costs compared to DNS failover. However, this trade-off can be worthwhile for organizations seeking a quick and efficient failover solution without the need for DNS adjustments and internet propagation delays. IP failover of this kind:
• Is ideally suited for content where server synchronization is important
• Allows for manual or automatic failback
• Allows for sequential or load balanced mode
• Provides primary and secondary server 'pools' (instead of just a single IP)
• Allows multiple pool members to be weighted for precise traffic distribution
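Below is a very simplified, hypothetical Python sketch of the failover decision itself: a health check against the primary VM and a floating-IP reassignment to the secondary when the primary stops responding. The reassign_floating_ip function is a placeholder for whatever API your cloud provider exposes; it is not a real library call, and the addresses are invented.

import socket

PRIMARY = "10.0.0.11"      # placeholder addresses for the two VMs
SECONDARY = "10.0.0.12"

def is_healthy(host, port=80, timeout=3):
    """Return True if a TCP connection to the host succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def reassign_floating_ip(target_host):
    # Placeholder: call your cloud provider's API here to move the
    # floating/elastic IP to the target VM.
    print(f"floating IP now points at {target_host}")

def failover_check():
    if is_healthy(PRIMARY):
        print("primary healthy; no action")
    elif is_healthy(SECONDARY):
        reassign_floating_ip(SECONDARY)    # automatic failover
    else:
        print("both servers unreachable; alert the on-call engineer")

if __name__ == "__main__":
    failover_check()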

4.6.2 Failback
• Moving your IT environment back to primary production after a disaster
• Several important considerations:
  • The time needed to acquire replacement hardware
  • The method used to fail over to your disaster recovery (DR) site (typically the same process is followed to fail back)
  • Expenses related to converting your disaster recovery (DR) site into a permanent facility, including hardware costs, software expenses, facility expenditures, failover declaration fees, and associated fees
  • Your recovery time objective, or how long the failback will take; this can sometimes be more painful than the actual failover event

Failback refers to the process of returning your IT environment to its primary production state following a disaster. Understanding your failback alternatives is crucial because failover is not a permanent


condition. First, your company likely must adhere to compliance regulations, so take these into account when designing your failover site. PCI DSS, for example, no longer allows a customer to run applications in a disaster recovery environment that doesn't have an identical level of security to the normal infrastructure. You may need to add security measures to the DR site (like file integrity scanning and security event log management) to resume processing cardholder data.
The next issue is the application stack. Many software products, such as Microsoft's SQL Server, have licensing limits to address prior to running production workloads for an extended time. The third issue is capacity. Suppose your IT business continuity plan was designed to operate at 20% of your production capacity. The longer you remain in your failover site, the greater the chance that you will run up against your resource limit. Finally, there are limitations if you leverage a third-party company to assist in your data backup and recovery plan. Most restrict how long you can occupy the space they provide before you must either transition your environment back to its original site or convert your contract with the vendor to make the site permanent. In addition, consider:

• The time needed to acquire replacement hardware.
• The method used to fail over to your disaster recovery (DR) site (typically the same process is followed to fail back).
• Costs (hardware, software, facility, and failover declaration fees) associated with turning your DR site into a permanent site.
• Your recovery time objective, or how long the failback will take; this can sometimes be more painful than the actual failover event.

Failover site selection and failback capability
The selection of a failover site will also impact the possibilities for your failback strategy, whether it involves restoring existing infrastructure, acquiring new hardware, or transitioning to a production cloud environment.
Colocation:
Leveraging colocation services for failover with subsequent failback to existing infrastructure is a cost-effective option, albeit one that may require substantial labor and time investments. The restoration of significant data volumes, especially those exceeding five or 10 terabytes, can extend to several days when retrieved from tape backups. This strategy becomes less effective when new infrastructure is required. Capital expenditure costs run high, as do labor costs. Buying and configuring new hardware while simultaneously restoring five to 10 terabytes of data, all within a tight recovery window, is going to be a serious challenge.
Public cloud:


A public cloud service offers an easy option for storing data during a DR event. With low front-end costs, it's great for small businesses with limited IT staff or that lack in-house data-protection services. But if you're a mid-sized company facing a disaster, prepare yourself. Back-end costs can run very high here. Retrieving data is expensive, given the scope and scale of what you likely have stored there. You will pay for every gigabyte you take out.
Disaster Recovery as a Service (DRaaS):
In a Disaster Recovery as a Service (DRaaS) solution, the failover process involves transitioning to a predetermined DR cloud environment. This transition is typically seamless and reliable because your DRaaS provider collaborates with you to design the solution and rigorously tests it to identify and address any potential issues. Expanding on this, the reliability of DRaaS failover rests on the following:
1. Design Collaboration: When implementing a DRaaS solution, you work closely with your service provider to design a failover strategy that precisely aligns with your business needs. This collaborative effort ensures that all critical applications and data are accounted for in the failover plan.
2. Testing and Validation: Prior to deploying the DRaaS solution in a live environment, extensive testing and validation are conducted. These tests simulate various disaster scenarios to ensure that failover procedures are effective and can be executed smoothly.
3. Issue Resolution: Testing uncovers any issues or hiccups in the failover process. These issues are meticulously addressed and resolved in collaboration with your DRaaS provider. The goal is to eliminate any potential obstacles to a successful failover.
4. Documentation: A comprehensive failover plan is documented, including step-by-step procedures and dependencies. This documentation serves as a reference during the failover event, ensuring that the process is executed accurately.
5. Smooth Transition: When a disaster or outage occurs, the failover process is initiated, and your IT environment seamlessly transitions to the designated DR cloud. This ensures minimal disruption to your operations and helps maintain business continuity.
6. Monitoring and Support: Throughout the failover event, your DRaaS provider often offers continuous monitoring and support to address any unforeseen challenges and ensure that your systems are functioning as expected.
7. Recovery Time Objectives (RTOs): DRaaS solutions are designed with specific Recovery Time Objectives (RTOs) in mind. These objectives define the acceptable duration for failover and recovery processes, allowing you to plan for downtime accordingly.
8. Scalability: DRaaS solutions can be scaled to accommodate the evolving needs of your business. As your data and application requirements change, your DRaaS provider can adjust the failover environment accordingly.


In summary, DRaaS solutions provide a reliable and meticulously planned failover process. The collaborative design, thorough testing, and issue resolution ensure that when a disaster strikes, your transition to the DR cloud is smooth and efficient, enabling you to maintain essential business functions with minimal disruption.
DRaaS solutions commonly rely on continuous data protection technologies, which ensure minimal data loss by continuously replicating production data. During the recovery phase, mission-critical operations can continue to function smoothly, as server security configurations and network services are mirrored at the DR site. When it comes to failback in DRaaS, the available options are consistent with those offered by colocation and public cloud services. These options include restoring existing infrastructure, procuring new hardware, or transitioning to a production cloud environment. Regardless of the chosen failback approach, the responsibility for the labor-intensive process typically falls on the service provider, thereby saving you valuable time during critical recovery efforts.
The failback event itself is simpler too. It requires only three steps:

• Restore your infrastructure (or maintain production in the cloud)
• Reload your hypervisor
• Install one virtual machine

From here, your DRaaS provider can take over the replication and eventual failback into your production environment. Additional benefits include:
• Minimized downtime, protecting your company from financial losses.
• Lessened risk of compliance penalties.
• Stronger security against breaches. Oftentimes, a company's DR site holds all its data, but without the security in place to protect it, the DR servers are often neglected and lack critical security patches.
• In a DRaaS solution, the Disaster Recovery Virtual Machines (VMs) remain synchronized, typically in a powered-off state, making the data inaccessible until a declaration is made.

However, it's essential to recognize that not all disaster recovery service providers offer the same capabilities. Some may have production capacity readily available, while others may not. It's advisable to inquire in advance and clarify these details. Your failback strategy is of equal importance to your failover plan. Questions surrounding the duration of the failback process, the required labor, the simplicity of data transport, and the associated costs for returning to your primary production site or identifying a new one should not be left unresolved during a crisis. It's crucial to address these aspects proactively to prevent potential disasters later, especially when company resources may be constrained, and energy levels may be diminished during the recovery period.


4.6.3 Restore backups
• Procuring backup equipment
• Restoring services when an IT center goes offline
• Additional hardware and time for replication
• Additional workloads

There may be a requirement for additional equipment depending on your DR design. If you are leasing data center space only, then you must incur all capital expenses for the IT equipment and software required for the backup site. This can be a significant expense because a complete, or nearly complete, image of your production network would need to be procured and implemented. The equipment required would include all servers, storage, network, software, and any other hardware or software needed to restore services. This is usually not very economical or easy to keep up to date, as it requires that the equipment sit idle and unused; also, as your primary site undergoes constant maintenance upgrades, you would need to keep your offline backup gear up to date as well, even though it is stored at a remote location and may not be online.
Also, as you implement your DR plan, the issue of restoring operations services if your company's IT operations center goes offline must be addressed. You must plan not only for the data center being disrupted but also for operations being disrupted. Additional hardware and software may be required to replicate a network operations center at another location.
When procuring backup equipment, care must be taken to purchase and implement gear that can handle the anticipated workload. It may be tempting, for example, to purchase a smaller server or router model since it is intended to be used only for disaster recovery. However, the risk is that when the servers go online, they may not have the capacity to handle the workload.

4.6.4 Replication
• Replication: transfer and synchronization of data between multiple data centers
• Data must be transferred or replicated between servers
• Synchronous replication / asynchronous replication
• Virtualization in the cloud allows complete images to be replicated, so servers can be more easily replicated to remote sites

Replication is the transfer and synchronization of data between multiple data centers. For disaster recovery purposes and data security, your data must be transferred, or replicated, between data centers. Remote copies of data have traditionally been implemented with storage backup applications. However, with the virtualization of servers in the cloud, you can now replicate complete VM images, which allows you to replicate complete server instances, with all the applications, service packs, and content, to a remote facility. Later in this chapter, I will discuss synchronous and asynchronous replication approaches


and the advantage of each implementation. Applications such as databases have replication processes built in that can be utilized based on your requirements. Also, many cloud service offerings include data replication as a standard feature or as a chargeable option.
Synchronous and asynchronous replication
Most synchronous replication offerings write data to both the primary storage system and the replica simultaneously to ensure that the remote data is current with the local copy. The second type of replication is asynchronous, where the data is written to the primary first, and a copy is later written to the remote site on a scheduled arrangement or in nearly real time. Synchronous replication is the process of replicating data in real time from the primary storage system to a remote facility. Synchronous replication allows you to store current data at a remote location, away from the primary data center, that can be brought online with a short recovery time and limited loss of data.
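A toy Python sketch of the difference is given below: a synchronous write does not return until both copies are updated, while an asynchronous write acknowledges immediately and queues the remote copy for later. The storage objects are simple in-memory stand-ins, not a real replication product.

from collections import deque

class ReplicatedVolume:
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = deque()          # queued writes for asynchronous mode

    def write_sync(self, key, value):
        # Synchronous: both copies are written before the call returns,
        # so the replica is never behind (RPO effectively zero).
        self.primary[key] = value
        self.replica[key] = value

    def write_async(self, key, value):
        # Asynchronous: acknowledge after the primary write; the replica
        # catches up later, so some recent data could be lost in a disaster.
        self.primary[key] = value
        self.pending.append((key, value))

    def drain_replication(self):
        while self.pending:
            key, value = self.pending.popleft()
            self.replica[key] = value

if __name__ == "__main__":
    vol = ReplicatedVolume()
    vol.write_sync("order-1", "paid")
    vol.write_async("order-2", "shipped")
    print("replica before drain:", vol.replica)   # only order-1 present
    vol.drain_replication()
    print("replica after drain: ", vol.replica)   # both present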

4.6.5 Network configurations
• Network connectivity to the ISP or WAN provider
• Ensure additional capacity is available in the event of a disaster recovery situation
• Public cloud vendors commonly offer edge sites at many locations to give them a worldwide presence
• Edge facilities provide fast, low-latency connections

Addressing Network Connectivity
Earlier in this chapter we discussed the issues, concerns, and requirements of network connectivity from both the cloud service provider and the ISP or WAN provider. Network connectivity requirements need to be investigated, and your current facilities upgraded if necessary, to account for your business continuity requirements. The continuity plan should require that your network connectivity partners have the needed capacity and an SLA that outlines specifically what they guarantee to offer you in the event of a disaster recovery operation. Care must be taken to ensure that you are prepared for peak data usage and any additional network traffic beyond your normal operations, such as data replication taking place.

4.6.6 On-premises and cloud sites
Hot
In a hot standby site, the site is always accessible for fast failover. This sort of standby site has the highest availability and is utilized when downtime must be avoided at all costs. A hot site has the following drawbacks:


● The replication process may be complex and time-consuming to manage in order to reproduce the active site as closely as feasible.
● The hot site must be administered, which is usually done by a professional support crew.
● The expense of keeping a hot site is greater than the cost of maintaining a warm or cold site.

If the hot site is on-premises, the company must supply all hardware components as well as a dedicated crew to operate the site. Given that a backup site should be in a separate geographic location from the active site to allow for natural disaster recovery, an on-premises hot site can be quite expensive for an enterprise.

Warm
While a warm site may not be immediately accessible for failover, it should become available fairly quickly. Given the complexity of the active site, this could take anywhere from a few hours to a whole day. Warm sites are less expensive than hot sites and, in many cases, have an acceptable delay before becoming available. Because a warm site is not always operational, the replication procedure may result in some data or application configuration being lost. Furthermore, because a warm site isn't typically available for testing, it may not be as reliable a failover as a hot site. A warm on-premises site for an enterprise could be housed in the data center of a branch office. A branch office is ideal because it should have IT personnel on hand to manage the remainder of the data center's requirements. Warm cloud standby sites are often shared with other customers.

Cold
A cold standby site is one that must be provisioned before it can be used. Although the hardware should be present, backups must be used to restore services. This type of backup site not only has the lowest cost, but also the longest downtime. Cold on-premises standby sites are uncommon since they necessitate dedicated hardware that must always be ready. This hardware would be dormant, waiting for a disaster to strike. A cloud vendor is more likely to provide a disaster recovery team to assist a company in restoring its site.
Site mirroring encompasses various redundancy strategies, which we will delve into in this section. It involves the practice of continuously synchronizing the backup site, ensuring it remains up to date and fully prepared to take over the workload in the event of a primary data center failure. Site mirroring effectively creates an exact replica of the original site, complete with its data and applications, operating in standby mode at a remote location. By implementing a mirroring strategy, you can significantly enhance your readiness to withstand an outage event, minimizing or even eliminating disruptions to your operations. Cloud operations can be deployed using a hot site model, wherein two fully redundant cloud data centers operate in synchronization. The standby site continuously mirrors the primary site in real time, ready to assume operations instantly in the event of a failure. The hot site model offers the highest level of redundancy, ensuring minimal downtime. However, it is also the costliest option and is typically chosen when uninterrupted cloud computing operations are an absolute necessity.


In contrast, the warm site approach to recovering from a primary data center outage involves keeping the remote backup site mostly offline, except for critical data storage, typically a database. The warm site maintains an operational database server that remains synchronized with the database server at the primary data center. This approach is sometimes referred to as the "candlelight design." Other essential site infrastructure components, such as servers, storage, load balancers, and networking, remain either offline or unprovisioned until the warm site is activated following a failure at the primary data center. During the transition process, a delay is expected, which can impact ongoing operations.
The warm site model is like the hot site, except there will be more work involved in the cutover process and usually much more time will be needed before it becomes operational and takes over processing from the failed primary site. There is also a lot of planning and testing that needs to take place ahead of time to ensure that you can quickly bring the warm site online. The warm site approach is more cost effective than the hot site solution, as you are not being charged for services that are not active. The time to restore is much shorter than with a cold site but, of course, longer than with a hot site. However, the hot site approach will not lose data as compared to the warm site, since it is designed for a short RPO. With the warm site solution, transactions will not take place during the time between the primary data center going offline and the warm site coming up. For many cloud customers, the warm site is a good compromise between the expense of supporting a hot site and the extended downtime incurred with the cold site approach.
In the cold site model, a backup data center is designated to take over operations in the event of a primary data center failure, but the servers and infrastructure remain dormant until they are needed. Typically, a cold site facility is not immediately prepared to assume operations from the primary data center, often requiring a significant amount of installation and preparation time before becoming operational. This extended setup period means that it may take a considerable amount of time for a cold site to become fully operational. Despite the delayed activation, the cold site approach is well-suited for prolonged outages and offers a more cost-effective alternative compared to the hot or warm site models. In the cold site recovery model, servers and infrastructure must be installed and configured at the cold site when a failure occurs at the primary data center. To expedite this process, cloud service providers often employ automation tools within their network infrastructure, facilitating the replication of the primary site's architecture, including servers, load balancers, storage, networking, and security components, at the cold site location.

4.6.7 Requirements

RPO
The recovery point objective (RPO) establishes what degree of data loss is acceptable in the event of a data disaster. It is nearly impossible to ensure that no data is lost when a data disaster occurs, so it is critical to determine how much data can be lost without negatively impacting business operations. RPO is defined in terms of time rather than data volume. For example, your organization may consider a 12-24 hour RPO acceptable for data loss in a noncritical file system, but require a 1-4 hour RPO for data stored in a specific database table.
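To make the time-based definition concrete, the short Python sketch below compares a backup interval against an RPO target. The function name and the sample numbers are purely illustrative assumptions, not part of any vendor tooling.

```python
# Minimal sketch: check whether a backup interval satisfies an RPO target.
# The function name and values are illustrative assumptions, not vendor APIs.

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """Worst-case data loss equals the time since the last backup,
    so the backup interval must not exceed the RPO."""
    return backup_interval_hours <= rpo_hours

# Noncritical file system: 24-hour RPO, backed up nightly -> acceptable.
print(meets_rpo(backup_interval_hours=24, rpo_hours=24))   # True
# Critical database table: 4-hour RPO, backed up nightly -> fails.
print(meets_rpo(backup_interval_hours=24, rpo_hours=4))    # False
```

In practice, the same comparison drives decisions such as moving a critical database from nightly backups to more frequent snapshots or continuous replication.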

RTO
Assume the data on an instance has become corrupted, and you have been alerted that you must recover this data from backups. How long should the recovery process take from the time you become aware of the loss? This is how the recovery time objective (RTO) is defined. After the recovery is complete, you must document how the recovery process went, including whether or not the process took longer than the RTO allows. This information is especially relevant for any business-critical data or data that is subject to regulatory compliance oversight.

SLA
If you choose a cloud vendor to manage your backups, you should be aware of the backup service-level agreement (SLA). The SLA should include provisions for the uptime percentage of backup storage, which can vary depending on whether geo-redundant storage is employed. Consider the SLA for Azure Backup as an example of a cloud storage service SLA.

Third-Party Offerings
It should be noted that there are companies that offer consulting and management services aimed at assisting companies with their DR requirements. Management consulting companies have business continuity practices that assist companies with the process of developing and implementing a DR plan. This approach takes all legal, regulatory, industry, technology, and operational needs into consideration and develops a complete business continuity plan for your company.

With the hype of the cloud and every technology company now "cloud enabled," it should come as no surprise that there is such a thing as DRaaS: that's right, Disaster Recovery as a Service! These companies have a large knowledge base and specialize in DR. They have models developed for the most common DR implementations and the knowledge to architect custom design requirements. In the event of a disaster, when the need comes to implement your plan, their experience and assistance may be invaluable. Because DRaaS operations are located away from the disaster site, these providers are often in a better position to perform the failover, testing, and restoration services than your company may be. DRaaS providers may store fresh user data at the DR site and offer short RPO/RTO timeframes. These companies offer a high-value service to small and medium businesses that may not have the internal resources to develop and manage their own DR plan. Using DRaaS providers saves an organization from investing capital and resources to maintain and operate off-site DR systems.

One downside, or potential concern, is that you are placing a great deal of trust in the DRaaS company. Potentially, the survivability and validity of your company's ongoing operations after disaster strikes rest with the DRaaS provider's capabilities and ability to execute. Can the DRaaS company meet the RPO/RTO objectives? Will the backup site function as planned? All of these issues require a high level of trust to be placed in your DRaaS and management consulting business partners.

Corporate Guidelines
The role of information technology in modern corporations has evolved into a critical component of daily operations. For many companies, the functioning of IT services is indispensable, and operational disruptions are highly detrimental. As with any critical corporate operation, a business continuity and disaster recovery plan needs to be developed and approved by the company. This plan should outline all the operational and governance plans that make up a complete disaster recovery plan. This is much more than a technology document, as it has a direct impact on a company's ongoing operations and possibly even its survival. The executive committee and even the company's board will need to decide on the RPO and RTO objectives and build a plan around them. These metrics will drive the decisions on the various disaster recovery approaches that can be implemented.

4.6.8 Documentation
The middle of a disaster recovery effort is not the time to figure out what your business needs to do to recover. Well-thought-out strategies should already be in place, and every stage of the recovery process should be well documented. This section covers some of the most important pieces of disaster recovery documentation.

DR kit
A disaster recovery (DR) kit contains all the resources and tools required to carry out your disaster recovery plan's tasks. The contents of the kit will vary depending on your infrastructure, but it may include important user identities and passwords, contact information for key employees, and software licenses.

Playbook
A disaster recovery playbook outlines the step-by-step actions that must be completed to carry out the disaster recovery procedure. The playbook should be quite specific and should state who is responsible for which tasks. It should include many of the plan's components, such as the established RTOs, strategies for monitoring the recovery process, and instructions for using the DR kit.

Network diagram
When dealing with complicated network structures, such as those found in the cloud, having a network diagram on hand can be invaluable when recovering all or part of a network. A network diagram shows where all of the network's components are located, along with their network configuration settings (IP address, subnet, and so on).

4.6.9 Geographical datacenter requirements
Choosing a data center is a process with numerous requirements, but the location of the data center is frequently the most important. The following are some of the most essential criteria for choosing a data center's geographic location:

• How close is the data center to the company's headquarters and customers?
• What is the local weather like?
• How stable is the location geographically?
• Is the area prone to severe weather?
• What is the political climate like in the area where the data center is located?
• What local IT talent is there?
• How stable is the area's infrastructure, notably power and network access?




Practice Questions

QUESTION NO: 1
To determine when a cloud resource is available or not, monitoring tools can be used to _____ check for this availability.
A. only actively
B. passively
C. actively or passively
D. only indirectly

QUESTION NO: 2
A ____ is used to specify when an alert should be issued because a resource attribute has reached its maximum or minimum value.
A. baseline
B. tag
C. log scrubber
D. threshold

QUESTION NO: 3
The ____ protocol enables devices (as well as operating systems such as Linux) to send log messages to a server.
A. Syslog
B. LDAP
C. TLS
D. SNMP

QUESTION NO: 4
A ____ is a tool used to locate and remove sensitive information.
A. threshold
B. tag
C. baseline
D. log scrubber



QUESTION NO: 5
A ____ is a feature that applies metadata to cloud resources.
A. baseline
B. threshold
C. tag
D. log scrubber

QUESTION NO: 6
In terms of logging, ____ is determining what is going on in your cloud environment by looking at logs over time.
A. routing
B. gathering
C. trending
D. timing

QUESTION NO: 7
A ____ environment is where you write or test new software code.
A. QA
B. production
C. DR
D. development

QUESTION NO: 8
A ____ environment is one in which customers use the software.
A. development
B. DR
C. QA
D. production

QUESTION NO: 9
If you use a SaaS (Software as a Service) application, the ____ or ____ should be fully responsible for patching the application.
A. cloud vendor
B. the application developer
C. the customer
D. you



QUESTION NO: 10
In lifecycle management, ____ provide a timeline for the product's implementation from start to finish.
A. plans
B. roadmaps
C. timeline events
D. cycles

QUESTION NO: 11
Customers rarely have control over the ____ that is used to provide control over device hardware in a public cloud.
A. processor
B. firmware
C. SAAS
D. application

QUESTION NO: 12
Which of the following is typically the responsibility of the cloud vendor for patching?
A. Virtual machine
B. Physical network components
C. Operating systems
D. Hypervisors

QUESTION NO: 13
The process of providing frequent software updates is known as ____ upgrades (also known as continuous delivery).
A. Rolling
B. CD
C. Active-passive
D. Flowing



QUESTION NO: 14
Assume an IT department wants the benefits of a single billing account that includes several department cloud accounts, but corporate policy prohibits chargebacks. In this case, the IT department will most likely use a _____, which provides visibility of resource utilization and costs to each department but does not result in a charge to the department.
A. showback
B. feedback
C. kickback
D. lookback

QUESTION NO: 15
The term ____ refers to the release immediately prior to the software's most recent stable release.
A. B-1
B. N-1
C. A-1
D. H-1

QUESTION NO: 16
A virtual appliance is a type of _____.
A. storage device
B. hypervisor
C. virtual machine image
D. web server

QUESTION NO: 17
SDN separates network forwarding operations from network ____ operations.
A. control
B. management
C. processing
D. monitoring



QUESTION NO: 18
____ is a numeric value that represents the amount of data that can be transferred across a network in a given amount of time.
A. Latency
B. Jitter
C. Bandwidth
D. Throughput

QUESTION NO: 19
Network ____ is the amount of time it takes for a network packet to travel from sender to receiver.
A. route
B. latency
C. time
D. speed

QUESTION NO: 20
What are the two most common IaC approaches?
A. Programmatic
B. Declarative
C. Imperative
D. Procedural

QUESTION NO: 21
Continuous ____ refers to the process of releasing new releases as they become available on the main branch.
A. integration
B. deployment
C. development
D. enhancements

QUESTION NO: 22
Which of the following is NOT a secure scripting feature?
A. Use of individual service accounts
B. All of these answers are correct
C. Remote lockouts
D. Key-based authentication




QUESTION NO: 23
Which of the following is an automation activity?
A. Updates
B. Shutdowns
C. Scaling
D. All of these answers are correct

QUESTION NO: 24
____ sequencing refers to the process of deploying instances that are related to one another in a specific order.
A. version control
B. configuration management
C. Infrastructure as Code
D. orchestration

QUESTION NO: 25
IaC (Infrastructure as Code) is a process that allows you to deploy and manage systems such as virtual machines and containers using ____ or ____.
A. positional parameters
B. configuration files
C. scripts
D. directives




Answers to Practice Questions

1. Answer: C Explanation: Monitoring tools can be used to actively or passively check for the availability of a cloud resource to determine whether it is available or not. The results should be compared to the minimum uptimes specified in the cloud vendor's SLA.

2. Answer: D Explanation: A baseline is an established "norm" in monitoring. A threshold is defined to determine when an alert should be issued because a resource attribute has reached its maximum or minimum value. A tag is a feature that applies metadata to cloud resources. A log scrubber is a tool for searching for and removing sensitive information from logs.

3. Answer: A Explanation: The Syslog protocol enables devices (as well as operating systems such as Linux) to send log messages to a Syslog server (also known as a Syslog collector). Queries on the Syslog server can be used to identify problems on a system, but the server can also be configured to send alerts to end users. The remaining answers are all valid protocols, but they have nothing to do with logging messages.

4. Answer: D Explanation: A log scrubber is a tool for searching for and removing sensitive information from logs. A baseline is an established "norm" in monitoring. A threshold is defined to determine when an alert should be issued because a resource attribute has reached its maximum or minimum value. A tag is a feature that applies metadata to cloud resources.

5. Answer: C Explanation: A tag is a feature that applies metadata to cloud resources. A baseline is an established "norm" in monitoring. A threshold is defined to determine when an alert should be issued because a resource attribute has reached its maximum or minimum value. A log scrubber is a tool for searching for and removing sensitive information from logs.

6. Answer: C Explanation: Trending in logging refers to determining what is happening in your cloud environment by examining logs over time. For example, if you are concerned that you will run out of space in a cloud storage resource, you can examine daily reports of space utilization over a period of time (such as the previous 30 days) to see if you need to upgrade to a larger storage device. The other answers are not appropriate for this topic.

7. Answer: D Explanation: A development environment is where you write or test new software code. A QA (quality assurance) environment should be as close to your production environment as possible. This environment is used to test upgrade procedures before they are deployed in a production environment. A production environment is one in which the software is used by customers. A disaster recovery (DR) environment is one that is extremely close to the production environment. If the production environment is unavailable for any reason, the DR environment should be able to assume the production environment's role.

8. Answer: D Explanation: A production environment is one in which customers use the software. A QA (quality assurance) environment should be as close to your production environment as possible; this environment is used to test upgrade procedures before they are deployed in a production environment. A development environment is where you write or test new software code. A disaster recovery (DR) environment is one that is extremely close to the production environment. If the production environment is unavailable for any reason, the DR environment should be able to assume the production environment's role.

9. Answer: A,B Explanation: The scope of patching applications will be determined by the application's origin as well as the responsibility level associated with the application. For example, if you're using a SAAS (Software as a Service) application, the cloud vendor or application developer should be solely responsible for patching it. It is critical to review the SLA and other contracts to determine which organization is in charge of patching.

10. Answer: B Explanation: Roadmaps in lifecycle management provide a timeline for the product's implementation from start to finish. Roadmaps are also used to align products with business goals and to provide an easy way to visualize project lifecycle management. The other terms have nothing to do with the topic.



11. Answer: B Explanation: Firmware is software that is designed to control the hardware of a device. Customers rarely have any control over the firmware used by physical hardware in a public cloud (exceptions can include when a customer leases the entire physical system for their use). As a result, patching firmware in a public cloud is almost always completely within the cloud vendor's scope. The other options do not provide control over device hardware.

12. Answer: B,D Explanation: Cloud vendors are responsible for patching hypervisors and network components because they maintain them. Patching virtual machines and operating systems is typically the responsibility of the customer or software vendor.

13. Answer: A Explanation: The process of providing frequent software updates is known as rolling upgrades (also known as continuous delivery). An active-passive upgrade deploys the upgrade to the active environment while leaving the passive environment unchanged. The other answers are not appropriate for this topic.

14. Answer: A Explanation: Assume an IT department wants the benefits of a single billing account that includes several department cloud accounts, but corporate policy prohibits chargebacks. In this case, the IT department will most likely use a showback, which provides visibility of resource utilization and costs to each department but does not result in an actual charge to the department. The other answers are not appropriate for this topic.

15. Answer: B Explanation: N refers to the most recent stable release of software, while N-1 refers to the previous most recent stable release of software.

16. Answer: C Explanation: A virtual machine image is a type of virtual appliance. A virtual appliance differs from a standard virtual machine in that the virtual appliance has been preconfigured to perform a specific task (or set of tasks).



17. Answer: A Explanation: SDN decouples the network control operations (the control plane) from the network forwarding operations (the data plane). The other answers are terms that do not describe what SDN separates from forwarding.

18. Answer: C Explanation: Bandwidth is a numerical value that represents the amount of data that can be transferred across a network in a given amount of time. This value is typically expressed in bits per second (or megabytes per second or gigabits per second). While the other terms are related to network data transfer, they do not represent how much data can be transferred across a network in a given amount of time.

19. Answer: B Explanation: Network latency is the amount of time it takes for a network packet to travel from sender to receiver. Large latency values have a significant impact on service performance as well as user experience. The other answers are not appropriate for this topic.

20. Answer: B,C Explanation: There are two main approaches to IaC: declarative and imperative. The other answers are not appropriate for this topic.

21. Answer: B Explanation: When the continuous integration process is used, developers merge changes in their branch as frequently as possible (not exactly continuously, but that is the goal), and the changes are validated using some automated testing process. As a result, the main release branch is constantly validated and updated. Continuous deployment is an extension of continuous integration that allows new releases to be deployed as they become available on the main branch (after testing has occurred). The other answers are not appropriate for this topic.

22. Answer: C Explanation: Remote lockouts are not a feature of secure scripting. Individual service accounts, key-based authentication, no hardcoded passwords, and password vaults are all features of secured scripting.



23. Answer: D Explanation: Updates, scaling, shutdowns, restarts, creating internal APIs, and routine operations are all examples of automation activities.

24. Answer: D Explanation: Orchestration sequencing refers to the process of deploying instances that are related to one another in a specific order. You can use orchestration to deploy the database server first, then gather information about it (such as its IP address), and then use that information to deploy the virtual machine so that the VM sends data to the database. Orchestration sequencing can be part of an Infrastructure as Code or configuration management system, but these systems do not always include it. Version control is a tool for software development.

25. Answer: B,C Explanation: IaC (Infrastructure as Code) is a process that allows you to deploy and manage systems such as virtual machines and containers using configuration files or scripts. The other answers are not appropriate for this topic.




DOMAIN - 5 Troubleshooting

5.1 Given a scenario, use the troubleshooting methodology to resolve cloud-related issues.
Prior to implementing changes, it is essential to always consider corporate policies, procedures, and potential impacts. It may appear to be common sense to consider organizational policies, procedures, and repercussions before introducing changes. However, when you're troubleshooting a problem and other people are rushing you for an answer, this basic advice is frequently overlooked. Unfortunately, failing to heed this advice can result in consequences, including the loss of your employment.

You may be asking what kind of scenario could arise from failing to examine organizational policies, procedures, and consequences. Consider the following: a user tries to access an S3 bucket in your company's AWS infrastructure. Looking at the bucket's permissions, you can see that the user should be able to see the contents of this bucket. The user says that he is unable to access the contents and requests that the permissions be changed to include the write permission. He claims he requires quick access and believes that allowing write access will also allow him to read (view) the contents of the S3 bucket. Your company has a policy that requires a user to fill out a request form to get write access on an S3 bucket. When you communicate this guideline to the user, who you later discover is a senior manager, he responds, "We don't have time for that. Simply provide me with write access right now."

As you can see, you're in a tough spot. You want to aid the user, yet the action he requests is against business policy. The policy exists for a purpose: to prohibit unauthorized access to company data stored in S3 buckets. Although you may be tempted to give in to the user's wishes, it is essential to adhere strictly to corporate policy. You may be able to expedite the user's request, but adhering to the rules is critical to avoiding more issues.

While some people regard troubleshooting as a mystical art form, it is based on very clear actions that you should take. Here's a rundown of the steps:

Identify the problem.
• Question the user, identify any changes made, and perform backups
• Examine recent environmental or infrastructure changes
• Duplicate the problem, if possible
• Identify the symptoms
• Define individual problems clearly

Always consider corporate policies, procedures, and impacts before implementing changes.


The initial phase of network troubleshooting entails problem identification, during which you should undertake the following actions:

• Gather information: Assess the existing network status by utilizing the network troubleshooting tools at your disposal.
• Duplicate the problem: If feasible, conduct testing on a trial piece of hardware or software. This can assist in validating the source of your issue.
• Question users on the network to learn about the errors or difficulties they have encountered.
• Recognize the indications of the network outage. For instance, are they characterized by a complete loss of network connectivity or sluggish network performance, or are the problems isolated to a single user?
• Investigate any alterations within the network preceding the onset of issues. Has new hardware been introduced? Are there new network users? Have there been any software updates or modifications in the network?
• Precisely define each problem separately. Networks can sometimes exhibit multiple issues simultaneously. This is the moment to pinpoint each distinct problem, ensuring that your solutions aren't hindered by unresolved issues.

Establish a theory of probable cause.
• Develop a hypothesis regarding the likely cause (consider the obvious).
• If necessary, engage in external or internal research based on the observed symptoms.
• Remain on the lookout for other issues.

After collecting all available information regarding the network problem(s), it's time to formulate a working hypothesis. While crafting your theory about the underlying causes of the network issue, be open to questioning the apparent reasons while also remaining vigilant for potentially more significant issues. Network outages can sometimes result from straightforward problems, such as a cable being accidentally disconnected. However, in other instances, the issues may be rooted in more complex causes, such as a breach in network security.

Validate the theory to ascertain the root cause. Once the theory is substantiated, determine the next steps to resolve the problem. If the theory is not confirmed, establish a new theory or escalate. Using the tools at your disposal, it's time to test your theory. If your theory is that the network router is defective, try replacing it with another router to see if that fixes the issue. At this stage, it's important to remember that proving your own theories wrong doesn't mean that you've failed. Instead, it means that it's time to return to step two, develop a new theory, and then find a way to test that one. Sometimes your first theory may be right, but it's also common to go through several theories before arriving at the true cause of your network's issues.

Establish a plan of action to resolve the problem and implement the solution. Once you've confirmed your theory about the causes of the network issues, you're able to solve them. Come up with a plan of action to address the problem. Sometimes your plan will include just one step, for example, restarting the router. In other cases, your plan will be more complex and take longer, such as when you need to order a new part or roll a piece of software back to a previous version on multiple users' computers.

Implement the solution. Now that you have a plan for fixing the network, it's time to implement it. There are some solutions that you may be able to carry out by yourself, while others may require cooperation from other network administrators or users.

Verify full system functionality and, if applicable, implement preventive measures.
• Verify full system functionality
• Test the network, service, website, etc.
• Be aware of second- and third-order effects
• Implement preventive measures
• Talk to the individual who reported the issue

Once you've implemented your solution, be sure to test the network. Make sure that the issue in question has been resolved, but also be on the lookout for other issues that may have arisen from the changes you made to the network. As part of your verification process, make sure to consult both the network tools at your disposal and individual users' accounts of their experiences on the network.

Document the findings, actions, and outcomes throughout the process.
• Document findings so that, in the event you run into the issue again, you'll be prepared
• Document the steps and symptoms
• Document theories

Document the findings, actions, and outcomes throughout the process. If you are a network professional or someone who frequently deals with networks, it's reasonable to assume that you may encounter this specific problem again in the future. Make sure to document each stage of troubleshooting the problem, including the symptoms that appeared on the network, the theory you developed, your strategy for testing the theory, and the solution that you came up with to solve the issue. Even if you don't reference this documentation yourself, it may be helpful to another network engineer at your company in the future and could help shorten network downtime.

Network Troubleshooting Tools
Apart from user feedback and personal experience with the network, various tools are at your disposal for diagnosing and resolving network problems. These tools may be integrated into the computer's operating system, exist as independent software applications, or take the form of hardware devices designed for network troubleshooting.




5.2 Given a scenario, troubleshoot security issues.

5.2.1 Privilege
In most circumstances, privileged access is not offered to a typical user account but is reserved for people who require additional administrative authority over a resource. Managing privileged access presents several issues. Most firms adhere to a least privilege policy, in which user accounts are granted just the access that they require. The purpose is to prevent users from having access to privileged activities that they do not require; however, this might lead to issues if a user has insufficient access.

Missing
An account or job may be missing privileged credentials entirely. Examine access logs, group memberships, and account permissions using the Identity and Access Management (IAM) or logging dashboard to see whether this is the case.
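On AWS, for example, one way to perform that check is to list the groups and policies actually attached to the account. The boto3 sketch below is a minimal, hedged illustration of that idea; the user name is a placeholder, and other providers expose equivalent IAM query APIs.

```python
# Minimal sketch: list an IAM user's groups and attached policies with boto3.
# Assumes AWS credentials are already configured; "jsmith" is a placeholder user name.
import boto3

iam = boto3.client("iam")
user = "jsmith"

groups = iam.list_groups_for_user(UserName=user)["Groups"]
policies = iam.list_attached_user_policies(UserName=user)["AttachedPolicies"]

print("Groups:", [g["GroupName"] for g in groups])
print("Attached policies:", [p["PolicyName"] for p in policies])
```

If both lists come back empty, the account is likely missing the privileged credentials it needs, and the request should go through your normal access-approval process.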

Incomplete
There might be multiple processes or components involved in granting privileged access to a resource. Each scenario is unique in this situation, and there is no single troubleshooting procedure that will address the problem. Examine access logs, group memberships, and account permissions, just as you would for missing privileged credentials. Consult the documentation on how to set up privileged access to see if any steps were missed.

Escalation
The process through which an ordinary user account gains privileged access is known as privilege escalation. The mechanism used to perform the escalation can differ greatly. A user operating on a Linux virtual machine, for example, may use the su or sudo command to elevate his or her account to gain privileged access. Alternatively, if the user works with cloud-based accounts, security or IAM policies might be employed to escalate the user account. Because there are so many ways to escalate user account rights, there is no one troubleshooting methodology for determining the source of issues. To begin the troubleshooting procedure, consult log files and review the instructions for the escalation process.

Keys
Keys, rather than the typical username/password combination, may be used to gain access to cloud resources. Operating systems are designed to support multiple users and multiple accounts. It is a critical security issue to grant users access privileges at the level required for them to perform their duties, but no more than what they need. When troubleshooting privilege escalation issues, check for up-to-date antivirus software, verify that applications are running with the least privileges required for them to operate, and confirm that, if possible, kernel applications are digitally signed. Also make sure you have configured security for mandatory access controls.



Account Privilege Escalation
Privilege escalation can be defined as a user receiving account privileges that they are not allowed to possess. A storage administrator in your cloud deployment, for example, usually would not be granted full control over the firewalls. If that administrator were able to gain access to firewall configuration attributes, that would be an example of privilege escalation. Privilege escalation can occur because of a bug, a design oversight, or delegating user account management to a person who is willing to change user account settings without gaining the necessary approvals.

5.2.2 Authentication
• Authentication: identify who the user is during the logon process
• MFA: multifactor authentication
• Biometrics
• Troubleshooting: logging files
• Expired passwords
• Locked accounts/passwords

Authentication is the ability to identify who a user is, usually during the login process. Users and devices can be authenticated using a wide variety of methods; however, the most common is the standard method of a username and password. Other methods include multifactor authentication (MFA) tokens and biometrics such as fingerprint or iris scanners. When troubleshooting authentication issues, the most valuable source of information will be the logging files. When authentication fails, a log entry is generated that includes information on the reason for the failure. Accounts can be disabled and passwords can expire. Also, if the user fails after the defined number of password attempts, most systems will lock the user out, and they will not be allowed to log in.
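As an illustration of mining those log files, the short Python sketch below counts failed SSH logins per account in a Linux authentication log. It is only a sketch under the assumption that the system writes OpenSSH-style "Failed password" entries to /var/log/auth.log; the path and message format vary by distribution and logging setup.

```python
# Minimal sketch: summarize failed SSH login attempts from a Linux auth log.
# Assumes OpenSSH-style "Failed password for <user>" lines in /var/log/auth.log;
# adjust the path and pattern for your distribution or central log store.
import re
from collections import Counter

pattern = re.compile(r"Failed password for (?:invalid user )?(\S+)")
failures = Counter()

with open("/var/log/auth.log", errors="ignore") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failures[match.group(1)] += 1

for user, count in failures.most_common(10):
    print(f"{user}: {count} failed attempts")
```

A spike of failures against one account may simply mean an expired password or a locked account, while failures spread across many accounts can indicate a password-guessing attack.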

5.2.3 Authorization
• Authorization: grants access to resources
• Fewest resources necessary
• Sysadmin must grant access by assigning authorizations
• Troubleshooting: check the user account configuration for the correct groups/permissions
• Logging

Once a user is authenticated, or logged into a system, they are granted or authorized rights to resources on that system. It is common that when a user is created, they are authorized to access few, if any, services. The systems administrator must grant access to each user by explicitly assigning authorizations to the account or placing that user into a group that has authorization. When troubleshooting authorization issues, it is best to check the user's account configuration to see whether they are assigned to the required groups or have direct object authorization. Also, logging will generate records for every successful and failed authorization activity.
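A quick way to perform that check on a Linux host is to compare a user's actual group memberships against the groups an application expects. The Python sketch below is a minimal illustration; the user name and expected group names are placeholders.

```python
# Minimal sketch: verify a Linux user's group memberships against an expected set.
# "jsmith", "appdata", and "backup-operators" are placeholder names for illustration.
import grp
import pwd

def groups_for(user: str) -> set:
    """Return the user's primary and secondary group names."""
    primary = grp.getgrgid(pwd.getpwnam(user).pw_gid).gr_name
    secondary = {g.gr_name for g in grp.getgrall() if user in g.gr_mem}
    return {primary} | secondary

expected = {"appdata", "backup-operators"}
actual = groups_for("jsmith")
print("Missing group memberships:", expected - actual)
```

In a cloud IAM context, the equivalent check is listing the groups and policies attached to the account, as shown earlier in this chapter.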

5.2.4 Security groups

Network security groups
A network security group is intended to defend a network (VPC). These groups serve as virtual firewalls for your cloud network. Many of the same methods used to troubleshoot a normal firewall apply to a network security group. Check that the issue is with network access to your cloud-based network and not with the resource itself that you are seeking to access; a minimal rule-inspection sketch follows the checklist below.

● Check whether access is available on your cloud-based network.
● Check whether your network security group's policies have been modified recently.
● Examine the network security group's log file(s).
● Examine the network security group's rule(s).
● While testing new setups, keep an eye on network traffic.
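To inspect the rules themselves on AWS, one hedged approach is to dump a security group's inbound permissions with boto3, as sketched below. The group ID is a placeholder, and other providers expose equivalent APIs for their network security groups.

```python
# Minimal sketch: print the inbound rules of an AWS security group with boto3.
# "sg-0123456789abcdef0" is a placeholder group ID; AWS credentials must be configured.
import boto3

ec2 = boto3.client("ec2")
group = ec2.describe_security_groups(GroupIds=["sg-0123456789abcdef0"])["SecurityGroups"][0]

for rule in group["IpPermissions"]:
    proto = rule.get("IpProtocol")
    ports = (rule.get("FromPort"), rule.get("ToPort"))
    sources = [r["CidrIp"] for r in rule.get("IpRanges", [])]
    print(f"protocol={proto} ports={ports} sources={sources}")
```

Comparing this output against the ports and source ranges the application actually needs often reveals a missing or overly narrow rule quickly.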

Directory security groups
Signing in to remote resources in the cloud has many dependencies. Access control lists and firewalls must be configured to allow you to access a cloud service with its destination IP address and port number. Often, for additional security, the source IP range from your office is added to the security rules, and if you are not connecting from that specific location (range of IP addresses), you will be denied access. Many cloud providers have device-level security groups that block incoming traffic unless it is specifically configured to allow the connection.

A security group serves as a virtual firewall for your instance, regulating both incoming and outgoing traffic. When you launch an instance within a VPC, you have the option to allocate up to five security groups to that instance. It's essential to note that security groups operate at the instance level, rather than the subnet level. Consequently, each instance within a subnet in your VPC can be linked to a distinct collection of security groups. If you launch an instance using the Amazon EC2 API or a command line tool and you don't specify a security group, the instance is automatically assigned to the default security group for the VPC. If you launch an instance using the Amazon EC2 console, you have the option to create a new security group for the instance.



Every security group permits the creation of rules that govern both incoming and outgoing traffic to instances. In this section, we will delve into the core concepts of security groups within your VPC and their corresponding rule sets. In certain scenarios, you might opt to configure network ACLs (Access Control Lists) with rules that mirror those present in your security groups. This can serve as an extra layer of security for your VPC. For a more comprehensive understanding of the differences between security groups and network ACLs, consult the relevant documentation.

5.2.5 Keys and certificates
Keys are an alternate means of authentication in cloud computing. An authentication key is offered instead of a typical username and password. Keys are often not used when users actively log in to a cloud environment, but they are utilized when making API requests to execute an operation on a resource.

One of the issues with information security is the idea of trust. Assume you wish to connect to your bank online and put the bank's URL into a browser. Your browser receives a response from a remote server, but how can you be sure it is your bank and not a server that has hijacked your connection attempt? A digital certificate is the solution to this problem. When a browser connects to a server, the server generates a signature that is associated with a digital certificate. To validate the signature, the web browser contacts a third-party entity known as a certificate authority (CA). By employing the CA for verification, trust is established between the client system and the server. Public CAs are available for certificate signing.

Expired
Keys and certificates can both be configured to expire after a certain amount of time. Because providing an expiration date for a certificate is a default option, this problem is especially prevalent with certificates. An expired key or certificate is no longer valid, necessitating the issuance of a new key or certificate.
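One practical way to catch this before it causes an outage is to check a server certificate's expiration date programmatically. The Python sketch below uses only the standard library and a placeholder hostname; it is an illustrative spot check, not a full certificate-monitoring solution.

```python
# Minimal sketch: report when a server's TLS certificate expires.
# "www.example.com" is a placeholder host; uses only the Python standard library.
import socket
import ssl
from datetime import datetime, timezone

host = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
days_left = (expires - datetime.now(timezone.utc)).days
print(f"{host} certificate expires {expires:%Y-%m-%d} ({days_left} days from now)")
```

Running a check like this on a schedule, and alerting well before the remaining days reach zero, keeps expirations from becoming surprise outages.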

Revoked
An administrator may revoke a key or certificate if he or she considers that the access is no longer required or if the key or certificate has been compromised. A revoked key or certificate is no longer valid, necessitating the issuance of a new key or certificate.

Trust
Trust is a certificate problem rather than a key issue. The client system must trust the CA to use a certificate. Typically, this is a client-side setup parameter. In a web browser, for example, you may go to your settings and designate which CA servers the web browser will trust.

Compromised
A compromised key or certificate can constitute a significant security risk. For example, if a key is compromised, unauthorized access to resources may occur. When a certificate is compromised, a server might be hijacked and important information stolen. Troubleshooting compromised keys or certificates can be difficult unless the user who permitted the compromise reports it. Audits that check for suspicious user behavior can be useful in the case of compromised keys. User complaints or a lack of traffic to a server might indicate an issue with compromised certificates.

Misconfigured
Any resource that is incorrectly configured is likely to cause issues. Resources that rely on keys or certificates will not function properly if the keys or certificates are not properly set up. To troubleshoot this issue, review log files, double-check the configuration instructions to confirm all steps were followed, and always remember to test a configuration once you have finished the setup process.

Certificate management can be a complex and wide-ranging topic. In cloud computing, there are many different types of certificates and a variety of service offerings, ranging from your organization having complete control of the security certificates, to mixed offerings, to the cloud provider taking responsibility for certificate management. Troubleshooting will depend on your key management plan. As usual, review log files, collect and distill the information you have gathered, and plan your approach to troubleshooting certificate issues. Oh, and check the expiration date!

Security certificates, while not specifically licensing, also expire. This can be a catastrophic event because any service that relies on the certificate, such as HTTPS, VPNs, storage, and communication encryption and authentication systems, will also fail. Keep track of all certificate expiration dates and renew them prior to expiration!

5.2.6 Misconfigured or misapplied policies
• Misconfigured or misapplied policies can cause a wide range of issues
• Important to look at change management policies as well
• Unauthorized changes

Corporate Policies, Procedures, and the Impact of Implementing Changes
Change management is the process of managing all aspects of ongoing upgrades, repairs, and reconfigurations. Change management involves planning and managing changes to minimize any disruption of service. Change management outlines policies and procedures and provides a standardized process to follow, including the following:

• Recording the change
• Planning for the change
• Testing documentation
• Getting approvals
• Evaluating
• Validating
• Compiling instructions for backing out the change
• Doing a post-change review if desired

The process of managing technical configuration and customization changes can be detailed in large data center or cloud operations. A change management procedure usually includes the name of the requester, what the change is going to be, and the reason or justification for making the change. Other areas include a description of the expected result of making the change and what risks are involved. You must also outline what resources will be needed and coordinate the activities of the various groups involved in the change. A list of individuals responsible for the various aspects of the change including the design, configuration, deployment, and validation steps must be prepared. There also needs to be an investigation into other changes that are taking place to make sure no conflicts exist between those changes and yours. Also, if one change requires another change to take place before it can be implemented, the change sequences will have to be coordinated.

5.2.7 Data security issues

Unencrypted data
Unencrypted refers to anything that is not encrypted. Unencrypted data is commonly referred to as plaintext because it is viewable by both humans and computers. Unencrypted information is stored unprotected, without any encryption, and data that is not encrypted is vulnerable to online breaches. When you say data is unencrypted, you indicate that the data (a document, message, email, file, and so on) is in an unsecured form that can be easily viewed or accessed without the need for a decryption key; practically anybody can access it at any time. Unencrypted emails, for example, are more vulnerable to being intercepted at any point along the journey from sender to receiver.

Data breaches
If you follow the news, you are certainly aware of the increase in data breaches disclosed by large corporations. Hackers are growing more adept, and they are discovering new ways to gain access to businesses' confidential information. To avoid having to respond to data breach concerns, prioritize keeping the data as safe as feasible. Securing the data involves following several policies and procedures, such as segmenting data, limiting access to data, and implementing a proper records management strategy. You should also consider the following procedures to assist in troubleshooting a data breach:

● Use tools to assess whether a data breach has happened.
● Have a plan in place in case of a data breach.
● Do not be afraid to enlist the assistance of a third-party business to help you mitigate the data breach.
● Contact local law enforcement immediately; the sooner a crime is reported, the better law enforcement personnel can assist.

Lack of encryption in protocols
Protocols that do not use encryption put data at risk of being stolen while in transit. This problem may be avoided by encrypting data before transit with a separate program, although it is usually preferable to avoid using protocols that lack encryption. As a result, in your cloud environment, you should avoid employing the following protocols:

● File Transfer Protocol (FTP)
● Telnet
● Network File System (NFS)
● Simple Network Management Protocol (SNMP)
● HTTP

The "don't use these protocols" guideline may have certain exceptions. For example, modern businesses frequently utilize FTP to share data with anonymous users in read-only mode. And, while some businesses utilize SNMP and NFS in certain instances inside the bounds of a secured network, they should never be used outside the company network.

Insecure ciphers
Several ciphers are no longer considered safe because the algorithms used to perform the encryption are open to attack. Be aware of the following ciphers, which the encryption industry currently deems unsafe, and avoid using them (a quick way to see what a server actually negotiates is sketched below):

● DES
● 3DES
● RC4
● AES in CBC mode
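To see which cipher suite a given server actually negotiates with your client, you can open a TLS connection and inspect the result, as in the hedged standard-library sketch below. The hostname is a placeholder, and a dedicated scanning tool is more appropriate for a full cipher audit.

```python
# Minimal sketch: report the TLS version and cipher suite negotiated with a server.
# "www.example.com" is a placeholder host; this checks one connection, not every suite offered.
import socket
import ssl

host = "www.example.com"
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        name, version, bits = tls.cipher()
        print(f"Negotiated {name} ({bits}-bit) over {version}")
```

If the negotiated suite includes one of the weak ciphers above, the server (or the client policy) should be reconfigured to require stronger options.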

Data sent across a network unencrypted is susceptible to interception and allows an attacker easy access to your data because all the information will be in clear text. For example, using Telnet for remote command-line access exposes your username and password credentials to anyone with a network sniffer and the ability to connect to your network. If you must conform to regulatory requirements such as HIPAA, SOX, or PCI in the United States, or meet corporate security directives, you will need to replace these older systems with current, more secure applications. Telnet has been replaced with Secure Shell (SSH). FTP has a secure version called SFTP. Web browsers support the unsecure HTTP protocol as well as HTTP over SSL/TLS, commonly referred to as HTTPS. Remote network connections use VPNs that are encrypted with IPsec or other encryption architectures. Secure systems can be difficult to troubleshoot because the data is, by definition, encrypted and offers minimal insight into any issues it may be experiencing. Log files can be useful when troubleshooting encrypted systems because they are readable in plain text and, if consolidated, offer insight into both ends of the link.

Maintain Sufficient Security Controls and Processes
The cloud provider will most certainly have a dedicated security department that encompasses not only security from a technology standpoint but also physical security. Security practices are critical to avoid breaches and to keep in front of those who want to damage you or your company's reputation. As part of an effective and complete cloud operation, your company should develop and document its security posture and controls. This should be part of your regular operations documentation and project updates. The concept of least privilege is important, as it grants only the security permissions that are actually required and nothing additional. Division of duties is common, where no single person has end-to-end access responsibility. With division of duties, a single individual cannot take control because a second person holds access to a critical part of the cloud that the other does not possess but needs to complete the role. Security processes and controls are large topics that are rapidly evolving; CompTIA offers separate education tracks on security.

Network Access Tunneling and Encryption
When interconnecting networks, especially over a public network such as the Internet, it is mandatory that you encrypt all traffic from the source to the destination. Encryption can be a complex topic, as you have learned throughout this book. Use the most current encryption types and transforms, such as AES-256, to protect your data. Cloud providers commonly offer encrypted VPN connections from your facility to your cloud fleet of services. Use their best practices and follow the guidelines that they publish online. You will need this information when configuring and testing a VPN connection. Sometimes the terminology can be confusing. It is mandatory that you read the manual to make this work!

5.2.8 Exposed endpoints
Any device or resource that is intended to be the destination of a network connection is referred to as an endpoint. Many endpoints are built to be accessible to entities outside your network. Your web server and email server, for example, are endpoints that should be reachable from the Internet. It is crucial to implement a security strategy for these endpoints, ensuring that only authorized users can access them. However, many endpoints should not be exposed to the outside world. Consider a database resource that holds sensitive corporate data. It is critical to ensure that such endpoints are protected by a firewall or other comparable security device.

5.2.9 Misconfigured or failed security appliances

IPS
An intrusion prevention system (IPS) checks for intrusions and can act if one is identified. For example, a host-based IPS (HIPS) may identify a suspect login and, as a result, ban access from the originating IP address to the system. An IPS has an advantage over an IDS in that possible threats can be mitigated faster than if a human is required to intervene. The downside of using an IPS over an IDS is that false positives may result in access being denied to someone who should have access.

IDS
An intrusion detection system (IDS) is a software or hardware device that detects whether an intrusion is taking place or has taken place on a network or a host. IDS software is classified into two types: network-based and host-based. A network-based intrusion detection system (NIDS) is software that examines network traffic to identify whether an intrusion is occurring or has occurred. A host-based intrusion detection system (HIDS) is installed on specific systems and monitors the condition of the system to identify whether an intrusion is occurring or has occurred. There are several alternatives for these software packages, but the main idea is that a HIDS will use a database that describes what to monitor on the machine. This can involve monitoring actions taken by other software programs, actions taken by users, changes to the operating system's settings, and changes made to file systems.

NAC
Consider the following scenario: your company has adopted many software packages to safeguard its network and resources. This includes antivirus software, threat detection software, device management software, data loss prevention software, and other programs. Each package is meant to fulfill a specific purpose, yet they all operate independently, even though they all contribute to the overall goal of endpoint security. Consider network access control (NAC) as a means of bringing these disparate applications together under a single administrative umbrella. NAC uses policies to regulate numerous components of a network to increase network security while also making network protection more apparent and transparent to the NAC administrator.

WAF
The majority of common firewall software products are meant to safeguard a network or an entire operating system. Although there will be times when you will want to use that form of firewall in your cloud environment, there is a more specialized circumstance where you will want to use a different type of firewall: the web application firewall (WAF). A WAF is intended to defend HTTP/HTTPS-based OSI layer 7 applications. This is significant for cloud settings because apps in a cloud architecture can function independently of the operating system. By analyzing HTTP traffic, a WAF is meant to filter and monitor inbound connections to apps. It protects your apps from SQL injection and cross-site scripting (XSS) threats.




5.2.10 Unsupported protocols
An unsupported protocol is one that your company has decided cannot be used. The reason for not allowing it might be security concerns, or the protocol might not meet the organization's business needs. Port scanners, for example, can be used to detect which protocols are being used on a cloud resource inside your cloud architecture.
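A very small-scale version of such a check can be scripted directly, as in the sketch below, which probes a short list of TCP ports on a single host. The host address and port list are placeholders, you should only scan systems you are authorized to test, and a dedicated scanner such as a commercial or open-source port scanner is more appropriate for anything beyond a spot check.

```python
# Minimal sketch: probe a few well-known TCP ports to spot unsupported protocols in use.
# "10.0.0.15" and the port list are placeholders; scan only hosts you are authorized to test.
# Note: protocols that run over UDP (such as SNMP) are not covered by this TCP-only probe.
import socket

host = "10.0.0.15"
ports = {21: "FTP", 23: "Telnet", 80: "HTTP", 2049: "NFS"}

for port, proto in ports.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        status = "open" if sock.connect_ex((host, port)) == 0 else "closed/filtered"
        print(f"{proto} (tcp/{port}): {status}")
```

Any "open" result for a protocol your policy forbids is a finding to investigate and remediate.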

5.2.11 External/internal attacks
Many security specialists devote a significant amount of time and effort to preventing external threats and to fixing the difficulties that develop as a result of such attacks. Internal attacks, which are initiated by legitimate users within the company, must also be considered. Inside users must be trusted, yet this trust frequently makes it simpler for an internal user to conduct an attack. The techniques and tools listed below can help you troubleshoot and limit internal attacks:

● Document clearly which actions are permissible and which are not permitted.
● Make users aware of their security obligations.
● Impose stringent security measures, such as strong passwords.
● Ensure the security of internal resources, especially the network.
● Use auditing tools to gain a better understanding of user behavior.

5.3 Given a scenario, troubleshoot deployment issues.

5.3.1 Connectivity issues

Cloud service provider (CSP) or Internet service provider (ISP) outages
• CSP or ISP outages
• Outages are rare but not unheard of
• Some factors are under your control, some are not
• Public clouds can handle device failures with availability zones

Service outages can come from a wide variety of sources, some under your control and many that are not. If there is a power or carrier outage, you must rely on the provider to restore services. Storms or natural disasters can cause outages that may take time to restore. A good disaster recovery plan is essential because you may have to decide to fail over to your backup site if the outage is expected to last for an extended period of time. Outages in the cloud are rare but certainly not unheard of. Public clouds are designed to be highly resilient to individual device failures. With the popular availability zone architecture, should an availability zone fail in a cloud region, applications can be designed to automatically fail over to the backup availability zone with minimal loss of service. Cloud providers, as discussed, have extensive reporting systems that publicly disclose network health and any availability issues in their network that they are currently experiencing.

● Most CSPs have a publicly accessible webpage that identifies which services are operationally sound and which are not.
● If you are unable to connect to one of your cloud resources, make sure you can connect to other resources within the same cloud environment or network. To narrow down the problem, you should identify whether the connectivity issue is tied to a single resource or to all of your resources; a basic reachability sketch appears after the figure below.
● If you can connect to where the resource is specified in the CSP environment, you should ensure that it is appropriately set up.
● Examine cloud network configurations to ensure that a network configuration isn't the source of the issue. This includes IP settings as well as security appliance settings such as NAC or WAF.

Figure 35: AWS Service Health Dashboard
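To work through the reachability checks listed above, a simple TCP connection test against a few of your own endpoints can quickly separate a single-resource failure from a wider outage. The Python sketch below is illustrative only; the hostnames and ports are placeholders for your actual resources.

```python
# Minimal sketch: test TCP reachability of a few cloud endpoints to narrow down an outage.
# The endpoint names and ports are placeholders; substitute your own resources.
import socket

endpoints = [
    ("app.example.internal", 443),
    ("db.example.internal", 5432),
    ("cache.example.internal", 6379),
]

for host, port in endpoints:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} unreachable ({err})")
```

If only one endpoint fails, focus on that resource's configuration; if everything fails, suspect the network path, your ISP, or a CSP outage reported on its status page.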

5.3.2 Performance degradation
In networking, performance degradation refers to a decline in network response speed. Decreased network performance can be caused by a variety of issues, including:

● Increased bandwidth utilization
● Inadequate physical connections
● Networking devices that aren't operating properly
● DNS problems
● Problems with the ISP
● Quality of service (QoS) concerns

Network performance deterioration is frequently linked to a problem known as latency.

Latency
● Bandwidth
● Latency
● From the cloud to the Internet
● Public clouds
● Network starvation

Server machine images allow for a selection of adapter speeds, usually offered in 1Gbps or 10Gbps options. Network I/O is a measurable object whose metrics should be included in your baseline documentation. If network bandwidth exceeds local area network (LAN) adapter capacity, you are left with two options: reduce the load on the adapter, or upgrade to a higher-speed network interface. For applications that require high-speed, low-latency connections, it is best to place all of your VMs on the same hypervisor and use machine images that have 10Gbps Ethernet adapters installed for maximum bandwidth. Bandwidth capacity from the cloud to the Internet can be a potential bottleneck. Public cloud providers own this part of the network, including its management and troubleshooting. If it is a private cloud, you may have greater control over bandwidth capacity and utilization. If so, it will be your responsibility to work with your network carriers to increase the speed of your existing connection or add additional circuits. Network capacity starvation can lead to latency and degraded performance. Applications may time out, and for e-commerce sites, the result is high customer dissatisfaction.
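
As one way to compare measured network I/O against adapter capacity, the hedged sketch below pulls an EC2 instance's NetworkOut metric from Amazon CloudWatch using boto3. The instance ID is a placeholder, credentials are assumed to be configured, and the Mbps conversion is approximate.

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="NetworkOut",               # bytes sent per sample period
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=start,
    EndTime=end,
    Period=300,                            # 5-minute windows
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    # Rough conversion: bytes in a 5-minute window -> megabits per second.
    mbps = (point["Average"] * 8) / 300 / 1_000_000
    print(f'{point["Timestamp"]}: ~{mbps:.2f} Mbps outbound')

Comparing these values against the adapter speed recorded in your baseline shows whether the interface is approaching saturation.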

5.3.3 Configurations
Scripts
● Automation and/or scripts
● Troubleshooting automation issues
● Is the issue in the backend virtualization service?
● Monitoring dashboards
● Log files generated
● API calls

What could possibly go wrong with a fully automated system where just the click of an icon unleashes a host of cloud automation scripts? Well, many things can go wrong! Not only that, automation often hides the complexity from the user and does not always report results to the person using these automated services. The ability to troubleshoot automation-related problems comes down to whether you control the process or another party such as the cloud provider does. If the automated systems are part of the backend virtualization services embedded in the cloud provider’s operations, then it is beyond your reach to troubleshoot the actual issue. However, when using these services, it is critical that you validate the end result is what you expect. Look at your original deployment plan and what you actually configured in the deployment and compare that to what you see as running services. If there is a discrepancy, go back and make sure that you requested what was actually deployed. Is there a difference between the input and the output? If the input, or requested, configuration is what you expect but the deployed services are different, it could very well be that the automation software did not work as expected. Log files can be collected on all API calls to a device or service; these can offer valuable information on the success or failure of the scripts used. A well-written script can also validate the changes and roll back if there was a script failure. Monitoring dashboards can also be configured to monitor the results of the automated systems in a graphical manner. Since it is most likely that you do not have access to the underlying automation code, your only real option here is to open a trouble ticket with the cloud provider for investigation. Also, the frequently asked questions (FAQ) or support discussion forums may have additional information on the issue you are troubleshooting. If it was your organization that developed the automation applications, then you will have much greater flexibility in troubleshooting. Are there log files that were generated? Did the script fail before completion? Did an API call fail? By investigating the software and stepping through its execution, you, or your development team, should be able to identify the issue internally to your organization.
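
One way to compare the input (what you requested) against the output (what is actually running) is sketched below using boto3. The deployment plan dictionary and instance ID are hypothetical, and a real validation script would check many more attributes.

import boto3

# Hypothetical deployment plan: the configuration you *requested*.
expected = {
    "i-0123456789abcdef0": {"InstanceType": "t3.large"},
}

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(InstanceIds=list(expected.keys()))

for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        want = expected[inst["InstanceId"]]["InstanceType"]
        got = inst["InstanceType"]
        status = "OK" if want == got else "DRIFT"
        print(f"{inst['InstanceId']}: requested={want} deployed={got} -> {status}")

A discrepancy reported here points at the automation layer rather than at your request, which is exactly the distinction described above.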

5.3.4 Applications in containers
● Container compromised
● Risk management
● Some applications aren't suited for containers
● VMs may be more appropriate
● Consider the architecture

Using containers in the cloud makes portability simple and easy. At least, that's the story being told to the cloud application development masses who want to quickly adopt containers. But there's another side to this story. It takes some knowledge of the processes and enabling technology to make cloud-to-cloud migration using containers a lower-effort and lower-risk affair. Also, the required skills and best practices aren't as pervasive as they should be. Enterprises are struggling to catch up.


Here's how to use containers with heterogeneous cloud migration in mind, including how to define the proper processes, the right tools, and what you can expect in terms of real-world results.

The good ...and the bad
Docker established an early precedent in container software, with around a dozen competitors like Warden, LXD, and CoreOS, as noted by research firm IDC. Numerous container providers are enhancing their own iterations of container orchestration software, which is the underlying layer of programming that facilitates the seamless integration of containers into functional business solutions. Kubernetes currently leads the way in the realm of container orchestration software. We don't want to get too far into the weeds here, since so many other sources are available that describe the inner workings of containers. In short, containers break up apps into smaller packages of code, each bundled with all the basic system software that apps need to operate independently of whichever server plays host. The key advantage of containers is their cloud portability. This means that developers usually won't need to rewrite code for each different operating system and cloud platform. Moreover, applications are shifting their focus from specific devices, like a desktop PC, to more expansive environments, such as a cloud capable of serving millions of users across diverse mobile and stationary devices. Containers enable these applications to scale effectively while establishing a straightforward path between the source and target platforms. Transferring containers from one cloud provider to another is as straightforward as downloading them onto the new servers. Sound good? You bet. But, with all the upsides, there are a few downsides to consider as well. It's time to put aside the novelty of containers for now, and focus on the limitations and impediments related to using containers in public and private clouds.

Container security boundaries introduce risks
Containers have limitations when it comes to security, and you need to understand those limitations before you build applications that use them. For the most part, security admins inside of enterprises don't yet "get" containers, and they need to understand them more fully before you make use of this technology. Containers in the cloud lack the same security boundaries as virtual machines (VMs). If hackers identify vulnerabilities in the underlying operating system, they can exploit these weaknesses to gain access to the containers. Conversely, they can exploit a container to access the underlying server. In either case, the container becomes compromised, whether it's moved from on-premises to the cloud or between different cloud environments. Another concern with containers lies in the container creation process. When an organization creates its own containers, their security is dependent on the organization's existing security measures. If the staff lacks expertise in container security, the containers they create may not meet the security standards required for migrating container-based applications to public or private cloud environments. Additionally, when organizations use containers from public repositories, they must trust that these containers were originally created with security in mind. To mitigate this risk, it's advisable to select containers from public repositories that are signed by the Docker Content Trust system, ensuring the download of legitimate containers.
From a security perspective, containers may exhibit weaknesses as they share an OS kernel and components. Moreover, they typically require deep-level authorization, often requiring root access in Linux environments, to run. Consequently, security breaches have the potential to impact the underlying OS and potentially affect other containers, as attacks can propagate across this shared environment. Although major breaches involving containers have not yet been widely reported, the evolving sophistication of hackers may lead to an increase in such attacks. Many organizations are not adequately prepared for this emerging threat landscape. Therefore, while containers offer the advantage of cloud portability, their adoption may inadvertently expose containerized applications to increased vulnerabilities in the pursuit of cloud flexibility.

Some applications aren't suited to containers
Containers may not be suitable for all applications, and their adoption could potentially introduce unnecessary complexity to simple applications. While the allure of building and deploying container-based applications is undeniable, it's essential to assess whether the use of containers aligns with the specific requirements of your applications. When considering the use of containers in the cloud, it's crucial to evaluate the types of applications you intend to create and whether containerization is necessary to achieve portability. Containers excel in scenarios involving microservices-based application development and deployment, as they enable the seamless linking of containers to create a cohesive application. The primary objective here is to achieve scalability through distribution, launching container instances only for the components of the application requiring increased processing capacity. The issue, however, is that not all applications will benefit from containerization. Certain cloud applications may be best suited to a monolithic design. For example, if they don't need to scale, then breaking them up into a series of containers will only make the application more complex, and could even hinder portability, considering the number of moving parts that have to be bundled together. In these cases, containers place limitations on the workloads, and they may even place limitations on the ability to port the application from cloud to cloud or from on-premises to cloud. Organizations are still experimenting with different uses for containers. As businesses progress through more projects, best practices will continue to emerge.

Sometimes you need VMs, not containers
When considering portability, you should consider containers, but also think about the tradeoffs between using containers versus VMs. VMs are self-contained. Each VM includes a unique operating system (OS), drivers, and application components. You can also migrate VMs to any other system, as long as a compatible hypervisor is available. Containers are a different animal. These applications operate on the physical OS, utilizing a significant portion of the underlying OS kernel, along with numerous libraries and binaries. Thus, they're a bit more coupled to the OS, considering the number of dependencies that most container developers will build into the containerized applications. These dependencies can limit portability between clouds. For instance, Linux containers under Docker can't run directly on current versions of Windows Server.


It's easy to create a container mess in the cloud
Moving containers to the cloud can introduce a significant challenge: the potential for resource mismanagement. Containers offer the advantage of rapid deployment, but this speed can also lead to excessive resource consumption if not carefully monitored. It's essential to ensure that unused containers are shut down systematically. However, developers may sometimes neglect to adjust the container count after launching container-based applications, leaving resources allocated unnecessarily until such adjustments are made. In the public cloud space, this could cost you as much as $50,000 per month for cloud-based resources that deliver no benefit to the business.

Paving a path forward
The path to cloud portability will be paved by containers; that much is clear. However, containers should not be seen as a one-size-fits-all solution for achieving cloud portability. Their effectiveness can be limited by factors like security, application compatibility, and deployment requirements. Fortunately, major public and private cloud providers are increasingly container-friendly, offering more reasons for developers to adopt containers for portable application development in the cloud. In addition to the previously mentioned tips, it's crucial to follow these best practices before venturing into new application development or migrating existing applications to containers:

1. Extensive testing: To determine the suitability of containers for your specific needs, conduct thorough testing. This should involve creating test applications that replicate real-world usage scenarios, including containerization of actual workloads. Gathering knowledge through testing is essential before transitioning to production.

2. Carefully plan the architecture: Incorporating containers into your cloud infrastructure requires meticulous planning and architecture considerations. Efficiently managing dependencies, scaling resources, and ensuring robust replication will yield significant benefits when deploying your application. This involves resource management, optimizing application performance, ensuring reliability, and enhancing portability.

3. Understand your boundaries: Experiment with a few applications to gauge the challenges of building or adapting applications for containers and to comprehend the practical aspects of portability.

Containers, like any technology, come with their own set of advantages and drawbacks. The question isn't whether or not containers are in your future; it's clear that they are. The question is how to best make use of the technology. Most people use containers for portability and to protect themselves from cloud computing vendor lock-in. As long as you understand that nothing's perfect, and that there's a lot to learn, you'll do just fine.

5.3.5 Misconfigured templates
● Templates are software representations of networked systems
● Deploy complete cloud systems at a single time
● Tools such as InsightVM can examine the configuration settings of your VMs and compare them to industry best practices
● Benchmarks for cloud VMs

Templates are software representations of networked systems. By using these templates, you can deploy complete cloud systems at a single time. This allows you to implement "one-click provisioning" instead of deploying and configuring individual cloud objects. Templates can be helpful with automated deployments of virtualized systems. A template may use API calls to cloud services that have changed over time, so it is a best practice to validate that your templates are current and operational to prevent them from failing when they are most needed.

How to discover misconfigurations
The Cloud Configuration Assessment feature in InsightVM is designed to collect and analyze the configuration settings of your AWS accounts. It then compares these configurations against industry best practices, offering insights into potential vulnerabilities that may impact your cloud environment's security. This feature is an integral part of InsightVM and seamlessly integrates with its comprehensive vulnerability risk management capabilities. It allows organizations to enhance the security of their hybrid environments without the need for multiple tools or additional costs associated with cloud-specific functionality. Cloud Configuration Assessment utilizes a library of rule checks that align with the complete CIS AWS Foundations Benchmark, incorporates best-practice assessments from AWS, and adds Rapid7's proprietary checks. These checks are applied to your configuration data to evaluate its compliance with established policies and standards. Compliance is determined per rule on a "Pass" or "Fail" basis, and each rule is ranked by severity level so you know what to work on now and what can wait. Furthermore, this feature offers a visual representation of potential misconfiguration risks, presenting them alongside risks associated with the broader IT environment. This holistic view provides users with a comprehensive understanding of their real-time exposure to various risks. For users looking to focus on specific areas of their business, the tool allows for targeted searches and filters to identify misconfigurations based on specific criteria. This streamlines the remediation process. As your team addresses these issues, InsightVM provides interactive dashboard reports to visually track progress. With Cloud Configuration Assessment, your security teams will be able to more effectively collaborate with your AWS team to resolve misconfigurations and reduce the likelihood of compromise.
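
Independent of any particular assessment product, a quick template sanity check can be scripted. The sketch below assumes AWS CloudFormation and boto3, the template file name is a placeholder, and note that validate_template only catches syntax problems; it will not flag references to resources or API behaviors that have changed since the template was written.

import boto3
from botocore.exceptions import ClientError

cfn = boto3.client("cloudformation")

# Placeholder file name for a template stored alongside this script.
with open("my-stack-template.yaml") as f:
    body = f.read()

try:
    result = cfn.validate_template(TemplateBody=body)
    print("Template is syntactically valid.")
    print("Parameters:", [p["ParameterKey"] for p in result.get("Parameters", [])])
except ClientError as err:
    print("Template failed validation:", err.response["Error"]["Message"])

Running a check like this as part of change management helps catch broken templates before they are needed in an emergency deployment.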

5.3.6 Missing or incorrect tags
● Tags can be applied to resources to help identify and categorize resources
● Example tags in AWS:
  ● aws:cloudformation:logical-id
  ● aws:cloudformation:stack-id
  ● aws:cloudformation:stack-name

You can utilize the Resource Tags feature to assign tags to resources, aiding in their identification and categorization. Please note that AWS CloudFormation supports tagging only for specific resources. To learn more about the resources that can be tagged using AWS CloudFormation, refer to the relevant documentation.
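
As an illustration of applying and auditing tags, a boto3 sketch might look like the following. The tag keys, tag values, and instance ID are placeholders for your own tagging standard.

import boto3

ec2 = boto3.client("ec2")

# Apply identifying tags to a resource.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "Environment", "Value": "production"},
          {"Key": "CostCenter", "Value": "web-team"}],
)

# Report instances that are missing the required Environment tag.
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
        if "Environment" not in tags:
            print(f'{inst["InstanceId"]} is missing the Environment tag')

A periodic audit like this catches missing or misspelled tags before they break cost reporting or automation that filters on tag values.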


5.3.7 Insufficient capacity
Compute
The computing portion of a cloud resource is made up of numerous components, including:

● Physical CPUs
● Total number of processors
● Total number of cores
● Threads (also known as hyperthreading)
● Virtual CPUs (vCPUs)

When you construct a virtual computer, you can specify these values. For example, when building a virtual machine on AWS, you may choose the number of CPUs, threads, and virtual CPUs.
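
For example, a hedged boto3 sketch for requesting a specific CPU layout at launch time might look like the following. The AMI ID and other values are placeholders, and CpuOptions is only honored on instance types that support custom core and thread counts.

import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=1,
    # Explicitly size the virtual CPU layout: cores x threads per core = vCPUs.
    CpuOptions={"CoreCount": 2, "ThreadsPerCore": 2},
)
print(response["Instances"][0]["InstanceId"])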

Storage
A storage resource is anything that is utilized to store data in the cloud. There are three kinds of cloud storage resources:

Block storage: This sort of storage resource is generally used by compute resources (virtual machines in particular) to store data, including the compute resource's operating system. If you are used to non-cloud storage devices, think of a block storage resource as a hard drive or a partition. AWS Elastic Block Store (EBS), Microsoft Azure managed disks, and Google Cloud Persistent Disks are a few examples. Block storage is often raw storage space that requires a file system to be installed on the storage resource before it can be utilized. This is commonly done during the virtual machine installation procedure.

File storage: This storage resource functions similarly to a network file system (it is also commonly known as network-attached storage, or NAS). This form of storage is employed when you need to share files with numerous resources. For example, two virtual machines may require access to the same files (or may share files between the two VMs). AWS Elastic File System (EFS), Azure Files, and Google Cloud Filestore are all examples of file storage solutions.

Object storage: Object storage is a feature that allows objects (unstructured data such as emails, videos, pictures, text, or any other type of data) to be saved in the cloud. Rather than using typical file system storage characteristics, object storage arranges data into "groups" (like a folder in a file system). Data is often accessed using a URL, similar to how you would visit a web page.

© Secbay Press

295 | Page

Bandwidth issues
The topic of optimizing network bandwidth is vast. Bandwidth is a numerical measurement that reflects the amount of data that can be moved through a network in a given amount of time. This number is often expressed in bits per second (or megabits per second or gigabits per second). Bandwidth in cloud computing can be linked with a virtual private cloud (VPC) or with a single resource. After you've calculated your bandwidth requirements, you can simply ensure that your cloud network or resource has enough bandwidth. However, while this strategy may rightsize the network in its current form, it is not a truly optimized network, which means you may be paying for more bandwidth than is actually necessary. There are other strategies you may take to lessen the demand for bandwidth, such as the ones listed below:

Optimize the flow of traffic: With this strategy, you are looking for ways to restrict the amount of network traffic that occurs within your cloud network. For example, if you have a database that is actively utilized by a web server, putting these two services on the same subnet rather than different subnets will reduce total network traffic.

Use network shaping: By prioritizing network traffic, you may ensure that more critical network traffic is not harmed by traffic volume. One option is to utilize bandwidth throttling, which limits network access for certain resources.

Use load balancing: By deploying groups of servers across multiple network segments or subnets, you may use load balancers to distribute traffic across networks while limiting bandwidth utilization on any one network.

Schedule updates during off-peak hours: Updates frequently necessitate the use of bandwidth to download updates from the Internet or internal update servers to the servers in the VPC/VNet. The amount of bandwidth consumed can influence network speed in some instances, especially if you have many resources. You may calculate this period by analyzing the metrics supplied by your network monitoring tools.

Oversubscription
If you're a regular flyer, you're undoubtedly accustomed to hearing statements like this: "We're searching for passengers who are willing to accept a later flight in return for a voucher for future travel." This is the outcome of an airline practice known as overbooking, in which the airline books more people than there are seats since, historically, some travelers cancel their flights at the last minute. This booking strategy allows an airline to optimize the usage of available seats, even if it occasionally necessitates the placement of certain passengers on other flights. Cloud providers employ a similar strategy. When a compute node hosts several resources (virtual machines, databases, and so on), cloud providers will provision more hardware resources than are physically practical to deliver in order to maximize income for that compute node. Cloud providers are effectively wagering that customers will not use all of the actual or virtual hardware resources allocated to a cloud instance, therefore the provider will "overbook" the compute node's resources.


Resource Contention and Starvation
As you learned in Chapter 8, a good baseline needs to be established so that you have solid data on what is normal. Once you have data on what is considered to be normal, you can monitor to see whether there are any issues with current measurements, such as contention for resources. As your cloud activity grows, you may, at some point, run out of system resources. A resource starvation issue will usually manifest itself as a slowdown in performance as the application contends for what has now become a scarce resource. Resource contention is conflicting access to resources such as CPU, network bandwidth, random access memory, or storage. This leads to resource starvation, where services are denied access to these pools. To troubleshoot limited resource problems, it is helpful to have set up object monitoring on critical systems and to display the metrics on a dashboard. Also, thresholds can be programmed to trigger alarms when, for example, CPU utilization exceeds 90 percent for five minutes (a minimal alarm sketch appears later in this section). Common resources that may saturate over time are CPU, memory, storage, storage network bandwidth, and network capacity. To resolve contention and starvation issues, it usually comes down to throttling the system consuming the resource or having to add capacity. If you add capacity, you can decide to use autoscaling to dynamically add or subtract resources based on current load, or to manually upgrade virtualized resources for greater capacity.

Storage Capacity
One cloud resource that always seems to grow and grow is storage. Storage volume utilization can be tracked as a managed object, and triggers can be configured to alert operations at a predefined usage threshold. Managing storage capacity is a critical task; if a storage volume fills up, applications can fail, and data can be lost. Watch this closely! To resolve storage capacity problems, you can migrate the data to a larger volume or, if you have elastic storage volumes, increase the capacity.

Variance in Number of Users
When collecting data for your baselines and ongoing operations, the number of concurrent users on the platform is a useful metric to track. Usage can show you how many users are visiting your site over time and display valuable information on peaks and valleys of usage by the hour, day, and month. Site activity can, of course, be an important measurement for planning additional capacity. When troubleshooting cloud performance issues, the current number of users on the system can be an important indicator of load. When there are high user counts, you can track which parts of your deployment are most heavily used and track the load on all the devices to see where the congestion points are. Also, tracking users and the variance from your baseline is critical to managing software license models based on user counts. Many identity access control systems have maximum user hard limits that need to be monitored and managed.
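
The following is a minimal CloudWatch alarm sketch for the CPU example mentioned above (90 percent for five minutes); the instance ID and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="cpu-above-90-for-5-minutes",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                     # one 5-minute evaluation window
    EvaluationPeriods=1,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)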


5.3.8 Licensing issues
● Licensing issues often manifest as a sudden failure after a system has been working
● IaaS and PaaS allow you to run your own software
● With SaaS, the provider is responsible for licenses
● License capacity issues: number of users, connections, named users, concurrent connections, metrics

Licensing issues often manifest themselves as a sudden failure after everything has been working fine all year. After a quick investigation, you see in the log files that the application’s license has expired! The service model will determine how to start troubleshooting a licensing issue. As you learned, the IaaS and PaaS models allow you to install and run your own applications. Since you own the application, you are responsible for the licensing that allows you to use it. With SaaS, the cloud provider is responsible for the application, and you pay for its use. As such, the SaaS provider is responsible for the software license, so the provider will manage all licensing-related issues. You did follow best practices and document how licensing is managed between your company and your application vendor, right? This will be extremely handy when troubleshooting licensing-related outages. There can be a remarkable level of complexity when licensing application software. The vendors offer a bewildering number of options, and no two companies are alike. Software licenses for cloud deployments may be different than the structure for the same application running in your private data center. One may base its license on the total number of users, named users, or concurrent connections. It is common to find licenses for each processor that is used or the number of cores. There is more, licenses can also be based on actual usage measured by object monitoring. License capacity can be measured in many ways, such as the number of users, total connections, named users, concurrent connections, or usage metrics. The license capacity responsibility is dependent on the service model. If it is an IaaS or PaaS model, you will most likely be responsible for the licensing, and if it is a SaaS model, the cloud service provider will take care of all licensing issues. The bottom line is that you should track license consumption and keep ahead of capacity requirements. Security certificates, while not specifically licensing, also expire. This can be a catastrophic event because any service, such as HTTPS, VPNs, storage, and communication encryptions and authentication systems that rely on the certificate, will also fail. Keep track of all certificate expiration dates and renew them prior to expiration.
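
A lightweight way to stay ahead of certificate expirations is to script the check. The Python sketch below uses only the standard library, and the host names are placeholders for your own endpoints.

import ssl
import socket
from datetime import datetime, timezone

# Placeholder host names: list every endpoint whose certificate you are responsible for.
hosts = ["www.example.com", "vpn.example.com"]

context = ssl.create_default_context()
for host in hosts:
    with socket.create_connection((host, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # Convert the certificate's notAfter field to a datetime and compute days remaining.
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"{host}: certificate expires in {days_left} days ({cert['notAfter']})")

Scheduling a report like this well ahead of expiration dates helps avoid the catastrophic outages described above.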

5.3.9 Vendor-related issues
Migrations of vendors or platforms
Vendor difficulties might be connected to differing platforms among suppliers (or between a supplier and your on-premises solution), such as those that can lead to vendor lock-in. You may also encounter challenges such as differing storage or database solutions supplied by the vendors to which you are moving.

Integration of vendors or platforms
During this stage, you will first solve any issues that arise when connecting your on-premises systems with the cloud solutions that you are implementing. Integration, however, is not a one-time event. You will be adding, removing, and replacing on-premises solutions in the future, and they will almost certainly need to interface with your existing cloud architecture. Furthermore, if you delete, add, or alter an existing cloud resource, you must ensure that it does not negatively influence your on-premises infrastructure. Deploying a blue-green environment in both your on-premises and cloud architecture is one technique to address integration issues. This allows you to run tests and resolve issues before changes are deployed in a production setting.

API request limits
An application programming interface (API) is a mechanism that allows a client and a server to communicate in a well-known way. A client sends an API request to a server, usually using representational state transfer (REST). One possible difficulty with API requests is that cloud suppliers may impose a restriction on the number of API calls that may be made. Because your licensing plan may limit the number of API requests that may be made in a certain period of time, this number may be related to billing costs. What happens when you hit that limit? Most of the time, additional requests are denied. If the server to which the API is being routed is mission critical, refused requests might have a substantial impact on cloud performance. In some circumstances, if you exceed your API limit, you may be charged for each additional request. Going above the limit can have far-reaching consequences.
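
Well-behaved clients typically respond to request-limit errors with exponential backoff. The sketch below is a generic illustration: the callable and the throttling exception type are supplied by you, and note that many SDKs (boto3, for example) already implement this retry behavior internally.

import random
import time

def call_with_backoff(api_call, throttle_exc, max_attempts=5):
    """Retry a throttled API call with exponential backoff plus jitter.

    api_call     -- zero-argument callable that performs the request
    throttle_exc -- exception type your SDK raises when the rate limit is hit
    """
    for attempt in range(max_attempts):
        try:
            return api_call()
        except throttle_exc:
            if attempt == max_attempts - 1:
                raise
            delay = (2 ** attempt) + random.uniform(0, 1)   # 1s, 2s, 4s, 8s ... plus jitter
            time.sleep(delay)

Backing off spreads retries out over time instead of hammering an already-throttled endpoint, which would only push you further past the limit.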

Cost or billing issues
With a subscription model, you often do not have to pay for the resources you have used until the billing period ends (typically a month). If you use a subscription model and aren't careful, you might end up with a hefty payment at the end of the billing term. Cost should not be underestimated. Because of a sudden cloud vendor charge, several smaller firms have found themselves in an extremely tough financial situation. Even larger firms are occasionally taken aback by the high cost. Remember that billing occurs automatically, and in many circumstances, the bill is handled by a credit card charge. What happens if the transaction exceeds the credit card's spending limit? If the charge is declined, your cloud services may be immediately suspended. If your cloud resources are mission critical, a service freeze may result in much more money lost in revenue. The following are best practices for avoiding this problem:

● Use a separate credit card for your cloud bills.
● Set a reminder for yourself a few days before the billing cycle ends to check the cloud bill and the available credit on your credit card.
● Pay down or pay off your credit card balance on a regular basis.

Many applications have dependencies on others. As you go through your change management process of applying patches, hot fixes, upgrades, and feature enhancements, you may encounter mismatch problems between applications. The application vendor may have done regression tests as part of its quality assurance processes and then posted the test results for you to review. If there are any problems with versioning, you can tell well ahead of time and plan accordingly. Investigate all documentation and carefully read all release notes. A lot of planning up front here can save you from many problems later. However, if you make changes to your cloud deployment and then your systems no longer communicate with each other or to the outside world with browser issues, for example, then you may have a critical situation that requires immediate troubleshooting. Collect all log files and any symptoms and document what was changed and what the corresponding symptoms were. Make sure that all interested parties, such as the virtualization, networking, security, storage, or application teams, are involved in the troubleshooting. Also, check that all of the required features have been enabled and that their licensing requirements are valid.

5.4 Given a scenario, troubleshoot connectivity issues.
5.4.1 Network security group misconfigurations
Remember that a network security group is intended to defend a network (VPC). These groups serve as virtual firewalls for your cloud network. They are not, however, intended to safeguard the entire VPC, but only the virtual machine (VM) resources. To protect the whole VPC, you may build network ACLs (access control lists).

ACL
A misconfigured network ACL might result in either blocking or allowing access to resources in a way that is not intended. Remember that the sequence of the rules is critical, and not putting the rules in the proper order might lead to unexpected results. For example, imagine you wish to enable all systems from the 192.168.100.0/24 network to connect to the subnet, but not machine 192.168.100.200 from that subnet. To achieve this purpose, a rule set like the one sketched below would be appropriate.
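
One hedged way to express that ordering in AWS, where network ACL rules are evaluated in ascending rule-number order, is the boto3 sketch below. The ACL ID is a placeholder, and the deny entry is deliberately given the lower rule number so it is matched first.

import boto3

ec2 = boto3.client("ec2")
acl_id = "acl-0123456789abcdef0"   # placeholder network ACL ID

# Rule 100: deny the single troublesome host first (lower rule numbers are evaluated first).
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Egress=False,
    Protocol="-1", RuleAction="deny", CidrBlock="192.168.100.200/32",
)

# Rule 200: then allow the rest of the 192.168.100.0/24 network.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=200, Egress=False,
    Protocol="-1", RuleAction="allow", CidrBlock="192.168.100.0/24",
)

If the rule numbers were reversed, the broad allow would match first and the deny for 192.168.100.200 would never be evaluated.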

Inheritance NACLs from other rule sets may be inherited in some instances. If this is the case, thoroughly comprehend the rules provided by the inherited rule set. Understand that modifications to the inherited rule set might have an influence on your entire rule set if it is genuine inheritance (sometimes it is simply a copy of an existing rule set). Because inherited rule sets are often implemented first, there are additional issues to consider when utilizing this functionality.


Validating Firewall and Proxy Configurations
After new firewall rules have been implemented, your change management plan will call for you to validate that the rule is operating as expected. A firewall rule will have a protocol, source and destination IP addresses or domain names, application port numbers, a permit or deny statement, and usually a firewall source and destination zone. Firewall rules are compared in a top-down operation. When there is a match, the rule is acted upon, and further rule processing is stopped. Be aware that this order of operations can produce unexpected results. For instance, if you add a new rule at the bottom of the configuration that permits all Internet traffic to connect to a File Transfer Protocol (FTP) server on one of your public-facing cloud servers, check to see whether there is another rule above this one preventing FTP access to your site that gets checked before your new rule. In this case, the FTP deny rule would be seen first, and the connection would be blocked. Since no more rule processing would be done, your permit FTP rule will never be seen. The solution would be to place the FTP permit higher in the search sequence than the deny rule. Enabling logging can provide invaluable troubleshooting information when troubleshooting a firewall, as the data provided in the logs will specify exactly what the firewall is processing and give details on the rules it referenced.

A proxy is a device that is inserted into the middle of a traffic flow; it terminates the connections in both directions and monitors the traffic between the source and the destination. Proxy systems are found in enterprise environments and in load balancers in the cloud. When troubleshooting proxy issues, make sure that the browser or client is not misconfigured for the proxy settings and has a valid IP address, subnet mask, and default gateway. Verify that the URL of the site you are trying to reach is not down and that there are no ongoing network issues that may be the root cause of the problem. Check the log files for the device performing the proxy function, which may indicate whether the issue is local to the server. Test from another computer on the same subnet and from a different subnet that connects through the same proxy server. Verify whether the problems are consistent or intermittent. By gathering the background information about the proxy issue, you can then work to determine where the problem resides and develop a test and resolution plan.

5.4.2 Common networking configuration issues
A great variety of things can go wrong in any network system. In this part, you'll learn about some of the most typical networking configuration concerns. Hundreds of different faults might create difficulties in any of the network types listed in this section; however, the goal of this exam objective is to focus on "common" networking difficulties that stem from "configuration," rather than on all networking issues that might originate from other types of problems.

Peering
Within your cloud network, you may come across instances when you need resources in one VPC to interact with resources in another VPC. This form of communication is not usually enabled by default. If the resources have public IP addresses, they may be able to communicate; however, this communication may be prevented by security group restrictions or network ACL rules. Because network packets are sent toward the Internet first, then back into the security group, the VPC, and finally the target resource, communication may experience latency. Peering (also known as VPC peering or virtual network peering) is the establishment of a network link between two VPCs. This connection enables resources in the two VPCs to communicate as if they were on the same network. As a result, resources can communicate over the cloud network rather than the Internet. A typical troubleshooting issue develops when only one VPC has been properly connected to the peering connection. Another issue to watch for is whether one of the VPCs is transferred to a different region, which would almost certainly require the establishment of a new peering connection.

Incorrect subnet
● Application servers are commonly grouped on the same subnet and by VLAN
● All devices can send information to each other
● The subnet mask determines the total number of hosts on the network
● If subnet masks don't match, you will get an address mismatch

In the cloud or a private data center, application servers are commonly grouped together in the same IP subnet and VLAN. By architecting the network in this manner, you ensure that the devices can send data directly to each other on the network and not have to traverse a router interface to a remote network. This decreases the network workload and increases performance since all communications remain local. For security reasons, when servers are grouped together in the same subnet, efficient firewall rules can be created to restrict and permit what traffic is allowed to and from the subnet. The IP address contains two sections. The first bits in the 32-bit address block identify the network number, and the remaining bits are the hosts, or devices, inside that network. The addressing is flexible and can be adjusted using what is known as a subnet mask. The mask determines the total number of hosts on each network. All hosts will contain this mask information to determine which network they are on. The subnet masks must exactly match, or you will have network connectivity problems with the incorrectly configured resources. Your documentation will contain your addressing scheme, show the subnets in use, and provide you with the correct subnet mask and address blocks. With this information, you can investigate the interface configurations and verify that the IP address, subnet mask, and default gateway are correct.
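
Python's standard ipaddress module can quickly confirm that an interface's address and gateway actually fall inside the documented subnet; the values below are examples.

import ipaddress

# Documented subnet and the interface settings you are verifying (example values).
subnet = ipaddress.ip_network("10.10.20.0/24")
host_ip = ipaddress.ip_address("10.10.20.57")
gateway = ipaddress.ip_address("10.10.20.1")

print(f"Subnet mask: {subnet.netmask}, usable hosts: {subnet.num_addresses - 2}")
print(f"Host in subnet?    {host_ip in subnet}")
print(f"Gateway in subnet? {gateway in subnet}")

If either check prints False, the interface configuration does not match the documented addressing scheme and should be corrected.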

Incorrect IP address
● IP addressing is required for devices to communicate on a network
● When deploying to the cloud, there is validation of IP address schemes
● IP addresses must be in the correct range for the subnet they are sitting on
● Each device must have a unique IP


IP addressing is required for all devices connected to the network to be able to communicate. When deploying a new presence in the cloud, part of the engineering, design, and architecture process includes creating an IP address scheme. At implementation time, a validation test will verify that the addressing is configured correctly and is operational. When IP addressing issues arise after a network has been deployed, troubleshooting should include asking whether there were any changes made recently. If so, investigate what was changed and whether that could have created the issues you are troubleshooting. IP addresses must be in the correct range for the subnet they are residing on. If they are out of range, they will not be able to communicate with devices inside or outside of the network. Also, some addresses in the subnet will be reserved for networking services. This is especially true for the default gateway's IP address, which, by convention, is the lowest number in the range. The block range and reserved IP addresses are integral parts of your documentation. One of the more common addressing issues occurs when two devices on the same network are configured with the same address. This is not valid, as each device must have a unique address. Just think of the confusion if the same phone number were assigned to more than one person! The symptoms of duplicate IP assignments are intermittent communications. Using the ping utility, verify that you are getting a 100 percent response rate from a host on the same subnet. If it is anything less, investigate a duplicate IP address issue. You can also look at the ARP table and see if a MAC address mapped to an IP address changes to another address. This indicates that an IP address is assigned to two or more devices and is an invalid configuration.

Incorrect IP space Many cloud resources that are issued with an IP address employ dynamic assignment through a DHCP server. Normally, the cloud provider provides this server (or, for on-premises solutions, managed by your organization). Static IP assignment, in which you manually configure the IP address, subnet mask, and other network parameters, may be appropriate in some cases. This setup is necessary to guarantee that servers have a stable IP address. However, when you manually adjust IP settings, you face the danger of making mistakes. You may enter an incorrect address, subnet, IP space, or other networking setup information. This may cause connection troubles.

Routes
A route is how a resource communicates with resources outside of the local network. This connection is routed through a router (also called a gateway). A poorly defined route can prevent network packets from being sent beyond the local network. Make use of the route command.

Default
In rare circumstances, a resource may be in a network with many routers linked to it.

Static
An administrator configures static routing to route network traffic through certain routers or gateways. If a static route is specified, it does not change until the administrator explicitly modifies the settings.


Dynamic
Dynamic routing allows routing paths to change dynamically based on algorithms that determine the optimum route to take. While more complex than static routing, dynamic routing is more resilient, since losing connectivity to a router results in the selection of a new routing path.

Static routes
If you used the Google Cloud Console to establish a Classic VPN tunnel utilizing either policy-based routing or a route-based VPN, the console automatically generated static routes for the remote traffic selectors. Additional details can be found in the documentation on Cloud VPN networks and tunnel routing.

Dynamic routes
Dynamic routes are managed by Cloud Routers in the VPC network. Their destinations always represent IP address ranges outside your VPC network, received from a BGP peer. VPC networks do not consider destinations provided by Cloud Routers when these destinations meet either of the following conditions:

● The destination precisely matches an IP address range of a subnet.
● The destination is encompassed by (has a more specific subnet mask than) an IP address range of a subnet.

It is acceptable for a dynamic route's destination to contain (to have a shorter subnet mask than) a subnet IP address range. In that case, packets are only sent to the dynamic route's next hop if they do not fit within the subnet route's destination. The broadest destination possible is 0.0.0.0/0.

System-generated default route
Upon creating a VPC network, it automatically comes with a default route generated by the system. This default route serves two purposes:

● It defines the path out of the VPC network, including the path to the internet. In addition to having this route, instances must meet additional requirements if they need internet access.
● It provides the standard path for Private Google Access.

The system-generated default route is assigned a priority of 1000. It is designed to have the most extensive possible destination (0.0.0.0/0). Google Cloud employs this route only when a packet does not match a more specific destination route. For more details on how destination specificity and route priority impact route selection, please refer to the documentation.

In scenarios where you wish to completely disconnect your network from the internet or need to substitute the default route with a custom one, you can delete the default route:

● If your intention is to direct internet traffic to a different next hop, you can replace the default route with a custom static or dynamic route. For example, you can replace it with a custom static route that uses a proxy VM as the next hop.
● If you remove the default route and do not replace it, packets destined for IP ranges that are not covered by other routes are dropped.

Firewall
Incorrectly administered micro-segmentation refers to the implementation of micro-segmentation security measures in a way that is flawed, ineffective, or inconsistent with the intended security goals. Micro-segmentation is a security strategy that involves dividing a network into smaller segments or zones and applying strict access controls and security policies to each segment. When micro-segmentation is administered incorrectly, it can lead to various security issues, such as:

1. Overly permissive rules: One common mistake is setting up access rules that are too permissive, allowing unnecessary communication between segments. This defeats the purpose of micro-segmentation, as it fails to isolate and protect critical assets.

2. Inadequate rule testing: Failure to thoroughly test and validate access rules can result in gaps or misconfigurations that leave vulnerabilities open. Without proper testing, security administrators may overlook critical rules or make errors in rule creation.

3. Lack of monitoring: Micro-segmentation requires ongoing monitoring and management to ensure that security policies remain effective. Failure to monitor network traffic and adjust rules as needed can lead to security blind spots.

4. Misclassification of assets: Incorrectly identifying and classifying assets within segments can lead to misconfigured security policies. If assets are not properly categorized, they may receive inappropriate levels of access or protection.

5. Complexity overload: Overly complex micro-segmentation policies can be difficult to manage and lead to errors. Simplicity and clarity in policy creation and enforcement are essential for effective micro-segmentation.

6. Lack of documentation: Proper documentation of micro-segmentation policies and configurations is crucial for maintaining security and troubleshooting issues. Without documentation, it can be challenging to understand and audit the network's security posture.

7. Neglecting updates: Failing to update micro-segmentation policies as the network evolves, new assets are added, or new threats emerge can render existing policies ineffective.

To avoid incorrectly administered micro-segmentation, organizations should invest in proper planning, testing, monitoring, and documentation. Regular security audits and assessments can help identify and rectify any misconfigurations or weaknesses in micro-segmentation implementations. Guidance for validating individual firewall rules and proxy settings appears under "Validating Firewall and Proxy Configurations" in section 5.4.1.


Network address translation (NAT) - VPN - Source - Destination
To access the Internet, at least one public IP address is needed, but we can use private IP addresses within our private network. The idea of NAT is to allow multiple devices to access the Internet through a single public address. To accomplish this, it is necessary to translate private IP addresses into public IP addresses. Network Address Translation (NAT) is a method by which one or more local IP addresses are transformed into one or more global IP addresses, and vice versa, to enable local hosts to access the Internet. Additionally, it involves the translation of port numbers, where the port number of a host is substituted with another port number in the packet destined for routing to its final location. Subsequently, corresponding entries of IP addresses and port numbers are recorded in the NAT table. NAT typically functions on routers or firewalls. Depending on the cloud services you have implemented, you may be responsible for many network-related technologies such as IP addressing, virtual private networks (VPNs) or direct interconnections, local routing configurations, and network services such as Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP).

Network Access Tunneling and Encryption
When interconnecting networks, especially over a public network such as the Internet, it is mandatory that you encrypt all traffic from the source to the destination. Encryption can be a complex topic, as you have learned throughout this book. Use the most current encryption types and transforms, such as AES-256, to protect your data. Cloud providers commonly offer encrypted VPN connections from your facility to your cloud fleet of services. Use their best practices and follow the guidelines that they publish online. You will need this information when configuring and testing a VPN connection. Sometimes the terminology can be confusing; it is essential that you read the documentation to make this work.

Load balancers
● Methods: internal HTTPS, external HTTPS, TCP, UDP
● Headers: help determine the original host
● Protocols
● Encryption: an SSL load balancer performs encryption/decryption
● Backends: describe how traffic is distributed
● Frontends: use a virtual IP to accept requests from a client

Backend service-based network load balancers (currently in Preview) support a selection of load-balancing methods; target pool-based network load balancers round-robin among all instances in the target pool or backup pool.

In practical terms, being "regional" implies that even though the backend service operates globally, opting for the Standard Tier necessitates regional external forwarding rules and external IP addresses. Moreover, the backend instance groups or network endpoint groups (NEGs) linked to the global backend service must be situated in the same region as the forwarding rule and IP address. For additional details, please refer to the provider's documentation.

Global versus regional load balancing
Use global load balancing when your backends are distributed across multiple regions, your users need access to the same applications and content, and you want to provide access by using a single anycast IP address. Global load balancing can also provide IPv6 termination. Use regional load balancing when your backends are in one region and you only require IPv4 termination.

External versus internal load balancing
Google Cloud load balancers can be divided into external and internal load balancers:

● External load balancers manage incoming internet traffic to your Google Cloud Virtual Private Cloud (VPC) network. Global load balancing necessitates the use of the Premium Tier of Network Service Tiers, while regional load balancing can operate with the Standard Tier.

● Internal load balancers are responsible for distributing traffic to instances within Google Cloud.

● BACKEND: A backend service defines how Cloud Load Balancing distributes traffic. It encompasses various configuration settings such as the protocol for connecting to backends, distribution and session parameters, health checks, and timeouts. Additionally, it includes SSL Proxy Load Balancing settings.

● FRONTEND: A front-end load balancer is a server that utilizes a virtual IP address to accept client requests, determines the most appropriate reverse proxy server using a specified scheduling algorithm, and forwards requests to the corresponding reverse proxy server.

DNS records
● Many different network services in the cloud
● Load balancers, firewalls, IDS/IPS, DNS, DHCP
● Can query with nslookup or dig

Domain Name System
There are many different network services in the cloud. These can include load balancers, firewalls, IDSs/IPSs, and DNS and DHCP devices, to name a few. DNS will need regular configuration updates as hosts and devices are added to, and removed from, the network. "A" records that map the domain name to the IP address need to be changed every time a device's name is modified, deleted, or added. Although numerous cloud providers offer DNS services, the routine management of DNS is typically your responsibility. Troubleshooting will consist of verifying that the current configuration of the DNS records matches the addressing in each server. If there is a mismatch, devices trying to reach a host in your cloud fleet by its domain name may be offered an incorrect IP address, which causes the requesting device to contact the wrong server. Use the nslookup or dig utility to query the DNS server and verify that it is returning accurate information for the record you are troubleshooting. Also, to make sure that the DNS servers are reachable on the network, run a standard ping test. If the ping test fails, a traceroute to the server will show where along the path the traffic is stopping.
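As a quick illustration, the following commands query a DNS server directly for an A record; the hostname and server address are placeholders, so substitute the record and resolver you are troubleshooting.

# Query a specific DNS server for an A record (Linux)
dig @10.0.0.53 app01.example.com A

# Equivalent query on Windows or Linux
nslookup app01.example.com 10.0.0.53

If the returned address does not match the server's actual IP configuration, correct the A record or the host configuration, then re-run the query to confirm the fix.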

VLAN/VXLAN/GENEVE
• Layer 2 segmentation that takes physical switches and divides them into separate logical switches
• IEEE 802.1Q (dot1q) defines VLAN tagging
• Maximum of 4,094 VLANs per system
• Many VLANs are reserved
• When hyperscale cloud arrived, it was clear VLAN was not sufficient
• VXLAN – encapsulation method that takes an Ethernet frame and encapsulates it in an IP packet using UDP

VLANs and VxLANs are layer 2 segmentation standards that allow you to take physical switches and logically divide them into many separate or individual logical switches. The IEEE 802.1Q standard, often referred to as dot1Q, defines VLAN tagging and allows a maximum of 4,094 VLANs per system. This may seem like a large number, but many VLANs are reserved, and often the switches in the data center may not be able to support this maximum number. Dot1Q works by inserting a header into the Ethernet frame that identifies the VLAN that the frame belongs to. Also included in the header is a priority field that is used for QoS calculations.


When hyperscale cloud computing arrived, it was clear that the standard VLAN limit of 4,094 was insufficient. Even if every customer used just one VLAN (hint: they use many), the limit would allow only around 4,000 customers! To increase the VLAN count, VxLAN was introduced. Also, traditional VLANs were intended to stay local to the data center and not be "stretched" across to a remote data center. VxLAN radically changes the way VLANs transit a network. VxLAN is an encapsulation method. It takes an Ethernet frame and encapsulates it in an IP packet using User Datagram Protocol (UDP) as its transport. Another name for this is MAC-in-IP encapsulation (some call it MAC-in-UDP) because the layer 2 frame is left untouched and wrapped in a normal IP/UDP packet. The VxLAN header is 8 bytes, or 64 bits, and the VxLAN Network Identifier (VNI) uses 24 bits of that header. The VNI is synonymous with the traditional VLAN identifier discussed earlier; in the VxLAN case, however, the VNI scales to more than 16 million segments, and these are private segments too. Show me a cloud provider that doesn't want 16 million segments belonging to millions of customers! The traditional VLAN ID maps to the VxLAN ID, and for ease of use some networks map the VLAN number to a similar, higher number in VxLAN. What matters is that the VxLAN ID is unique and that each switch respects the mapping when adding or removing the encapsulation. This helps cloud providers significantly, as there are more than 16 million VxLAN IDs, which solves the VLAN ID starvation problem discussed earlier. GENEVE (Generic Network Virtualization Encapsulation) is a newer, more flexible encapsulation standard that serves the same purpose. Automation, routing, and specialized monitoring systems track VLAN/VxLAN configurations and operations. It is important to know that layer 2 VLANs will traditionally map to a layer 3 IP subnet. When troubleshooting, make sure that these mappings are correct and that the links, or trunks, that interconnect network switches have the VLANs configured on them to enable them to pass traffic on that VLAN from one switch to another. VxLAN troubleshooting will almost always fall to the cloud provider as this is considered infrastructure. However, by analyzing log files, vendor monitoring applications, and command-line utilities, you can uncover where the issue originates and begin troubleshooting.
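To make the encapsulation idea concrete, here is a minimal sketch of creating a VXLAN interface on a Linux host with the iproute2 tools. The interface name eth0, VNI 100, and addresses are assumptions for illustration only; a real deployment would also need peer, multicast, or controller configuration to forward traffic between hosts.

# Create a VXLAN interface with VNI 100, carried over UDP port 4789 (the IANA default)
ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789
# Assign an address on the overlay segment and bring the interface up
ip addr add 172.16.100.1/24 dev vxlan100
ip link set vxlan100 up

The 24-bit id value here is the VNI described above; the original Ethernet frame is wrapped in an IP/UDP packet before it leaves eth0.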

Proxy •

Device inserted in middle of traffic flow and terminates connections in both directions



Found in enterprise environments and load balancers in the cloud



Verify the proxy configuration and confirm that the target URL is not down



Check log files for Proxy Configuration

A proxy is a device that is inserted into the middle of a traffic flow; it terminates the connections in both directions and monitors the traffic between the source and the destination. Proxy systems are found in enterprise environments and in load balancers in the cloud. When troubleshooting proxy issues, make sure that the browser or client is not misconfigured for the proxy settings and has a valid IP address, subnet mask, and default gateway. Verify that the URL of the site you are trying to reach is not down and that there are no ongoing network issues that may be the root cause of the problem. Check the log files for the device performing the proxy function; they may indicate whether the issue is local to the server. Test from another computer on the same subnet and from a different subnet that connects through the same proxy server. Verify whether the problems are consistent or intermittent. By gathering the background information on the proxy issue, you can then work to determine where the problem resides and develop a test and resolution plan.
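One quick way to separate a proxy problem from a destination problem is to request the same URL with and without the proxy. This sketch uses curl; the proxy address 10.0.0.5:3128 and the test URL are placeholders.

# Request the page directly, bypassing any configured proxy
curl -v --noproxy '*' https://www.example.com/
# Request the same page through the suspected proxy
curl -v -x http://10.0.0.5:3128 https://www.example.com/

If the direct request succeeds but the proxied request fails, focus on the proxy configuration and its logs rather than on the destination site.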

Maximum transmission unit (MTU) •

Maximum Transmission Unit



The maximum transmission unit (MTU) for standard Ethernet corresponds to a frame of 1,518 bytes (a 1,500-byte payload).



Defines the largest Ethernet frame size that can be transmitted into the network.



Frames that are larger than the MTU are fragmented, or divided, into multiple frames to support the standard frame size.



Jumbo frames of up to 9,000 bytes allow for higher performance by reducing overhead.



Troubleshoot MTU- look at interface statistics for evidence of fragmentation.

The maximum transmission unit (MTU) defines the largest frame size that can be transmitted onto the network. For standard Ethernet, the frame is 1,518 bytes, which corresponds to a 1,500-byte payload. Frames that are larger than the MTU are fragmented, or divided, into multiple frames to fit the supported frame size. A jumbo frame is any Ethernet frame that exceeds the standard size. Using a larger Ethernet frame size than the standard Ethernet MTU inside the data center is often more efficient because it reduces networking overhead. Jumbo frames enable higher network performance by minimizing overhead in each Ethernet frame and using fewer, larger frames. Additionally, jumbo frames decrease the frequency of CPU interruptions for Ethernet traffic processing, as each jumbo frame can be up to six times larger than a standard frame. Jumbo frames are now common in the cloud and enterprise data centers and are extensively used for storage over LAN technologies such as Internet Small Computer Systems Interface (iSCSI) and Fibre Channel over Ethernet. Modern data center switches will usually support jumbo frames up to 9,000 bytes. To support jumbo frames in the network, you will need to enable this feature on the switches, network interface cards (NICs), and any other devices in the data path. To troubleshoot MTU issues, look at the interface statistics to see whether fragmentation is taking place. If a frame is larger than the MTU, a router interface may fragment it into smaller frames. This is not an optimal situation, and you would need to verify that all devices in the source-to-destination path are configured for jumbo frames. The most common jumbo setting on network devices is 9,000 bytes, which is the standard frame size for storage traffic over the network.
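A simple way to check the effective MTU along a path is to send pings that are not allowed to fragment. The destination address below is a placeholder, and the payload sizes account for the 28 bytes of IP and ICMP headers.

# Linux: 1,472-byte payload + 28 bytes of headers = 1,500-byte packet, do-not-fragment set
ping -M do -s 1472 10.0.0.20
# Linux: test a 9,000-byte jumbo path (8972 + 28 = 9000)
ping -M do -s 8972 10.0.0.20
# Windows equivalent: -f sets do-not-fragment, -l sets the payload size
ping -f -l 8972 10.0.0.20

If the larger ping fails with a "message too long" or "packet needs to be fragmented" error while the smaller one succeeds, a device in the path is not configured for jumbo frames.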

Quality of service (QoS) •

Quality of service (QoS) is a general networking term for the ability of the network to provide differentiated services based on information in the Ethernet packet



Can take application needed into consideration



Differentiation of services, video, voice



TCP/IP headers have fields that tell networking devices how QoS is configured



You can control QoS inside a datacenter but not over the Internet.


When a network becomes congested, you can configure quality of service (QoS) parameters that allow for the prioritization of certain traffic types over others. This section covers QoS as required for the Cloud+ exam and as it pertains to cloud computing in general.
Quality of Service (QoS)
Many different types of traffic pass through your cloud data network, and not all of this traffic is created equal. Different types of applications have very different network requirements. Actions such as transferring files or sending an e-mail message are not delay-sensitive and have no issues with being buffered because of congestion as they traverse the network from source to destination. However, other types of traffic are susceptible to latency and jitter across the network. Collaboration applications send voice and video over the same network as all other traffic, and if voice is delayed or a video conference is losing data and suffering long delays in the network, the quality will suffer and could ultimately cause the session to fail. The concept of quality of service addresses these issues. Quality of service (QoS) is a general networking term for the ability of the network to provide differentiated services based on information in the Ethernet packet. For instance, real-time voice and video traffic is highly time-sensitive, storage traffic demands a lossless connection, while e-mail and file transfers are less affected by network delays. Using QoS, the network can be configured to take the various application needs into consideration and determine the optimal ordering of traffic through the network. TCP/IP headers have fields that tell the networking devices how their QoS values are configured. Routers can also be configured to look at the port numbers or IP addresses to configure QoS in a network. The access layer switches can either honor the QoS settings in a frame or impose their own settings by modifying or stamping QoS values into the frame. Each device that the frame passes through must be configured to honor the QoS settings inside the frame, so the configuration can become complex. You can control QoS inside the data center but not over the Internet, since the Internet backbone is beyond your administrative control. Also, if the network links are not saturated, then there is little need for QoS since there is no contention and ample bandwidth to transmit all traffic. However, when the links become saturated, the QoS configurations can prioritize certain traffic flows over others. For example, the long file transfers that take place during a backup can involve a large amount of data moving across the network, but delivery is not time-sensitive. Also, when you send an e-mail, you are not concerned about whether it is sent in real time. However, if you are talking on a telephone that uses Voice over IP, you will most certainly notice delays of more than 250 milliseconds. If you are meeting over a video conference, jitter or latency in the network can cause the screen to freeze or drop. If there is sufficient network capacity, there is available bandwidth for all traffic types. However, when the network interconnections reach 100 percent traffic load, QoS is needed to arbitrate which traffic flows will take priority over the others. In normal operations, traffic is forwarded on a first-come, first-served basis. QoS may be configured, but there is no current need to prioritize traffic until the network saturates.
You must also consider that the background control of the network, such as the routing protocols and the management applications, must command a high priority because if they are starved for bandwidth, the whole network could very well fail.


QoS can be configured at each step of the network to define the priority of the traffic being sent across the network. There are many variations of QoS, and it can be a quite complex topic. Understand that each packet has information in its headers that you can use to make QoS decisions, and the network devices can be configured to act on this information and favor one type of traffic over another.
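As a small illustration of how QoS markings end up in packet headers, the following Linux commands mark SIP signaling traffic with the Expedited Forwarding DSCP class and then display queue statistics. The interface name and port are assumptions, and in most clouds the provider's own QoS policy, not yours, governs the backbone.

# Mark outbound SIP (UDP 5060) packets with DSCP class EF (Expedited Forwarding)
iptables -t mangle -A OUTPUT -p udp --dport 5060 -j DSCP --set-dscp-class EF
# Show per-queue statistics on the egress interface to see whether traffic is prioritized or dropped
tc -s qdisc show dev eth0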

Time synchronization issues
• Cloud services rely on accurate clocks, usually synchronized with the Network Time Protocol (NTP)
• Clock drift or skew between systems can break authentication, certificate validation, and log correlation
• Verify that each host points to a reachable, correct time source

Accurate time is easy to overlook but is critical in distributed cloud deployments. Hosts normally synchronize their clocks with the Network Time Protocol (NTP) or with a time service offered by the cloud provider. If a system's clock drifts, time-based authentication mechanisms can reject logins, certificates can appear to be expired or not yet valid, scheduled jobs can run at the wrong time, and log timestamps from different systems can no longer be correlated during troubleshooting. When you suspect a time synchronization issue, confirm that the configured time source is reachable, that the time zone settings are correct, and that the time service on each host is running and actually synchronizing.

Troubleshooting Cloud Networking Issues
Networking is of paramount importance in the cloud, much like in traditional data centers and corporate setups. A malfunctioning network can lead to the degradation or failure of connected services. Given that networks are the backbone of any organization, it's imperative to promptly and efficiently address any network-related problems as they emerge. Cloud providers will be responsible for the core network in their facilities. This includes the connections to the Internet and the high-speed fiber links that interconnect cloud zones and regions. The backbone switching and routing inside the data centers will also be managed by the cloud companies. The separation of responsibilities will be outlined in the service level agreement, with associated uptime commitments. Depending on the cloud services you have implemented, you may be responsible for many network-related technologies such as IP addressing, Virtual Private Networks (VPN) or direct interconnections, local routing configurations, and network services such as Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP). In the all too likely event of a network impairment or outage, you must be able to determine where the problem is originating from and who has the ultimate responsibility for resolution. In this section, you will learn about identifying issues as they arise.

5.4.3 Network troubleshooting tools Ping •

Ping is based on the ICMP protocol



Verify that a device is available



Echo Request and Echo Reply


Ping is part of the TCP/IP family of protocols; it is used to verify that a device is available and reachable on the network and to get a reading of the response time at that moment in time. You can send a ping packet to a remote IP address and have it return, as shown in the following example. This will tell you that the IP stack is configured correctly and that you can reach IP devices on both the local and remote networks. Ping is a fundamental and frequently used troubleshooting tool for verifying network connectivity. Ping is useful in verifying that the remote device is reachable and finding out if there is any packet loss from the source to the destination and back. Also, ping will show the network delays for that moment in time.
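The addresses below are placeholders, but the commands show the basic check described above; the reply lines report round-trip time and any packet loss.

# Linux/macOS: send four echo requests to a host on the local subnet
ping -c 4 192.168.1.10
# Windows sends four echo requests by default
ping 192.168.1.10

Replies confirm that the IP stack is configured correctly and that the target is reachable; "Request timed out" or 100 percent packet loss points to an addressing, routing, or filtering problem.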

tracert / traceroute
tracert – Windows
traceroute – Linux
• Used for network path troubleshooting
• Determines whether routing is working


tracert/traceroute
The tracert/traceroute utilities are useful for network path troubleshooting. The traceroute utility displays the routed path a packet of data takes from source to destination. You can use it to determine whether routing is working as expected or whether there is a route failure in the path. In the following example, a trace details the path taken out of the local network and across the Internet, along with the latency for each router hop on the way to a remote destination. If a DNS name is associated with an IP address, the traceroute utility can resolve it, helping you identify each device along the path the trace takes.
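A minimal sketch of the commands, with a placeholder destination:

# Windows
tracert www.example.com
# Linux
traceroute www.example.com

Each numbered line of output is one router hop; rows of asterisks, or a hop after which all responses stop, indicate where along the path traffic is being dropped or filtered.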

flushdns / ipconfig / ifconfig / ip
ipconfig (and ipconfig /flushdns, which clears the local DNS resolver cache) – Windows
ifconfig, ip – Linux

Command line utilities to verify and configure local network interfaces.



Can be used IPv4, IPv6



Ip command


Ipconfig on Windows and ifconfig on Linux are command-line utilities used to verify and configure the local network interfaces. In the following example, this workstation is running both IP versions 4 and 6, and the addressing is provided in the command output. These utilities’ output can be expanded by using switches to obtain specific and detailed information. These are important troubleshooting tools to run to verify that the network interfaces are configured as expected. ifconfig is the Linux utility to view and change network interface configurations.
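Typical invocations look like the following; interface names and addresses will vary by system.

# Windows: full interface details, then clear the DNS resolver cache
ipconfig /all
ipconfig /flushdns
# Linux: show addresses with the legacy and the modern utility
ifconfig -a
ip addr show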

nslookup/dig
nslookup – Windows and Linux
dig – Linux only
• Command-line utilities used to resolve hostnames to IP addresses by querying a DNS server
• Work with both IPv4 (A) and IPv6 (AAAA) records


nslookup/dig
nslookup and dig are command-line utilities used to resolve hostnames to IP addresses using a DNS server. nslookup is available on both Windows and Linux, while dig is the Linux-native equivalent. If you need to learn the IP address of a domain, use these applications to resolve the DNS name to the IP address, as shown below. dig queries a DNS system for a given domain name and returns the corresponding IP address; while dig has extensive command options, even the basic command returns a remarkable amount of information.
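The domain and resolver below are placeholders; substitute the record you are troubleshooting.

nslookup www.example.com
dig www.example.com
# Ask a specific DNS server and print only the answer section
dig @10.0.0.53 www.example.com +noall +answer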

netstat/ss
netstat – Windows and Linux
ss – Linux only
• netstat displays active connections, listening ports, and per-protocol statistics
• Works with both IPv4 and IPv6
• The ss command displays statistics for TCP, UDP, RAW, and other socket types


netstat
netstat is a network statistics utility found on both Windows and Linux workstations and servers. You can use netstat when troubleshooting to see which network connections are open to remote applications, to view detailed protocol information, to see addresses used both locally and remotely, and to determine which state the TCP connections are currently in on the device. There are many options for the netstat utility, including per-protocol statistics; by combining options, such as netstat -s -p tcp, you can get granular and detailed output. The ss command-line utility can display stats for the likes of PACKET, TCP, UDP, DCCP, RAW, and Unix domain sockets.
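A few representative invocations:

# All connections and listening ports, numeric addresses (Windows and Linux)
netstat -an
# TCP statistics only (Windows syntax)
netstat -s -p tcp
# Linux: all TCP and UDP sockets, numeric, using the modern ss utility
ss -tuna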

Route
route – Windows and Linux
route print – shows the routing table on Windows

Route command assists with troubleshooting network reachability issues.



Local routing tables, not the network routing table


route The route command-line utility shown in the following example can assist you in troubleshooting network reachability issues. Use this utility when troubleshooting to look at a local workstation’s or server’s local routing tables. This is local to the device only and does not give you visibility into the actual network routing table.
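For reference, the commands differ slightly by platform:

# Windows: display the local routing table
route print
# Linux: display the local routing table
ip route show

netstat -rn also prints the local routing table and works on both platforms.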



ARP Address Resolution Protocol

ARP command- Windows and Linux •

Map IP to MAC address


ARP
Have you ever wondered how a workstation can find a device on a network if it does not have the complete addressing information needed to reach that device? If it is a TCP/IP device (as just about everything is nowadays), it will use the Address Resolution Protocol (ARP). ARP is the protocol that determines the mapping of an IP address to the physical MAC address on a local network. By using ARP, all devices on the LAN build a table of IP-to-MAC address bindings. For example, a workstation may need to communicate with a server at IP address 192.168.1.123. If it is in the same VLAN, the workstation will need to know the server's physical, or MAC, address to construct an Ethernet frame with accurate source (workstation) and destination (server) MAC addresses. The workstation will send out an ARP broadcast frame to all devices in the VLAN requesting the MAC address of the 192.168.1.123 server. All devices on the local LAN will process the ARP request, but only the device that matches the 192.168.1.123 IP address will respond with its hardware address. When each device on the segment receives and looks into the ARP packet, it effectively asks, "Who has IP address 192.168.1.123?" If the receiving device is configured with that address, it replies with "That is my IP address, and my Ethernet MAC address is attached." Communications can then proceed because the device that sent the ARP packet now has all the information it needs to communicate with the remote device. Address resolution is constantly running in the background and is rarely seen unless you have a packet sniffer attached to the network. To verify that the network or host you are logged into can see the remote device's IP/MAC address, check the ARP tables when troubleshooting. The arp command-line utility shows the local ARP table.
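The same command works on Windows and Linux; the entries shown will of course be specific to your network.

# Display the local ARP (neighbor) table
arp -a
# Linux alternative using iproute2
ip neigh show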

Curl •

CURL- Command line tool to transfer data to or from a server using any of the supported protocols.



curl provides a wide array of valuable features, including support for proxies, user authentication, FTP uploads, HTTP posting, SSL connections, cookies, resuming file transfers, Metalink, and numerous others. As you'll discover below, the extensive feature set might leave you pleasantly surprised.

curl is a command-line tool used to transfer data to or from a server using any of the supported protocols (HTTP, FTP, IMAP, POP3, SCP, SFTP, SMTP, TFTP, TELNET, LDAP, or FILE). curl is powered by libcurl. This tool is preferred for automation, since it is designed to work without user interaction. Syntax: curl [options] [URL...] The most basic use of curl is typing the command followed by the URL: curl https://www.geeksforgeeks.org This displays the content of the URL on the terminal. The URL syntax is protocol-dependent, and multiple URLs can be written as sets, such as: curl http://site.{one,two,three}.com URLs with a numeric sequence can be written as a range, as shown below.
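For example (the FTP host and filenames here are placeholders):

# Download file1.txt through file10.txt, saving each under its remote name
curl -O ftp://ftp.example.com/file[1-10].txt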


Packet capture / Packet analyzer
TOOLS:
• tcpdump
• TShark
• Wireshark
• Packet capture and protocol analysis are two separate functions

While packet sniffers have legitimate uses in monitoring and troubleshooting a network, they have also been widely used by attackers to gain unauthorized access to a network and steal information. A protocol analyzer captures and analyzes signals and data traffic over a communication channel.

tcpdump
• tcpdump enables a Linux system to capture network traffic, serving as a valuable tool for monitoring and troubleshooting. Often referred to as "sniffing," tcpdump permits users to define filters to select the specific traffic of interest for diagnostic purposes. Think of tcpdump as a command-line network analysis tool.
• Wireshark is the world's leading and most widely adopted network protocol analyzer. It provides a microscopic view of network activity, allowing you to observe precisely what is happening on your network.
• TShark is a network protocol analyzer as well, able to capture packet data from a live network or read packets from a previously saved capture file. It can either print decoded packet information to standard output or save the packets to a file. TShark's native capture file format is pcapng, which is also used by Wireshark and various other tools. By default, when no specific options are configured, TShark behaves much like tcpdump: it uses the pcap library to capture traffic from the first available network interface and presents a summary line for each received packet on standard output.
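A typical capture-and-analyze workflow looks like this; the interface name and port filter are assumptions for illustration.

# Capture HTTPS traffic on eth0 without resolving names, writing packets to a file
tcpdump -i eth0 -n port 443 -w capture.pcap
# Read the capture back at the command line, or open capture.pcap in Wireshark for a graphical view
tcpdump -n -r capture.pcap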

5.5 Given a scenario, troubleshoot common performance issues. 5.5.1 Resource utilization CPU The central processing unit (CPU) is a chip found on the main circuit board. It is inserted into a particular area known as a socket. As the name indicates, its role is to handle instructions issued by the operating system or programs running on the OS. The CPU is the computer's or server's brain. The CPU determines how quickly computational processes are executed. You won't have direct control over the CPU unless you have access to a dedicated server in a cloud environment. Instead, the hypervisor will communicate with the CPU and offer your virtual system with a vCPU. When you construct a virtual machine, one of the things that will affect the cost of running the virtual machine is the number of vCPUs provided by the hypervisor. This figure is calculated by multiplying the number of CPU cores by the number of threads per core. While "more is better" in the sense that more vCPUs result in a quicker virtual machine, you should also note that more costs more. Remember the goal of this chapter: "Given a situation, set the proper compute size for a deployment." This means you must select a vCPU count that will allow your apps to function at an acceptable pace without paying for an excessive number of vCPUs.

GPU Some apps demand a significant amount of computing power to show visuals. You could, for example, create an application that takes an architect's plan and provides complete representations of the structure. Video processing can also consume a significant amount of CPU resources. A GPU is a processor that is specifically built to handle graphics rendering data. This processor may not be required for a cloud-based virtual machine in some circumstances since you may not have any apps that show such data. In other circumstances, though, this may be a significant trait. As a result, several cloud providers now supply instance classes with huge GPU processing capacity.


Storage
For the CompTIA certification test, you should be familiar with several distinct storage system features. Compression, deduplication, thin provisioning, thick provisioning, and replication are among them. These characteristics are often classified into one of two categories:
Cost savings: When you utilize cloud storage, you only pay for the space that you use (in some cases, typically object storage, you also pay for the process of transferring the data, such as when you download data). You will pay less if you utilize less storage space.
Data loss prevention: One of the most valuable aspects of cloud storage is the ability to protect against data loss.

Network latency
Network latency represents the duration required for a network packet to traverse from its source to its destination. Higher latency values have a significant influence on service performance as well as user experience. Network delay has several components, including:
Transmission delay: the time it takes for a packet to be transferred onto the network from the sender's side.
Processing delay: a pause on the receiver's end. When the packet arrives, it must be processed (checked for errors, matched to the destination port, and so on) before it is handed to the system.
Queuing delay: after processing, the packet is placed in a queue and held there until the system is ready to use it. A long queue causes latency to rise.

Replication
The earlier discussion of backups focused on how critical timely backups are for a business to reach its RPO targets. That discussion covered only timely data backup, which is critical in circumstances where data is damaged or lost. But what if a complete instance or resource is lost? What if, for example, a virtual machine hosting a vital web server breaks and cannot be recovered? In these cases, you should have one or more duplicate instances in place so that you can use one as a fallback. You must replicate the active resource frequently enough to support your RPO in order to have a legitimate redundant resource. For example, if the important web server has an RPO of 6 to 12 hours, you must choose a replication mechanism that updates the redundant systems at least once every 12 hours.

5.5.2 Application
Application setup entails supplying resources to the application so that it can accomplish its job. This section covers two types of application configuration troubleshooting: memory management and service overload.

Memory management
Applications, like other computational activities, require memory to store information. When you install an application in a cloud environment, you also allocate it a certain amount of memory. However, this allocation may not match the demands of the application. Because most cloud environments have tools for monitoring an application's memory consumption, viewing the metrics for memory utilization is a good method to troubleshoot this type of issue. You may also configure alerts to be triggered when memory use reaches a certain threshold, giving you advance notice of possible problems.

Service overload
An application service overload can occur when an application consumes more CPU or vCPU resources than it is allotted. As with memory resources, most cloud environments allow you to monitor vCPU consumption. Debugging this issue is similar to troubleshooting a shortage of memory: determine whether the application can be adjusted to require fewer vCPU resources or whether additional vCPUs must be allocated.

5.6 Given a scenario, troubleshoot automation or orchestration issues 5.6.1 Account mismatches •

Many problems can be traced to process, planning, implementation, validation.



Issues tend to get worked out over time



Account mismatch- users and groups in the cloud console



If privileges don’t match what is needed to do their jobs

Many problems can be traced back to process, planning, implementation, and validation deficiencies. These issues tend to get worked out over time as you gain experience operating in a cloud environment. In the following sections, you will become familiar with some of the more common process and workflow issues. With a basic understanding of these issues, you should be able to deal with them more effectively should they ever arise.
Account Mismatch
Accounts include users and groups that are defined in the cloud management console. Once a user is defined, the user can either be placed into a group of other users with the same job function, such as storage operators, or be assigned privileges directly. Should these privileges not match what is required for the user or group of users to do their jobs, you may be called upon to troubleshoot the issue. You need to determine what rights are needed. Are they valid rights for these users and groups to assume, and are they correctly defined in the identity and access console in the cloud? Since this can be a security issue, it is important that you make sure all of the approvals are in place and then verify, via the console, that the accounts are correctly configured for their intended roles.

5.6.2 Change management failures •

What happens when two changes are approved for the same night and the network gets taken down at the same time patches are being installed?



CM requires a written plan because anything can and does go wrong.



After Action Reports

What happens when two changes are approved for the same night and the same time, but the network team takes its downtime at the same moment the server team is trying to install patches across the network? Well, nothing happens for the server team until the network engineers bring up the LAN so the servers can reach remote resources. This is just one of many examples of the hidden complexities of the change management process. There are many dependencies, and they all must be investigated to make sure that one operation does not have a negative impact on other changes planned at the same time. As you have already learned, change management requires a written plan that includes all contingencies, as well as participation in change review meetings to discuss upcoming changes. During the change window, you must be prepared to react to anything that can go wrong. For example, the server team must be able to react if the network is down and they cannot access the needed files. If the team is in the middle of upgrading the servers and the network goes down, there needs to be a contingency plan to roll back any of the changes if required. Afterward, after-action reports and reviews can be useful to discuss what went wrong, why, and what can be done to prevent the same issue from happening again.

5.6.3 Server name changes •

DNS and Server Name Changes- sometimes workflow problems are subtle



If you fail to make a DNS entry, the server will not be reachable unless you know its IP address



Nslookup utility can be handy for these problems

Sometimes workflow problems are subtle but important changes that need to be made and are easy to overlook. Forget one of these just once, and I guarantee you will always remember it going forward! DNS maps human-readable domain names to the IP addressing scheme used by the Internet. It's just like a phone directory that tells you Bob is at 555-1212. So if you, for example, change a server's IP address or add a new load balancer to your deployment, there will be a DNS change or addition. Since most services and users call the devices by their domain name, you must have defined the DNS entry. If you failed to make the DNS record entry, then the device will not be reachable unless you happen to know its IP address.


Keep this in mind when troubleshooting connectivity issues, primarily if there are timeouts or services that just never respond to connection requests. The nslookup utility really comes in handy for troubleshooting DNS problems; it allows you to query a DNS server using a command line where you enter the domain name and the DNS server replies with the IP address or an error message if there is no record. This allows you to see whether there is anything in the DNS server's database for that domain name or whether the information is inaccurate.

5.6.4 IP address changes •

Changing the IP addressing scheme requires careful planning and validation



DNS records, access control lists



Testing and Validation Process as well as how to back out change

IP Address Changes
Changing your IP addressing scheme requires careful planning and thorough validation after the changes have been implemented. You just learned about the importance of keeping the DNS records up to date. In addition to DNS, you must also verify that any access control lists that reference the old IP address are modified to reflect the new address. This is also the case for firewall rules and network routing tables. Also, the connected devices, such as servers or storage devices that have IP addresses assigned to their interfaces, will need to be reconfigured and tested. This project can often become a larger undertaking than expected. A carefully outlined change management plan will be invaluable in defining the steps required during a readdressing project, including the testing and validation process and, if needed, how to back out.

5.6.5 Location changes •

Moving a deployment from one location to another



Most cloud providers have multiple availability zones and can move services automatically from one to the other.



Common in the SaaS model



If you're deploying in-house applications, it's up to you.



Migrating between Service Providers and Deployment Models is most challenging.

Moving a deployment to another location can be completely transparent or a huge change management project! Most cloud providers have multiple availability zones in their cloud regions and can move services and servers automatically from one to the other should there be an outage. This level of resiliency is commonly automated in the SaaS model. If you are deploying your own applications, it is often up to you to design for changes. While every cloud provider is different, they all have methods and processes that allow you to change locations with little or no impact on your production operations. Should the plan be to move services from one region to another, then you can expect some downtime to shut down services in one region and bring them up in another. These types of changes should be well planned and tested prior to implementation to make sure all goes well. Possibly the most complex location change is migrating between service providers or deployment models. For example, moving from your company's internal data center to a public cloud's SaaS offering is a significant change. One of the advantages of the public cloud is the ability to turn up a test fleet of servers and then delete it when you are finished. You pay a minimal amount of money to build a test bed prior to deployment. You should take advantage of this flexibility and completely test your migration offline well in advance of making the changes. This allows you to fully plan and document location changes and helps to ensure a smooth transition.

5.6.6 Version/feature mismatch •

Many applications have dependencies on others.



CM process of hotfixes, upgrades, feature enhancements



Application vendor may have versioning problems.



Collect log files and symptoms

Many applications have dependencies on others. As you go through your change management process of applying patches, hot fixes, upgrades, and feature enhancements, you may encounter mismatch problems between applications. The application vendor may have done regression tests as part of its quality assurance processes and then posted the test results for you to review. If there are any problems with versioning, you can tell well ahead of time and plan accordingly. Investigate all documentation and carefully read all release notes. A lot of planning up front here can save you from many problems later. However, if you make changes to your cloud deployment and then your systems no longer communicate with each other or to the outside world with browser issues, for example, then you may have a critical situation that requires immediate troubleshooting. Collect all log files and any symptoms and document what was changed and what the corresponding symptoms were. Make sure that all interested parties, such as the virtualization, networking, security, storage, or application teams, are involved in the troubleshooting. Also, check that all of the required features have been enabled and that their licensing requirements are valid.

5.6.7 Automation tool incompatibility
Automation solutions have grown in popularity in recent years as automation has become increasingly crucial in the IT sector. Because of this growing popularity, there has been significant growth in the number of automation tools that perform a range of duties. The availability of many automation tools is advantageous because it provides a wide range of alternatives. This variety, however, poses a challenge in a cloud environment. Because cloud providers cannot support every tool, they frequently limit the available options. This implies that if you are migrating to the cloud and are already utilizing an on-premises automation solution, you may need to move to a new automation platform as well. Migrating to a new tool is a more difficult undertaking than it appears. It's important to remember that many of these automation solutions have very specialized functionality and configuration formats; Chef and Ansible, for example, use entirely different configuration languages and file layouts.

Deprecated features
When the developer of a tool, language, or utility decides to retire a feature, the developer frequently chooses to deprecate the functionality rather than completely remove it from the product. Deprecated means "we left it in there, but we no longer suggest using it, and we retain the right to delete it in a future version." A deprecated feature may be scheduled for removal in the next version, but in other circumstances it may remain in the product for many years. In other words, it's critical to recognize when a feature you're using has been deprecated. By examining a product's release notes, you can identify which features are deprecated and devise a strategy for avoiding them.

API version incompatibility
The APIs used to interact with a server, like any other software component, will evolve over time. This is usually caused by changes in the server software. New APIs are introduced when new functionality is added. When a feature is modified, a new version of the API is made available. When a feature is deprecated or withdrawn from the server, the API is either updated (deprecated features frequently produce a warning when used) or removed. It is critical to keep up with these changes, just as with any other software dependency. Coverage of API changes should be included in the server's release notes. Review the release notes and ensure that your API requests target the proper version of the server's APIs to avoid problems.
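As a purely hypothetical illustration, the endpoint, path, and version below are invented; the point is that the requested API version is explicit, so it can be checked against the release notes when something breaks.

# Request a resource from version 2 of a hypothetical REST API
curl -s -H "Accept: application/json" https://api.example.com/v2/instances
# The same request against /v1/ may fail or return a different schema once v1 is deprecated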

Automation troubleshooting is difficult



API version incompatibility



Deprecated features

What could possibly go wrong with a fully automated system where just the click of an icon unleashes a host of cloud automation scripts? Well, many things can go wrong! Not only that, automation often hides the complexity from the user and does not always report results to the person using these automated services. The ability to troubleshoot automation-related problems comes down to whether you control the process or another party, such as the cloud provider, does. If the automated systems are part of the backend virtualization services embedded in the cloud provider's operations, then it is beyond your reach to troubleshoot the actual issue. However, when using these services, it is critical that you validate that the end result is what you expect. Look at your original deployment plan and what you actually configured in the deployment, and compare that to what you see as running services. If there is a discrepancy, go back and make sure that what you requested is what was actually deployed. Is there a difference between the input and the output? If the input, or requested, configuration is what you expect but the deployed services are different, it could very well be that the automation software did not work as expected. Log files can be collected on all API calls to a device or service; these can offer valuable information on the success or failure of the scripts used. A well-written script can also validate the changes and roll back if there was a script failure. Monitoring dashboards can also be configured to present the results of the automated systems graphically. Since it is most likely that you do not have access to the underlying automation code, your only real option here is to open a trouble ticket with the cloud provider for investigation. Also, the frequently asked questions (FAQ) or support discussion forums may have additional information on the issue you are troubleshooting. If it was your organization that developed the automation applications, then you will have much greater flexibility in troubleshooting. Are there log files that were generated? Did the script fail before completion? Did an API call fail? By investigating the software and stepping through its execution, you, or your development team, should be able to identify the issue internally to your organization.

5.6.8 Job validation issue •

When performing CM, you must verify that the change was implemented correctly



Was validation accurate in the first place?



Testing to make sure feature upgrade is in place after the change



Peer review can help

Work Validation Issues
When performing a change management operation, you have learned that you must then test to make sure the change was implemented correctly and is operating as expected. This is commonly referred to as validation. It is an important operation, as it is critical that the change management process is followed and that when systems are put back into production, they operate as expected. However, how do you know whether the validation was accurate and complete in the first place? What if you performed the validation and learned after the system went live that there was a fatal flaw? The lesson here is that you need to make sure that the validation plan is complete, thorough, and accurate. Testing for proper operation will include testing to make sure the intended fix or feature upgrade is actually in place after the change. You can also ask the organization publishing the fix what its testing and validation methodology was, and then incorporate its test suites into your validation documentation. A common practice is to also add tests to verify that what was originally working still works! Look at features closely related to the ones that you changed and run several tests to make sure they are operating as they were before the upgrades. Peer review can be helpful when creating validation plans. It is good practice to sit down with your peers and mark up the validation plans to constantly improve them. Over time you will build a library of validation test plans that are reusable.


5.6.9 Patching failure •

Many different networks in the cloud.



Load balancers, firewalls, IDS/IPS, DNS, DHCP



As you survey your cloud solution, don’t forget to check out these components for patching:

• Hypervisors
• Virtual machines/operating systems
• Virtual appliances
• Networking components
• Applications
• Storage components
• Clusters

Time Considerations

Patching takes time, and the time you choose matters. Schedule patching during an approved maintenance window when usage is low, and make sure the window is long enough to apply the patches, validate the results, and roll back if something goes wrong. Coordinate the window with other teams through the change management process so that, for example, a network outage does not occur while servers are downloading updates. Also consider how urgent the patch is: a critical security hotfix may justify an emergency change outside the normal schedule, while routine updates can wait for the regular patch cycle.


Practice Questions QUESTION NO: 1 You perform a REST call and receive a response code of 400. What does this response code mean? A. Forbidden B. Unauthorized C. Bad request D. Method not allowed QUESTION NO: 2 Which of the following are considered security appliances in the cloud? A. IPS B. NAC C. SMA D. WFA The following are examples of security appliances in the cloud: IPS, IDS, NAC, and WAF. The other answers are not valid terms for this topic. QUESTION NO: 3 You perform a REST call and receive a response code of 503. What does this response code mean? A. Service unavailable B. Bad gateway C. Not implemented D. Internal server error QUESTION NO: 4 You perform a REST call and receive a response code of 400. What type of response is this? A. Success B. Client-side error C. Informational D. Server-side error


QUESTION NO: 5
Which of the following protocols lack encryption?
A. SSH
B. NFS
C. SNMPv2
D. SFTP

QUESTION NO: 6
Most organizations follow a _____ privilege policy in which user accounts are given only the access that the user needs.
A. Restricted
B. Minimal
C. Least
D. Limited

QUESTION NO: 7
_____ is the process in which a regular user account gains privileged access.
A. Privilege granting
B. Rooting
C. Escalation
D. Enablement

QUESTION NO: 8
Which of the following are not considered secure encryption ciphers?
A. AES with CBC
B. 3DES
C. AES 256 in Galois Counter Mode
D. SHA384

QUESTION NO: 9
You perform a REST call and receive a response code of 403. What does this response code mean?
A. Method not allowed
B. Forbidden
C. Bad Request
D. Unauthorized


QUESTION NO: 10
Which of the following could be a reason why a user would have a problem with privileged access?
A. Missing the privileged credentials
B. Issues with keys
C. Incomplete setup
D. All of these answers are correct

QUESTION NO: 11
Access to cloud resources may be granted via _____ rather than the traditional username/password combination.
A. locks
B. tokens
C. keys
D. markers

QUESTION NO: 12
Which of the following are issues you may encounter when troubleshooting keys and certificates?
A. Expired
B. Wrong protocol
C. Revoked
D. All of these answers are correct

QUESTION NO: 13
_____ is a software build tool.
A. Git
B. Gradle
C. Ansible
D. Docker
E. Chef

QUESTION NO: 14
_____ is a metric that is a measurement of the delay caused by the application as well as the time it takes to traverse the network between the client and the server.
A. Round trip time
B. Latency
C. Server response time
D. Jitter


QUESTION NO: 15
When developers of a tool, language, or utility decide to remove a feature from the product, they often decide to _____ it rather than remove it completely.
A. flag
B. strike
C. deprecate
D. tag

QUESTION NO: 16
Which of the following would be considered vendor-related issues?
A. Cost or billing issues
B. API request limits
C. Integration of vendors or platforms
D. All of these answers are correct

QUESTION NO: 17
Which of the following are configuration management tools?
A. Docker
B. Ansible
C. Gradle
D. Chef

QUESTION NO: 18
_____ is the time it takes for a packet to be sent to another system and the response time of the return packet.
A. Round-trip time
B. Server response time
C. Jitter
D. Latency

QUESTION NO: 19
Which of the following would be a valid use of a tag?
A. None of these answers are correct
B. To create a user account
C. To create a role account
D. All of these answers are correct


QUESTION NO: 20
Which of the following could have an effect on the availability of cloud resources?
A. NAC settings
B. WAF settings
C. IP settings
D. All of these answers are correct

QUESTION NO: 21
A tag is _____ that is associated with a cloud resource.
A. a record
B. a resource
C. a device
D. metadata

QUESTION NO: 22
A(n) _____ is a technique that is used to provide a well-known communication method between a client and a server.
A. IMP
B. PMI
C. TOK
D. API

QUESTION NO: 23
Which of the following could result in network performance degradation?
A. Repeater
B. Incorrectly provided password
C. DHCP configuration errors
D. Increased bandwidth usage

QUESTION NO: 24
_____ is a platform for managing containers.
A. Docker
B. Git
C. Chef
D. Gradle
E. Ansible


QUESTION NO: 25
Network _____ refers to delays in communication across a network.
A. latency
B. round trip time
C. server response time
D. jitter

Answers to Practice Questions

1. Answer: C Explanation: REST error codes: 400-Bad Request, 401-Unauthorized, 403-Forbidden, 404-Not Found, 405-Method Not Allowed

2. Answer: A,B Explanation: The following are examples of security appliances in the cloud: IPS, IDS, NAC, and WAF. The other answers are not valid terms for this topic.

3. Answer: A Explanation: REST error codes: 500-Internal Server Error, 501-Not Implemented, 502-Bad Gateway, 503-Service Unavailable

4. Answer: B Explanation: Codes from 100–199 are for informational purposes and are not normally used for troubleshooting problems. Codes from 200–299 are successful responses. Codes from 300–399 are for when an API call is redirected to another resource. Codes from 400–499 are client-side errors. Codes from 500–599 are server-side errors.
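As an illustration of how these ranges are used in practice (the URL and bearer token below are placeholders, not a real API), a shell script can capture just the status code of a REST call and branch on its range:

# Capture only the HTTP status code of a REST call
STATUS=$(curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer $TOKEN" https://api.example.com/v1/instances)

case "$STATUS" in
  2??) echo "Success ($STATUS)" ;;
  4??) echo "Client-side error ($STATUS): check the request, credentials, or permissions" ;;
  5??) echo "Server-side error ($STATUS): the problem is on the service side" ;;
  *)   echo "Informational or redirect ($STATUS)" ;;
esac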

5. Answer: B,C Explanation: The following protocols lack encryption: FTP (File Transfer Protocol), Telnet, NFS (Network File System), SNMPv2 (Simple Network Management Protocol), HTTP


6. Answer: C Explanation: Most organizations follow a Least Privilege policy in which user accounts are given only the access that the user needs. The goal is to prevent users from having access to privileged operations that they are not required to have, but this can result in problems in which a user ends up with not enough access. The other answers are not valid terms for this topic.

7. Answer: C Explanation: Escalation is the process in which a regular user account gains privileged access. The method used to perform the escalation can vary quite a bit. For example, if a user is working on a Linux virtual machine, that user might use the su or sudo command to escalate his or her account to gain privileged access. Or, if working with cloud-based accounts, policies could be used to escalate a user account. The other answers are not valid terms for this topic.
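As a quick sketch of what escalation looks like on a Linux virtual machine (the service name below is only an example):

sudo -i                        # open a root shell, if the account is allowed in /etc/sudoers
su -                           # switch to the root account (prompts for the root password)
sudo systemctl restart nginx   # escalate for a single privileged command only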

8. Answer: A,B Explanation: Be aware of ciphers that are now considered by the encryption industry to be insecure and avoid using these ciphers: DES encryption cipher, 3DES encryption cipher, RC4 encryption cipher, AES with CBC encryption cipher

9. Answer: B Explanation: REST error codes: 400-Bad Request, 401-Unauthorized, 403 Forbidden, 404-Not found, 405-Method not allowed

10. Answer: D Explanation: There are many reasons why a user may end up with problems with privileged access, including missing the privileged credentials, incomplete setup, issues with keys, and issues with the escalation process.

11. Answer: C Explanation: Access to cloud resources may be granted via keys rather than the traditional username/password combination. The other answers are not valid terms for this topic.

12. Answer: A,C Explanation: You may encounter several issues when troubleshooting keys and certificates, including the following: expired, revoked, trust, compromised, and misconfigured.

13. Answer: B Explanation: Gradle is a software build tool. Chef and Ansible are configuration management tools. Git is a software version control tool. Docker is a tool for managing containers.

14. Answer: C Explanation: Server response time is a metric that is a measurement of the delay caused by the application as well as the time it takes to traverse the network between the client and the server. Network latency refers to delays in communication across a network. Round trip time is the time it takes for a packet to be sent to another system and the response time of the return packet. Jitter is a value that describes a difference in the delay between packets.
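A simple way to observe these metrics from a client is the ping utility (the hostname below is a placeholder). On Linux, the summary line reports rtt min/avg/max/mdev, where avg approximates the typical round-trip time and mdev, the variation between samples, is a rough indicator of jitter:

ping -c 5 app.example.com
# The final line reports rtt min/avg/max/mdev in milliseconds (values will vary per network)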

15. Answer: C Explanation: When developers of a tool, language, or utility decide to remove a feature from the product, they often decide to deprecate it rather than remove it completely. The other answers are not valid terms for this topic.

16. Answer: D Explanation: Vendor-related issues can include migrations of vendors or platforms, integration of vendors or platforms, API request limits, and cost or billing issues.

17. Answer: A,D Explanation: Chef and Ansible are configuration management tools. Git is a software version control tool. Gradle is a software build tool. Docker is a tool for managing containers.

18. Answer: A Explanation: Round-trip time is the time it takes for a packet to be sent to another system and the response time of the return packet. Network latency refers to delays in communication across a network. Server response time is a metric that is a measurement of the delay caused by the application as well as the time it takes to traverse the network between the client and the server. Jitter is a value that describes a difference in the delay between packets.


19. Answer: A Explanation: A tag is metadata that is associated with a cloud resource. Tags are used for a variety of purposes, including: Creating cost reports, Access control, Automation operations, Operational support, Security risk management.
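As a hedged illustration of how such key/value metadata is attached in one vendor's tooling (the AWS CLI is shown only as an example; the instance ID and tag values are placeholders):

aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Environment,Value=Production Key=CostCenter,Value=Finance

Cost reports and access-control policies can then filter on those key/value pairs.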

20. Answer: D Explanation: Review cloud network configurations to determine that a network configuration isn't the cause of the problem. This includes IP settings and security appliance settings, like NAC or WAF settings.

21. Answer: D Explanation: A tag is metadata that is associated with a cloud resource. The other answers are not valid terms for this topic.

22. Answer: D Explanation: An API is a technique that is used to provide a well-known communication method between a client and a server. The other answers are not valid terms for this topic.

23. Answer: A,D Explanation: Network performance degradation can be caused by a number of different problems, including Increased bandwidth usage, Poor physical connections, Malfunctioning networking devices, DNS issues, Internet service provider (ISP) issues

24. Answer: A Explanation: Docker is a platform for managing containers. Chef and Ansible are configuration management tools. Git is a software version control tool. Gradle is a software build tool.
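A minimal sketch of Docker's container-management role (the image and names are arbitrary examples):

docker pull nginx:alpine                           # download a container image
docker run -d --name web -p 8080:80 nginx:alpine   # start a container from the image
docker ps                                          # list running containers
docker stop web && docker rm web                   # stop and remove the container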

25. Answer: A Explanation: Network latency refers to delays in communication across a network. Round trip time is the time it takes for a packet to be set to another system and the response time of the return packet. Server response time is a metric that is a measurement of the delay caused by the application as well as the time it takes to traverse the network between the client and the server. Jitter is a value that describes a difference in the delay between packets.


Full Length Practice Questions

1. Your company has decided to adopt cloud storage for long-term financial records archiving. What kind of cloud service model is this, exactly? A. CaaS B. IaaS C. SaaS D. PaaS 2. Your company's developers are constantly improving numerous online applications. Manual deployment of cloud-based virtual machines, web server configurations, and back-end databases is currently required. What should be used to make this process more efficient? A. Cloud bursting B. Configuration management C. Orchestration D. Operating system imaging 3. What are the advantages of using public cloud PaaS for developers? A. Quicker time to market B. Wider selection of programming tools than an on-premises equivalent solution C. More control than an on-premises equivalent solution D. No dependency on network connections 4. Which type of cloud model does cloud data replication belong to? A. IaaS B. CaaS C. PaaS D. SaaS 5. Which cloud data option offers the highest level of availability? A. Replicas B. SLA C. BLOBs D. Virtual machines 6. What user key is used to decrypt EFS data? A. Symmetric B. Private C. Asymmetric D. Public


Secbay Press 7. The storage requirements for a two-disk mirror of a server operating system disc partition are being planned. In the mirrored volume, how much disc space is available? A. 30 percent B. 75 percent C. 10 percent D. 50 percent 8. Which term best defines the time it takes for a hard disc platter to spin to its proper position? A. Rotational latency B. Transfer rate C. IOPS value D. Seek time 9. Which of the following statements refers to a disadvantage of using SSDs? A. They are slower than hard drives. B. They are more expensive than hard disk drives. C. They do not withstand vibrations as well as hard disk drives. D. They are louder than hard disk drives. 10. Which of the following terms does hierarchical storage management refer to? A. Virtualization B. Storage tier C. Fibre Channel D. iSCSI 11. What kind of storage disc has no moving parts? A. NAS B. DAS C. SSD D. HDD 12. Which of the following claims about NAS and SAN is correct? A. A SAN shares the network with TCP/IP traffic. B. NAS uses network sharing protocols such as CIFS. C. A SAN uses network sharing protocols such as CIFS. D. NAS does not share the network with TCP/IP traffic. 13. Regulations require that certain types of papers be kept indefinitely. To aid retrieval, metadata must be added to each stored item. Which of the following terms best characterises this storage system? A. Cloud backup B. Storage tiers C. Content addressed storage D. Storage area network


Secbay Press 14. You work as the storage administrator for a huge corporation. Granular security must be implemented to isolate shared storage between servers on the SAN. What should you set up? A. Zoning B. LUN masking C. VLAN D. File system ACLs 15. Where is storage zoning set up? A. Storage controller B. NIC C. TPM D. FC switch 16. Your virtual machine has 2TB of storage available. You’ve generated three 1TB disc volumes in the VM. In this configuration, which option was used? A. Deduplication B. Thin provisioning C. Thick provisioning D. Compression 17. Which of the following terms is used to describe a unique storage identifier? A. Port number B. iSCSI target C. MAC address D. WWN 18. Which SAN configuration connects Fibre Channel switch ports to create a virtual fabric? A. Multipathing B. LUN masking C. VSAN D. Zoning 19. You want to be able to move virtual machines from one hypervisor to another with minimal downtime. What are the prerequisites for this? A. Shared storage B. Virtual machine guest tools C. Dedicated VLAN D. IPv6 20. What is the typical component that sits between an HBA and a storage array? A. TPM B. Multipath device C. FCoE D. FC switch


Secbay Press 21. What advantages can be gained from the use of VLANs? Choose two. A. Reduced costs B. Security C. Reduced network complexity D. Performance 22. Which advantage does supernetting provide? A. Reduced routing table size B. Increased amount of available addresses C. Increased router performance D. Increased router security 23. Which of the following are advantages of utilising PAT? Choose two. A. Increased network performance B. Less required public IP addresses C. Less storage space consumption D. Increased security 24. Which of the following terms defines the process of distributing incoming application requests across various back-end servers? A. Compressing B. proxying C. Load balancing D. Forward proxying 25. What is raised when network compression to the cloud reduces the size of transferred data? A. Processor utilization B. Storage space consumption C. Security D. Monthly cloud fees 26. What is the normal SMTP mail transport port? A. 123 B. 80 C. 110 D. 25 27. What protocol is used to protect SFTP sessions? A. IPSec B. FTP C. SSH D. PKI 28. Which memory management strategy takes unused memory from virtual machines and assigns it to other virtual machines that require it? A. Overcommitting B. Sharding C. Ballooning D. Compressing


Secbay Press 29. Which of the following NIC features allows for a bigger individual Ethernet packet size? A. NIC teaming B. Checksum off-load C. Jumbo frames D. TCP segmentation off-load 30. What does an IPS do that an IDS does not? A. Block suspicious activity B. Check for known attack signatures C. Report abnormal activity D. Notify administrators of abnormal activity 31. Which of the following hypervisors is open source? Choose two. A. Xen B. KVM C. VMware ESXi D. Xen Server

32. You are configuring a hypervisor clustered system with three hypervisors that use shared storage: Hv1, Hv2, and Hv3. Hv1 hosts three virtual machines, Hv2 hosts two, and Hv3 hosts four virtual machines, all of which have network connectivity. What considerations must you take into account if the cluster will continue to run with two failed nodes? A. Virtual machine IP addressing B. Storage space C. Shared storage D. All virtual machines running on one host 33. Which type of Microsoft Hyper-V disc uses the least amount of disc space? A. Fixed B. Compressed C. Differencing D. Dynamic 34. What is the highest amount of RAM that Citrix XenServer 6.1 can support? A. 96GB B. 192GB C. 64GB D. 128GB


Secbay Press 35. Which setup enables for the testing of several identical copies of a custom application at the same time? A. Metered usage B. Resource pooling C. Elasticity D. Network isolation 36. Virtualization is used in your on-premises private cloud. The most recent version of the existing hypervisor will be available on newer, more powerful hardware. Which migration method should you employ to ensure that current virtual machines continue to run on the new hypervisor? A. V2P B. P2P C. P2V D. V2V 37. You migrated an on-premises server named appserv1.acme.net to the public cloud using P2V. appserv1.acme.net is up and running on a cloud virtual network with other reachable virtual machines. When you try to connect to appserv1.acme.net after the migration, you get an error saying the server is unavailable. What is the most likely source of the issue? A. The cloud virtual network is not reachable from the Internet. B. A DNS entry for appserv1.acme.net must be configured. C. appserv1.acme.net is using IPv6. D. Cloud virtual machines cannot use FQDNs. 38. Which of the following advantages results from using public cloud computing? Select two. A. Resiliency to network failure B. Delegated permissions for resource provisioning C. Tracking usage of IT resources D. Decreased cost over time 39. What are the typical advantages of virtualization? Select two. A. Reduced power consumption B. Reduced software licensing costs C. Increased performance D. Reduced rack space usage 40. Which factor determines how long it takes to move a P2V from on-premises to the cloud? A. Virtual switch configuration B. Network bandwidth C. Number of virtual NICs D. Number of virtual disks


Secbay Press 41. The storage of a busy virtual machine must be relocated to a faster storage array. Which method should you use? A. Storage migration B. Storage cloning C. Virtual machine clone D. Virtual machine snapshot 42. Which of the following statements about virtual machine clones is true? A. Clones always share the same virtual disks as the source virtual machine. B. Changes made to the clone also affect the source virtual machine. C. Clones use the same MAC address as the source virtual machine. D. Changes made to the clone do not affect the source virtual machine. 43. Which cloud feature enables IT resources to adapt to changing IT workloads? A. Elasticity B. Broad access C. Resource pooling D. Metered usage 44. Which management and monitoring protocol employs traps also known as notifications? A. WBEM B. SNMP C. WMI D. IPMI 45. According to historical log data for your users, normal cloud storage usage grows by an average of 9% every year. However, consumption has climbed by 18% in the last year. What is the best way to characterise this shift? A. Trend analysis B. Deviation from baseline C. Resource pooling D. Metered usage 46. In your company's public cloud implementation, an existing on-premises application will no longer be utilised. A functional cloud equivalent will be used instead. Which stage of the application life cycle is this applicable to? A. Application retirement B. Application migration C. Application upgrade D. Application replacement


Secbay Press 47. You are in charge of a team of 12 cloud consultants, all of whom have cellphones that do not have a mobile data subscription. The consultants travel to customer locations and do not have Wi-Fi connection on their devices at the moment. You must ensure that your cloud consultants are notified when critical virtual machines fail, regardless of their location. Which type of notification should you set up? A. SMS B. HTTP C. SNMP D. SMTP 48. At what point in the application life cycle is software performance optimised? A. Deployment B. Testing C. Maintenance D. Specifications 49. What else do you call the application life cycle? A. MOF B. SDLC C. ITIL D. OSI 50. What are some examples of application life cycle management models? Select two. A. ITIL B. OSI C. WBEM D. MOF 51. SIEM generally pertains to which category? A. Licensing B. Security C. Scripting D. Patching 52. Which of the following factors is most likely to influence a data retention policy? A. Cost of cloud storage B. Regulations C. Amount of available storage space D. Technical expertise of cloud administrators


Secbay Press 53. Your organization's administrators claim that some cloud virtual machine log files are being erased to save up storage space. You are in charge of cloud virtual machine maintenance. What do you think you should do? A. Enable centralized log forwarding. B. Forward logs to an on-premises server. C. Establish a log retention policy. D. Add disk space. 54. You must decide whether future cloud network performance numbers are acceptable. What should you start with? A. Take virtual machine snapshots. B. Establish a performance baseline. C. Harden each virtual machine. D. Enable syslog forwarding.

55. You're using IPMI to remotely monitor a host whose operating system has crashed and rendered it inoperable. What kinds of products would you be keeping an eye on? A. Halted processes B. Installed software C. Running processes D. BIOS settings 56. Which management and monitoring protocol employs traps, also known as notifications? A. IPMI B. WMI C. SNMP D. WBEM 57. Which capacity setting guarantees a specific quantity of resources for a hypervisor? A. Quota B. Metered C. Dynamic D. Reservations 58. Which Microsoft programme allows you to manage hypervisors and virtual machines? A. MSCM B. SCCM C. MSVMM D. SCVMM


Secbay Press 59. To recover memory within a virtual machine guest, which hypervisor component talks with the hypervisor? A. USB controller B. Balloon driver C. Guest tools D. Dynamic memory 60. Which configuration assures that SAN connectivity is maintained in the case of an HBA failure? A. Load balancing B. Multipathing C. Zoning D. Clustering 61. When a packet passes via a router, whose IP header field value is decremented? A. Destination IP address B. TTL C. Source IP address D. Version 62. You must guarantee that VoIP traffic is prioritised above other types of traffic on the network. What should you set up? A. QoS B. TTL C. SLA D. Multipathing 63. Which of the following terms depicts child artefacts left behind by software that did not adequately clean up after itself? A. Paged blocks B. Buffer overrun C. Data remnants D. Orphaned resources 64. Which PowerShell command gets log data from a remote computer named Charlee1? A. Retrieve-EventLog –ComputerName Charlee1 B. Get-EventLog –ComputerName Charlee1 C. Retrieve-Log –ComputerName Charlee1 D. Get-Log –ComputerName Charlee1


Secbay Press 65. Six virtual machines used for product development must be archived. The entire amount of disc space consumed is 3TB. What should you do to reduce the amount of archived disc space used? A. Ensure each virtual machine has only one vNIC. B. Compress the virtual machines. C. Ensure each virtual machine has only one vCPU. D. Store the archive on SSDs.

66. When conducting back-end database queries, a web application performs slowly. Which of the following configurations is most likely to enhance query performance? A. I/O throttling B. RAID 1 C. Optimized database read settings D. Tweaking write settings 67. Peter has set up a private cloud's separated network infrastructure. He discovers severe network latency for one specific hypervisor after launching many hypervisors on the network for the first time. What is the most likely source of the issue? A. Incorrect subnet mask B. Speed or duplex mismatch C. Missing default gateway D. Invalid IP address 68. Which of the following terms is closely connected to chargeback? A. Metered billing B. Showback C. Capital billing D. Reverse cloud billing 69. You are evaluating your public cloud provider's rates. Cloud storage provisioning has increased by 50% in the last month compared to the prior month. Which of the following terms best describes this situation? A. Elasticity B. Broad access C. Scaling out D. Scaling up 70. What should a development team do after fixing issues in a web application? A. Document the solution. B. Create a virtual machine snapshot. C. Deploy patches. D. Back up the virtual machine.


Secbay Press 71. Users have reported concerns with the stability of a custom web application installed in a cloud virtual machine. In a sandboxed environment, your team must address and test these risks. What should you do next with the virtual machine? A. Apply patches. B. Take a snapshot C. Back it up. D. Clone it. 72. Which sort of patch fixes a single minor problem? A. Hotfix B. Rolling update C. Service pack D. Update dependency 73. You've taken a custom web application and put it to the cloud for testing. The IT security team intends to replicate hostile user attacks on the application. What kind of testing is it? A. Penetration B. Load C. Vulnerability D. Stress 74. An existing cloud virtual machine lacks the computing capacity required to adequately serve an application. What do you think you should do? A. Resize the virtual machine. B. Redeploy the virtual machine. C. Scale out. D. Scale in. 75. What is needed for code rollback to function? A. Virtual machine snapshot B. Code backup C. Disk image D. Deployment slot 76. You are updating each node in the hypervisor cluster one at a time. You have transferred virtual machines from Hypervisor1 to Hypervisor2. What steps should you take to ensure that the virtual machines are placed on the original cluster node? A. Failover B. Scale in C. Storage migration D. Failback


Secbay Press 77. You intend to employ user authentication to connect your on-premises network to the public cloud. User IDs can only exist once. What should you set up? A. Federation B. MAC C. SSO D. Multifactor authentication 78. Which of the following phrases is associated with mandatory access control (MAC)? Select two. A. Data owner B. Resource labeling C. User groups D. Data classification 79. Which cloud service architecture allows you the most flexibility when it comes to allocating cloud resource permissions? A. SECaaS B. IaaS C. PaaS D. SaaS 80. Linux cloud-based virtual machines must be hardened. What do you think you should do? Select two. A. Deactivate default accounts. B. Change the hostname. C. Disable unnecessary services. D. Change the IP address. 81. Your organization's e-mail administrator is you. At the moment, a cloud-based e-mail system is being employed. Users must be able to encrypt e-mail messages between themselves. What do you think you should do? A. Make user private keys available to everybody. B. Make user private keys available for encryption. C. Make user public keys available to everybody. D. Make user public keys available for decryption. 82. Lena, a user, says that when he travels for work, his smartphone no longer authenticates to the company VPN. Each smartphone receives a unique PKI certificate from the company for VPN connection. What is the most likely source of Lena’s issue? A. The PKI certificate configuration is incorrect. B. The certificate is corrupt. C. The certificate has expired. D. The SIM card configuration is incorrect.


Secbay Press 83. Your organisation intends to use multifactor authentication on workplace desktop computers. Which of the following is an option? A. E-mail address, password, father's middle name B. E-mail address, password1, password2 C. Username, password D. Smartcard, PIN 84. Which of the following factors is most likely to influence whether fast or slow cloud storage is employed for file storage? A. Data classification B. File owner C. Permissions D. File size 85. Which vulnerability testing process includes port scanning? A. Validation B. Assessment C. Intelligence gathering D. Penetration testing 86. Which sort of redundancy permits nodes to be located on opposite sides of a WAN? A. Geo-clustering B. Long-haul clustering C. WAN clustering D. Disparate clustering 87. What advantages are gained by utilising a content delivery network? Select two. A. Dynamic website content B. Data availability C. Enhanced security D. Increased performance 88. Big data analytics are used to generate insights in a custom application. You must decrease computational and network latency due to the enormous volume of data exchanged between application components. Which of the following possibilities should you consider? A. Clustering, edge sites B. Distributed processing, jumbo frames C. Scaling in, edge sites D. Jumbo frames, NIC teaming


Secbay Press 89. The virtual hard discs of a virtual machine file server are corrupt and cannot be fixed. Which form of tape backup will take the shortest time to restore? A. Snapshot B. Full C. Incremental D. Differential 90. Which of the following initiates the business continuity plan guidelines? A. Business continuity management B. Disaster recovery plan C. Business impact analysis D. Incident response plan 91. Users report that they are unable to search a database server. You connect to the database host and grant all database tables to the user. Which stage in the troubleshooting process did you overlook? A. Document the solution. B. Verify system functionality. C. Escalate. D. Identify the problem. 92. You must be able to validate that tested applications run well under regular load levels. How are you able to accomplish this? A. Compare test results to the baseline. B. Compare current logs to previous logs. C. Compare test results to logs. D. Compare current test results to previous test results. 93. You are troubleshooting an app installation on a user's smartphone. After putting the initial solution theory to the test, you notice the problem is still unsolved. What options do you have for your next step? Select two. A. Document the solution. B. Establish another theory. C. Escalate. D. Verify system functionality. 94. Your automation system currently comprises of one runbook server. You must ensure that a vital weekly runbook is completed on time. What do you think you should do? A. Enable the runbook to run on multiple runbook servers. B. Create a second runbook that performs the same tasks. C. Enable a runbook load balancer. D. Take a snapshot of the runbook.


Secbay Press 95. You must examine Help Desk users' cloud MySQL database searches for a certain table. While completing the audit, security and performance levels must be maintained. What do you think you should do? A. Enable table-specific read auditing for the Help Desk group. B. Enable read auditing for all users. C. Enable table-specific write auditing for the Help Desk group. D. Enable write auditing for all users. 96. You intend to utilise Microsoft Visual Studio as a developer to control cloud resources using cloud APIs. When you start testing cloud connectivity in Visual Studio, you discover that cloud options are not supported. What do you think you should do? A. Create custom APIs B. Write shell scripts instead. C. Install a newer version of Visual Studio. D. Write PowerShell scripts instead. 97. The installation of mobile device apps is prohibited by your centralised cloud policy configuration. Some smartphones are incapable of implementing policy settings. Which of the following terms best describes the situation? A. Change management failure B. RBAC failure C. MFA failure D. Security control failure 98. When you instal a new virtual machine in the cloud, you find that, while it has a public IP address, it does not have a private IP address. What is the most likely source of the issue? A. Incorrect RBAC permissions were assigned to the virtual machine. B. The virtual machine has run out of disk space. C. The DHCP configuration has run out of IP addresses. D. The cloud DHCP provider is unavailable. 99. Which of the following is most likely to lead to troublesome runbook automation? A. Incorrect VLAN association B. Incorrect RBAC permissions C. Lack of a default gateway D. Lack of language support 100. You are unable to establish an SSH connection to a newly installed Linux virtual machine in the cloud called linux1.fakedomain.net. What should you double-check? Select two. A. ping resolution of linux1.fakedomain.net to the correct IP address B. linux1.fakedomain.net is pingable by IP address C. Firewall ACLs D. nslookup resolution of linux1.fakedomain.net to the correct IP address


Answers to Full Length Practice Questions

1. Answer: B Explanation: Cloud storage is included in Infrastructure as a Service (IaaS).

2. Answer: C Explanation: Orchestration software reduces errors and improves productivity by automating repetitive activities.

3. Answer: A Explanation: Because the underlying difficulties (networking, virtual machines, database installation, and so on) have already been taken care of, software development takes less time, resulting in a faster time to market.

4. Answer: A Explanation: Cloud services such as storage, virtual computers, and network configurations are referred to as Infrastructure as a Service (IaaS). Replication is classified as IaaS because it applies to multiple types of storage.

5. Answer: A Explanation: Having many copies (replicas) of data, whether single files, databases, or other types of data, ensures high availability. If one copy is unavailable, the others can still be used.

6. Answer: B Explanation: The file encryption key is decrypted using the user private key, which is then used to decrypt the file.

7. Answer: D Explanation: Only half of the original total disc capacity can be used because all data on mirrored disc partitions is copied on the second disc.

8. Answer: A Explanation: The time it takes for a hard disc platter to rotate to the correct spot to read data is known as rotational latency.

9. Answer: B Explanation: SSDs are more expensive per gigabyte (GB) than traditional hard disc drives.

10. Answer: B Explanation: HSM (hierarchical storage management) solutions use policies to divide storage into tiers. Tier 1 (fastest) is used to store frequently accessed data, whereas tier 4 (slowest) is better for long-term storage of infrequently accessed data. 11. Answer: C Explanation: SSD storage drives do not contain moving parts like traditional hard disc drives, resulting in faster access times, less power usage and heat, and less noise. 12. Answer: B Explanation: Network attached storage (NAS) is network storage that may be accessed by hosts via file sharing protocols such as CIFS. 13. Answer: C Explanation: Content addressed storage (CAS) is intended for archiving and employs IDs rather than filenames to identify stored things. Metadata can aid in the search for an object. 14. Answer: B Explanation: LUN masking allows for more granular control over shared storage across storage users across the network.


Secbay Press 15. Answer: D Explanation: Zoning, which is configured at the Fibre Channel (FC) switch, is used to segregate network storage on a SAN across hosts. 16. Answer: B Explanation: Thin provisioning enables administrators to overcommit physical storage space by generating numerous disc volumes whose combined size exceeds available space. 17. Answer: D Explanation: The World Wide Name (WWN) is a one-of-a-kind identifier for SAN storage. 18. Answer: C Explanation: Virtual storage area networks (VSANs) construct an isolated virtual storage fabric in the same way that VLANs isolate traffic from other networks. 19. Answer: A Explanation: Virtual machine files can be stored on shared storage, allowing for zero downtime while moving VMs between servers. For VM integrity, a temporary VM memory contents file is also written to shared storage. 20. Answer: D Explanation: Fibre Channel (FC) switches connect storage to host bus adapters (HBAs). 21. Answer: B, D Explanation: Virtual local area networks (VLANs) logically break a physical network into smaller networks for network isolation, reducing network traffic and increasing security. 22. Answer: A Explanation: Supernetting enables several nearby IPv4 networks to be specified in a single routing table entry, lowering routing table size. 23. Answer: B, D Explanation: Port address translation (PAT) improves security by masking the internal IP addresses of requesting clients and restricting inbound Internet connections. PAT also permits many internal IP addresses to connect to the Internet using a single public IP address. 24. Answer: C Explanation: Load balancing distributes network service requests to many back-end hosts, with the least busy host responding. 25. Answer: A Explanation: Compressing and decompressing data demands more CPU processing power than not compressing at all. 26. Answer: D Explanation: By default, Simple Mail Transfer Protocol (SMTP) connects on TCP port 25. 27. Answer: C Explanation: To secure FTP sessions, Secure Shell (SSH) is utilised in conjunction with SFTP. 28. Answer: C Explanation: When a hypervisor receives an urgent request from a virtual machine for memory that is not available, memory ballooning happens. Other virtual machine guests use the ballooning driver to temporarily lend unused memory to the hypervisor. 29. Answer: C Explanation: Jumbo frames allow Ethernet frames to include up to 9,000 bytes rather than the standard capacity of 1,514 bytes.


Secbay Press 30. Answer: A Explanation: Intrusion prevention systems (IPSs) can not only identify and report unusual behaviour, but they can also take action to prevent it. 31. Answer: A, B Explanation: Both Xen and KVM are open source hypervisors. Open source typically indicates free or minimal cost, as well as the opportunity to see and edit the source code. 32. Answer: D Explanation: If two nodes die, all virtual machines must be able to fail over to the last remaining node, which must have the hardware to support all virtual machines operating concurrently. 33. Answer: C Explanation: Differencing discs work on a parent-child model, with a parent virtual disc holding files inherited by its offspring, and modifications being written to the differencing disc. 34. Answer: D Explanation: Citrix XenServer 6.1 can support up to 128GB of RAM. 35. Answer: D Explanation: The ease and speed with which isolated computing environments may be set up for testing purposes is one advantage of cloud computing. 36. Answer: D Explanation: V2V server migrations are frequently used to move virtual machines to a new or upgraded hypervisor on another host. 37. Answer: B Explanation: Most likely, appserv1.acme.net does not have a DNS A record. The IP address should have been verified as well. 38. Answer: B, C Explanation: Cloud administration permissions can be distributed, most commonly via groups or roles. Metered use of IT resources is a useful cloud feature. 39. Answer: A, D Explanation: Server virtualization minimises the amount of physical server boxes, resulting in reduced physical server rack space and power consumption. 40. Answer: B Explanation: The length of time it takes to migrate a physical on-premises server to the cloud is determined by network bandwidth and current network utilisation. 41. Answer: A Explanation: Storage migration relocates the virtual discs, configuration information, and snapshots of a virtual machine to a new location. 42. Answer: B Explanation: Virtual machine clones are copies of existing virtual machines, but modifications to the replica are not sent to the original. 43. Answer: A Explanation: Cloud elasticity enables quick resource provisioning to support dynamically changing IT workloads. Auto-scaling is an example of this, in which an additional virtual machine is supplied when the workload of an application exceeds defined thresholds. 44. Answer: B Explanation: The Simple Network Management Protocol (SNMP) is a network device management and monitoring protocol that sends messages to a centralised SNMP management dashboard via traps or alerts. © Secbay Press


45. Answer: B Explanation: Baseline deviations are deviations from the norm. 46. Answer: D Explanation: The on-premises application is being replaced by a functional equivalent. 47. Answer: A Explanation: Short Message Service (SMS) is text messaging that can be used without an Internet connection. 48. Answer: C Explanation: Improving software performance happens during the maintenance phase. 49. Answer: B Explanation: The terms software development life cycle (SDLC) and application life cycle (ALC) are interchangeable. 50. Answer: A,D Explanation: Application life cycle management methods include the Information Technology Infrastructure Library (ITIL) and the Microsoft Operations Framework (MOF). 51. Answer: B Explanation: Security Information and Event Management (SIEM) solutions examine device activity in real time, including logs, and compare it to correlation rules to discover security concerns. 52. Answer: B Explanation: Certain industries’ regulations can have an impact on data retention rules. 53. Answer: C Explanation: Log retention policies govern how log entries in log files are stored, where they are stored, and how long they are stored. Sometimes laws and regulations can have an impact on log retention policy. 54. Answer: B Explanation: Normal usage activity is determined by performance baselines, which are then compared to future performance metrics to find anomalies. 55. Answer: D Explanation: Intelligent Platform Management Interface (IPMI) does not require a working operating system for remote management. instead, it uses device hardware to manage and monitor BIOS settings. 56. Answer: C Explanation: The Simple Network Management Protocol (SNMP) is a network device management and monitoring protocol that sends messages to a centralised SNMP management dashboard via traps or alerts. 57. Answer: D Explanation: Reservations keep virtual machines from using up all of the hardware resources. 58. Answer: D Explanation: System Center Virtual Machine Manager (SCVMM) is a virtual machine and hypervisor management tool. 59. Answer: B Explanation: Memory ballooning via the balloon driver enables virtual machines that do not require all of their allotted memory to share memory pages with the hypervisor.


Secbay Press 60. Answer: B Explanation: Multipathing refers to the provision of several communication channels between a storage consumer and SAN components. 61. Answer: B Explanation: As a packet traverses a router, the time-to-live (TTL) IP header field value is decremented. 62. Answer: A Explanation: Quality of service (QoS) is a network configuration that prioritises different types of network traffic. 63. Answer: D Explanation: Orphaned resources is a catch-all phrase for residual software assets that are no longer linked to a parent configuration. 64. Answer: B Explanation: The correct PowerShell cmdlet is Get-Eventlog. 65. Answer: B Explanation: Disk compression conserves disk space. 66. Answer: C Explanation: Because querying sends read requests to a database, improving read settings can assist increase query performance. 67. Answer: B Explanation: Mismatched speed or duplex settings between NICs and switches can lead to network performance degradation. 68. Answer: B Explanation: Chargeback is similar to showback. Chargeback refers to tracking consumption and invoicing based on that usage, typically to a department inside an organisation, whereas showback tracks usage for reasons other than billing. 69. Answer: A Explanation: Elasticity refers to the ease with which IT resources such as cloud storage can be provisioned and deprovisioned. 70. Answer: A Explanation: When problems are solved, documentation is required promptly. 71. Answer: D Explanation: Cloning a virtual machine is a convenient approach to create an exact clone for testing without disrupting the production environment. 72. Answer: A Explanation: A hotfix is a patch that solves a single problem. 73. Answer: A Explanation: Penetration testing actively exploits detected host and network vulnerabilities in order to test the system's resistance to attacks. 74. Answer: A Explanation: Resizing refers to changing the power of the underlying virtual machine, such as adding more vCPUs, faster and more storage, more RAM, and so on. 75. Answer: B Explanation: Software code must be backed up in order to rollback, or return to a previous code version. 76. Answer: D Explanation: A failback restores the original host's failover clustered service. © Secbay Press


77. Answer: A Explanation: Across organisational borders, federation employs a single identity supplier. 78. Answer: B,D Explanation: Mandatory access control (MAC) labels resources, which is also known as data classification. After that, security clearances are assigned, and the operating system assesses whether or not access is given. 79. Answer: B Explanation: The cloud consumer is responsible for installation, configuration, and maintenance of Infrastructure as a Service (IaaS). 80. Answer: A,C Explanation: Reducing the attack surface (hardening) entails deleting unnecessary components and modifying default settings. 81. Answer: C Explanation: Public keys are safe to distribute to everybody. 82. Answer: C Explanation: A PKI certificate that has expired can no longer be utilised for security purposes. 83. Answer: D Explanation: Multifactor authentication consists of a smartcard (something you own) and a PIN (something you know). 84. Answer: A Explanation: Data classification includes metadata such as sensitivity level, department, region, project, and so on to the file system. This metadata can be used to identify frequently accessed or sensitive material that should be stored in a specific way or on a specific type of storage. 85. Answer: B Explanation: Activities such as port scanning are included in the assessment phase of vulnerability scanning. 86. Answer: A Explanation: Geoclustering ensures excellent availability over long distances. 87. Answer: B,D Explanation: Content delivery networks (CDNs) position content such that it is geographically close to end consumers, increasing performance and data availability. 88. Answer: B Explanation: To address huge, difficult issues, distributed processing employs numerous computers working together. Jumbo frames are larger than standard network frames, allowing them to hold more data while lowering the number of frames that must be processed. To fully support jumbo frames, network devices such as routers must be configured with an adequate maximum transmission unit (MTU). 89. Answer: B Explanation: To recover a full backup, use a single backup set, or volume, rather than switching out multiple backup tapes. 90. Answer: A Explanation: Business continuity management (BCM) implements business continuity plan principles.


Secbay Press 91. Answer: D Explanation: Before any improvements were made, the precise problem was not even discovered. 92. Answer: A Explanation: A baseline can only be used to assess what is normal over time. 93. Answer: B,C Explanation: When the first solution theory fails, either escalate the problem to a different department or come up with other solution theories to tackle the problem. 94. Answer: A Explanation: Using numerous runbook nodes can increase runbook performance and availability. 95. Answer: A Explanation: Audit no more resources or users than are absolutely necessary. Otherwise, both performance and disc space utilisation suffer. 96. Answer: C Explanation: Sometimes tools must be upgraded to handle the most recent cloud features. 97. Answer: A Explanation: Change management failures are defined as centralised policy sets that cannot be implemented. 98. Answer: C Explanation: TCP/IP settings are provided to devices using the Dynamic Host Configuration Protocol (DHCP). This is frequently a service offered by cloud networks. Depending on the addressing, the number of possible IP addresses could soon reach a limit. 99. Answer: B Explanation: Cloud resources can have role-based access control (RBAC) permissions applied to them. if done incorrectly, this can hinder appropriate runbook execution. 100. Answer: C,D Explanation: The name must resolve to the right IP address, and TCP port 22 traffic must be allowed by firewall ACLs.
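For the SSH scenario in answer 100, a few illustrative checks may help (the hostname comes from the question; the user name is a placeholder):

nslookup linux1.fakedomain.net       # confirm the name resolves to the expected IP address
ping -c 3 linux1.fakedomain.net      # confirm basic reachability, if ICMP is permitted
ssh -v admin@linux1.fakedomain.net   # verbose output reveals whether TCP port 22 is being blocked by a firewall ACL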


Glossary Android Rooting: Rooting is a Linux word syntax term. This indicates that the user has the special privilege of using a mobile phone. Users may take control of their phone's Settings, Features and Performance. Application Framework: It offers numerous key classes for the creation of an Android app. It offers a general summary of hardware access and helps to manage application resources in the user interface. Android architecture: It comprises many components to satisfy all the requirements of Android devices. An open-source Linux kernel with a set of C/C++ library Android Device Hacking: As these gadgets serve a wide range of functions, the number of individuals using smartphones and tablets is rising quickly. It is a platform that is accessible to all apps. Android is the most popular mobile OS. Like other OSSs Android OS: The Android operating system is a Google mobile operating system. Its design allows users to intuitively use mobile devices, Google uses Android software also with a distinct user experience for TVs, automobiles and wristwatches. Android OS Architecture: Android architecture comprises many components to satisfy all the requirements of Android devices. An open-source Linux kernel with a set of C/C++ library. Active Network Threats: An active attack is one in which an attempt is made to make unauthorized changes to the system. Modifications to transmitted or stored data, as well as the formation of new data. Application-level filtering: Email filters and web proxies are examples of application level filters. These act as proxies for one or more services. Agencies in third-party: They are the responsible for selecting a cloud service, Leveraging the FedRAMP Process and Requiring CSPs to meet FedRAMP requirements. Access Control (AC): The selective restriction of access to a place or other resource is known as access control (AC), while the process is known as access management. Consuming, entering, or using are all words that can be used to describe the process of accessing. Authorization is the process of gaining access to a resource. Asset Inventory: It is critical to understand where information is stored in order to protect it. The physical and logical components must be included in the asset inventory. For each piece, it should describe the location, business processes Asset management: The Asset Management System (AMS) is a collection of tools and procedures for establishing asset management policies and objectives. Application Scanner: Applications vulnerability scanners test websites in order to detect known software vulnerabilities and erroneous configurations in network or web applications. Acunetix: It's a comprehensive web application security testing solution that can be used on its own © Secbay Press


Secbay Press or in conjunction with other tools in a complicated context. Aircrack-ng: Aircrack-ng is a complete suite of tools to assess Wi-Fi network security. Attacking in Aircrack-ng: Replay attacks, de-authentication, fake access points and others via packet injection Access Point (AP): A wireless access point (AP) is a network device that sends and receives data via a wireless local area network (WLAN). The WLAN and a fixed wire network are connected through the wireless access point. Angry IP Scanner: The program may also provide access to the shared resources discovered during the scan through HTTP, HTTPS, FTP, or shared folders. IP Range, Random or file in any format Exports results into many formats. Antivirus: Software protects systems from internal threats by detecting and detecting dangerous files and viruses. ARP Man-in-the-Middle Attacks: An attacker can reroute network data and, in some situations, introduce malicious code in a man-in-the middle assault. Normally, communication from the user to the Internet passes through the switch directly to the router. ARP DDOS Attacks: In a DoS attack, an attacker can also utilize ARP poisoning. An attacker could transmit an ARP response with a false MAC address for the default gateway. ARP Request: The Address Resolution Protocol (ARP) is a communication protocol for determining the link layer address associated with a given internet layer address, such as a MAC address. Annual Loss Expectancy: Annual loss expectancy is a computation that helps you predict the expected monetary loss for an asset due to a specific risk over a single year. You can compute ALE as part of your company's quantitative cost-benefit analysis for any given investment or project idea. Annualized Rate of Occurrence (ARO): The annualized rate of occurrence (ARO) is defined as the probability of a threat occurring in a given year. Asset Value (AV): Asset valuation simply pertains to the process to determine the value of a specific property that is conducted usually when a company or asset is to be sold, insured, for taken over. Asset Management: Asset tagging is the process of placing physical identification numbers of some sort on all assets. This can be a simple as a small label that identifies the asset and the owner
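Because the glossary defines annual loss expectancy, annualized rate of occurrence, and asset value together, a short worked example may help. The figures are invented for illustration, and the calculation assumes the standard formulas SLE = AV x EF (exposure factor) and ALE = SLE x ARO:

# Invented figures: asset value $200,000, exposure factor 25%, ARO 0.5 (one incident every two years)
AV=200000; EF=0.25; ARO=0.5
SLE=$(echo "$AV * $EF" | bc)    # Single Loss Expectancy: 200000 x 0.25 = 50000
ALE=$(echo "$SLE * $ARO" | bc)  # Annual Loss Expectancy: 50000 x 0.5 = 25000
echo "SLE=\$$SLE ALE=\$$ALE"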

Avoidance: Although often not possible, this is the easiest way of removing risk from a project. It involves the removal of the tasks that contain the risk from the project. Sometimes you can remove a small part of a project which carries a large risk factor. Acceptance: Acceptance involves planning the risk into the project. If a better response strategy cannot be identified, accepting the risk might be sufficient to proceed with the project. Air Gap: This means the device has no network connections and all access to the system must be © Secbay Press


Secbay Press done manually by adding and removing items such as updates and patches with a flash drive or other external device. Attribute-Based Access control: Grants or denies user requests based on arbitrary attributes of the user and arbitrary attributes of the object, and environment conditions that may be globally recognized Control Authentication: The sender and receiver's identities have been verified. The destination/origin of information is also confirmed. Buffer Overflows: A buffer overflow, also known as a buffer overrun, is a frequent software programming error that an attacker might use to obtain access to your computer. Bootrom Exploit: A jailbreak bootrom may break all authentications at low levels such as file system, iBoot, and NOR (custom boot logos). Backdoors: Usually created by software developers for an emergency entry into a system Blue Keep: Blue Keep is a Microsoft RDP vulnerability that allows attackers to remotely log in to a victim's computer. Bash: Bash is a Unix shell and command language created by Brian Fox as a free software substitute for the Bourne shell for the GNU Project. Bourne shell: The Bourne shell was one of the major shells used in early versions and became a de facto standard Bashrc File: When a user signs in, the.bashrc file is executed as a script. The file itself contains a number of terminal session configurations. Coloring, completion, shell history, command aliases, and other features can all be set up or enabled in this way. Background Processes: Background and do not require user involvement. As an example, consider antivirus software. Buffer Overflow: When an application receives more or different input than it expected, a buffer overflow occurs. As a result of the error, system memory that would normally be protected and inaccessible is exposed. Cloud Carrier: A cloud carrier serves as an intermediary between cloud consumers and cloud providers, providing connectivity and transfer of cloud services. Commands and utilities: There are a variety of commands and utilities that you can utilize in your daily tasks. Commands and utilities include cp, mv, cat, and grep, etc,. Crunch: To crack a password, we must test a large number of passwords until we find the one that works. There is no guarantee that any of those millions of possibilities will work. Changing Permissions: To change the file or the directory permissions, you use the chmod (change mode) command. Each permission is assigned a value, as the following table shows, and the total of each set of permissions provides a number for that set.
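As a reminder of the permission values used with chmod: read = 4, write = 2, and execute = 1 for each of the owner, group, and other sets, and the three totals form the numeric mode. A small illustration (the file name is a placeholder):

chmod 754 deploy.sh   # owner rwx (4+2+1=7), group r-x (4+1=5), others r-- (4)
ls -l deploy.sh       # the mode column shows -rwxr-xr--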


Secbay Press Carrier-loaded Software: Pre-installed software or apps on devices may be vulnerable to criminal actions, such as deleting, altering, stealing of device data, wake-up calling, etc. Create phase: Data will continue to be generated in the cloud and by remote users. CSPs (Cloud Service Providers): Provide the actual cloud service to an Agency Must meet all FedRAMP requirements before they implement their services. CIO Council: Disseminates FedRAMP information to Federal CIOs and other representatives through cross agency communications and events Cloud Pentesting: Cloud Penetration Testing is a permitted simulated cyber-attack against a system that is hosted on a Cloud provider. Amazon’s AWS Concise: Content and complexity is relevant to the audience; No superfluous words or phrases Consistent: Terms have the same meaning throughout the document; Items are referred to by the same name or description throughout the document; the level of detail and presentation style is the same throughout the document CVE (Common Vulnerabilities and Exposures): The Common Vulnerabilities and Exposures (CVE) dictionary is a collection of common names) for publicly known information security flaws. CVE's standard IDs make it easier to communicate data between different network security databases and tools. CVSS (Common Vulnerability Scoring System): CVSS is an open framework for conveying IT vulnerability characteristics and implications. Its quantitative technique enables consistent, precise measurement. CWE (Common Weakness Enumeration): It is a method of identifying weaknesses in a group of people. CWE establishes a common vocabulary for discussing, identifying, and resolving. Conduct Root Cause Analyses: Spend more time searching for root causes of equipment failures instead of engaging in temporary fixes. C shell: The C shell was developed by Bill Joy for the Berkeley Software Distribution Its syntax is modelled after the C programming language. Controlling Jobs: Job control is a command shell capability that allows many commands to be run and managed from a single shell instance. A parent shell forks a child process to run a command without job control, sleeping until the child process terminates. Cybersecurity Risk Mitigation: Because the potential of a cyber-assault is almost certain, proactive cybersecurity risk mitigation is increasingly becoming the only option for enterprises. Cybersecurity Research: Cybersecurity Research is the area that is concerned with preparing solutions to deal with cyber criminals. With increasing amount of internet attacks, advanced persistent threats and phishing, Clickjacking: Users are tricked into clicking something other than what they think they're clicking by clickjacking. © Secbay Press
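To illustrate the job control described in the Controlling Jobs entry above (the command and job number are arbitrary examples):

sleep 300 &   # start a long-running command in the background
jobs          # list the shell's background jobs
fg %1         # bring job 1 to the foreground (Ctrl+Z suspends it again)
bg %1         # resume the suspended job in the background
kill %1       # terminate job 1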


Customer-based SLA: Applies to all services contracted by a customer, a group of customers, or the same business area.
Control plane: This plane carries signaling traffic originating from or destined for a router.
Centralized model: All desktop instances are stored in a single server, which requires significant processing power on the server.
Containerization: Containerization is a technique in which the kernel allows for multiple isolated user-space instances. The instances are known as containers, virtual private servers, or virtual environments.
Certificate Management: Certificate management is the process of managing digital security certificates. Certificate authorities are responsible for certificate management and serve as a registration authority for subscriber certificates.
Cryptography: Cryptography is the use of codes to secure information and communications. Only the individual to whom the information is directed can understand and process it.
Confidentiality: Only the individual for whom the information is meant has access to it, and no one else.
Digital Forensics: The process of preserving, identifying, extracting, and documenting computer evidence. It is the science of extracting information from digital media.
Digital Evidence: Any type of data stored on and gathered from any electronic storage device is referred to as digital evidence. Digital evidence can also be recovered from wireless networks and random-access memory.
Documentation: This method necessitates the creation of a record of all visible data. It aids in the recreation and analysis of the crime scene. It entails photography, sketching, and crime-scene mapping, as well as adequate documentation of the crime scene.
Database Forensics: Database forensic experts look into any database access and report any changes to the data. Database forensics can be used to investigate large-scale financial crimes and verify business contracts.
Disk Forensics: The science of extracting forensic evidence from digital storage media such as hard disks, USB devices, CDs, DVDs, flash drives, and floppy disks is known as disk forensics.
Disk analysis: Analysts who specialize in disk forensics ensure that all pertinent data is retrieved, processed, and presented as evidence.
Directories: Directories are used to organize both unique and common files.
Destroy phase: Furthermore, we would need to delete data from the production process and then sanitize the media.
Dr.Fone: Dr.Fone was the first company in the world to offer iOS and Android data recovery software, and it has over a decade of expertise providing top-rated software and services to individuals.


Denial-of-Service: Denial-of-Service (DoS) attacks prevent a company or organization from using its own resources. In a typical DoS attack, the culprit floods the target network with invalid authentication requests or pings.
Database models: Describe relationships between data elements, are used to represent the conceptual organization of data, and are formal methods of representing information.
Distributed Database models: A client-server type of database located on more than one server, distributed in several locations. Synchronization is accomplished via a two-phase commit or replication methods.
Data dictionary: A data dictionary is a list of names, definitions, and properties for data elements in a database, information system, or research project.
Data Marts: Often a regional collection of databases.
Data mining: Data mining is a technique for extracting and detecting patterns in huge data sets that combines machine learning, statistics, and database systems.
Data Portability: The ability to transfer data from one system to another without being required to recreate or re-enter data descriptions or to significantly modify the application being transported.
Data Loss Prevention (DLP): Involves monitoring data as it exits the production environment; the tools that do this are commonly referred to as DLPs.
DNS Poisoning: Attackers compromise DNS servers in order to forward website users to a different, attacker-controlled website.
Database scanners: Database vulnerability scanners identify the weak points in a database so as to prevent malicious attacks.
Data Security on Wireless Networks: All that is needed to identify a wireless network in close proximity is a wireless-enabled computer; there is no way to completely hide your networks.
Data Encryption: WPA and WPA2 are both viable options, but longer and more complicated passwords give better security. If you are transmitting business or personal data, encryption should be considered necessary.
Disaster recovery strategy: A disaster recovery plan (DRP), also known as a disaster recovery implementation plan or an IT disaster recovery plan, is a documented policy and/or process that helps an organization execute recovery processes in the event of a disaster.
Dash shell: The Debian Almquist shell is the default shell script interpreter in Ubuntu. While bash is the default login and interactive shell, dash is used to run system processes because it is much lighter than bash.
Data Caching: Mobile devices use caching when interacting with web apps, and attackers attempt to exploit these data caches.
DNS Attacks: DNS is a service that converts host names to IP addresses, so users no longer need to remember web site IP addresses. Reverse lookups are also available through DNS.
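To illustrate the forward and reverse lookups mentioned in the DNS Attacks entry, here are a few representative commands; the domain and IP address below are only placeholders.

    # Forward lookup: resolve a host name to an IP address
    nslookup example.com

    # Reverse lookup: resolve an IP address back to a host name
    # (8.8.8.8 is used purely as a well-known example address)
    nslookup 8.8.8.8

    # dig shows the same information in more detail
    dig example.com A
    dig -x 8.8.8.8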


Domain Hijacking: In a domain hijacking attack, an attacker changes the registration of a domain name without the owner's permission. Social engineering tactics are frequently used by attackers to get illegal access to the domain owner's email account.
Driver Manipulation: Drivers are used by operating systems to interact with hardware and software components. An application may need to support an older driver on occasion.
DLL Injection: A DLL is a compiled set of code that may be used by a program without having to recreate it. Most programming languages, for example, include math-based DLLs.
Dissemination: The active distribution of all types of information to the users or audiences that need it; see Intelligence and Security Informatics for more information.
Digital Certificate: A digital certificate is a file that contains a variety of data, including identifying information, a serial number, and expiration dates. It also contains the digital signature of the certificate authority that issued the certificate.
Email Forensics: Analysts who specialize in email forensics extract valuable information from emails. Senders and receivers, the content of the messages, time stamps, sources, and metadata are all examples of this information.
Evasi0n7: Evasi0n7 is a jailbreak tool for iPhone, iPod touch, iPad, and iPad mini devices running iOS 7.0 to 7.0.6 (devices that have been updated Over the Air [OTA] should be restored with iTunes first).
Evidence Collection: When it comes to tracing the attacker and comprehending the attack's procedure, evidence collection is important. As a result, incident responders should know where to look for evidence and how to gather it.
Escalated privileges: Attackers engage in privilege escalation attacks that take advantage of design flaws, programming mistakes, bugs, etc.
Encryption: Encoding data to hide it from anyone who isn't authorized to see it. A decryption key can be used to access or decrypt encrypted data.
Employer Background Checks: Background checks assist you in assembling a team you can rely on, while also limiting risk and safeguarding your company's brand.
Exit Interview: An exit interview is a survey that is undertaken with an employee when he or she departs a company.
Forensic Data Analysis: Structured data is analyzed in this branch of forensics. The primary role of data analysts is to investigate financial crimes and fraud.
File Editor: The nano command is used to edit text files (command: nano).
File Permission / Access Modes: File ownership is an important feature of Unix that allows for safe file storage. Owner permissions dictate what activities the file's owner is allowed to take on the file.
Firewall: A firewall is a network security device that monitors incoming and outgoing network traffic and allows or disallows data packets according to a set of security rules.
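A minimal sketch of the kind of rule set the Firewall entry describes, using iptables (defined elsewhere in this glossary); the port choice and rule order are illustrative only, not a recommended production configuration.

    # Allow replies to connections the host itself initiated
    sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Allow inbound SSH (TCP port 22) -- illustrative rule only
    sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

    # Drop everything else arriving on the INPUT chain
    sudo iptables -A INPUT -j DROP

    # List the rules that are now in place
    sudo iptables -L -n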


Filter Table: Filter is the default table for iptables; if you don't define your own table, you'll be using the filter table.
Framing: A website is combined with iFrame components in HTML inside another web page.
Drive-by Downloading: The unintentional download of software from the Internet; this exploit affects Android.
Foreign Keys: Foreign keys are structured as a shared component in a database that connects two tables. A foreign key must always relate to another primary key.
File: A file is a computer object that holds data, information, settings, or commands that are utilized with a piece of software.
FedRAMP Program Management Office: Established within the GSA and responsible for the development of the FedRAMP program.
FIPS 199: FIPS 199 mandates that federal agencies evaluate their information systems in terms of confidentiality, integrity, and availability, assigning a low, moderate, or high impact rating to each system in each category.
Foreground Processes: They run on the screen and require user input; for example, office programs.
Facilitate communication: The service desk team or the service provider knows the customer's expectations.
Federation: A federated identity is a portable identity that can be used across businesses and domains.
Goals of FedRAMP: Accelerate the adoption of secure cloud solutions through reuse of assessments and authorizations; increase confidence in the security of the cloud.
Governance: The governance of FedRAMP is comprised of different executive branch entities that work in a collaborative manner to develop, manage, and operate the program.
General Services Administration (GSA): The GSA is in charge of collecting, storing, distributing, and disposing of personal property and supplies, as well as real estate acquisition, management, and disposal.
Host Based IDS: HIDS tools monitor the log files generated by your applications, creating a historical record of activities and functions that allows you to quickly search them for anomalies and signs that an intrusion may have occurred.
Hydra Password Attacks: Hydra is a parallelized password cracker that can attack a variety of protocols. It's quick and adaptable, and adding new modules is simple. Researchers and security consultants can use this program to demonstrate how simple it is to acquire unauthorized remote access to a system.
Hosted model: Desktops are maintained by a service provider. This model eliminates capital cost and is instead considered operational cost.
Honeypot: A honeypot is a security technique that generates a virtual trap in order to entice intruders. A purposely compromised computer system allows attackers to exploit weaknesses, allowing you to research them and enhance your security measures.
Hash Functions: A hash value with a fixed length is calculated from the plain text, making it impossible to reconstruct the plain text's contents. Many operating systems encrypt passwords using hash algorithms.
Identification: In the forensic process, it is the initial stage. What evidence is present, where it is held, and how it is stored are all part of the identification process. Personal computers, mobile phones, and personal digital assistants (PDAs) are examples of electronic storage media.
Incident Response: Incident response, also known as IT incident, computer incident, or security incident response, is a structured way of dealing with and managing the consequences of a security breach or cyberattack.
Incident Response Team: An incident response team (CSIRT) assists in reducing the impact of security risks by responding quickly and effectively. Due to the increasing number of security risks, companies must have a specialized incident response team to deal with them.
Indicators of Compromise (IOC): These detect potentially malicious behavior on a system or network; information security and IT workers can use indicators of compromise.
IP Tables (iptables): Packet filtering and NAT rules are managed by the iptables firewall. iptables is included in every Linux distribution.
Infecting the device: Mobile spyware infects Android and iOS smartphones differently.
Infection with iOS: iOS requires physical access to the mobile device. A zero-day operation like the JailbreakMe hack can also infect the device.
Installing a backdoor: Installing a backdoor requires admin access, which is obtained by rooting Android devices or jailbreaking Apple devices. Despite the anti-rooting and anti-jailbreaking measures put in place by device makers, mobile spyware may simply overcome them.
Improper SSL validation: Security lapses in an application's SSL validation process can allow attackers to bypass data security.
iOS Jailbreaking: Jailbreaking removes controls on the iOS system used on Apple devices; in simple language, it is the counterpart of rooting an Android device. It removes the device's dependence on Apple sources and allows you to use third-party programs that are not available in the official App Store.
iRoot: Like Root Genius, iRoot is another strong rooting program developed in China. You can root your Android phone or tablet with only one click.
iOS Device Hacking: Apple's mobile operating system, iOS, runs devices including the iPhone, iPad, and iPod Touch. Originally called iPhone OS, the name was changed to iOS with the launch of the iPad. The iPad has since been given a different OS.
iBoot Exploit: A jailbreak via iBoot permits access to the file system and the iBoot level. If a fresh boot-rom is in place, this sort of exploit can be semi-tethered.
Inheritance: Objects inherit attributes and behaviors from a super class.
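To illustrate the Hash Functions entry above, the commands below compute a fixed-length digest of a sample string; the sample password is obviously hypothetical, and either utility may be absent on a minimal system.

    # Compute a SHA-256 hash (echo -n avoids hashing the trailing newline)
    echo -n "MySamplePassword" | sha256sum

    # The same input always produces the same fixed-length digest,
    # but the original text cannot be reconstructed from it
    echo -n "MySamplePassword" | openssl dgst -sha256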


Identity Management: Identity management is the method of associating user rights with a given identity to grant individuals access to system services.
Initial: Development based on ad hoc effort. No procedures are in place and there is no assurance of consistency, thereby affecting software quality.
Interoperability: The capability to communicate, execute programs, or transfer data among various functional units under specified conditions.
IoT Devices: An IoT device is an Internet-connected gadget that can interact with other devices and networks. Depending on their design and functions, these gadgets may perform a wide range of activities.
inSSIDer: inSSIDer is comparable to the old NetStumbler tool, except that it has been updated and works with Windows XP, Vista, and Windows 7. The program detects wireless networks and reports on their type, maximum transfer rate, and channel utilization.
IP addresses: The Internet Protocol (IP) is a set of standards and specifications for producing and transferring data packets, or datagrams, across networks. IP is a component of the Internet protocol suite's Internet layer.
Impact of Incidents: The impact and immediacy of an incident are frequently used to evaluate its priority. The term 'impact' refers to the scope of the incident as well as the potential damage caused by the incident before it can be remedied.
Improve PM Processes: An effective PM program can drastically increase MTBF.
Identity management: Identity management ensures that you are who you say you are and stores information about you. An identity management database stores information about your identity.
Integer Overflow: An integer overflow attack tries to use or produce a numeric value that an application can't handle. As a result, the application produces incorrect results.
Identify the hazards: The first step in creating your risk assessment plan is determining what hazards your employees and your business face.
Intranet: An intranet is a private network within a corporation that allows employees to safely share information.
Integrity: Information cannot be modified in storage or during transfer between the sender and the intended receiver without the modification being detected.
Jump Bag - Forensic Toolkit: Response times should be as short as feasible when an issue occurs, because with every minute that passes, a threat artifact is lost or the attackers cause additional harm.
Jamming: The goal of jamming (also known as network interference) is to disrupt the network. Interference is almost unavoidable due to the characteristics of wireless communication.
John the Ripper: A fantastic program for breaking passwords using well-known brute-force techniques such as dictionary and custom wordlist attacks. It can also be used to crack hashes or passwords for zipped or compressed data, as well as locked files.
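A minimal illustration of the dictionary-attack usage described in the John the Ripper entry; the file names are hypothetical placeholders, the exact options can vary between John builds, and such tools must only be used against systems you are authorized to test.

    # Run a wordlist (dictionary) attack against a file of password hashes
    # (hashes.txt and wordlist.txt are hypothetical file names)
    john --wordlist=wordlist.txt hashes.txt

    # Display any passwords that were cracked
    john --show hashes.txt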


Job rotation: A policy that requires employees to rotate into various jobs, or at least rotate some of their responsibilities. It provides numerous advantages for both the person and the organization.
Jump box: A jump box, or jump server, is a server that is used to access devices that have been placed in a secure network zone such as a DMZ.
Lead Investigator: Collects and analyzes all evidence and implements rapid system and service recovery.
LINUX+: The UNIX operating system is a collection of applications that serve as a conduit between the user and the computer. The kernel is the set of programs that allocates system resources and coordinates all aspects of the computer's internals.
ls: You can use the ls command to list all the files or directories available in a directory (command: ls).
Linux Kernel: The core of the Android architecture is the Linux kernel. It manages all available drivers, such as display controllers, camera ports, etc.
Leveraging and Authorization: Shows the architecture, including the domain architecture, with the existing domain hierarchy, names, and addressing scheme; server roles; and trust relationships.
Malware infections: Malware can infiltrate your computer by exploiting known software flaws. A vulnerability is a flaw in your program that allows malware to gain access to your computer, for example when you visit a compromised website.
Mangle Table: iptables' mangle table is for specialized packet alteration; it alters QoS bits in the packet header.
memdump: The memdump plugin is run against the memory image, with the output directed to the home folder.
Message manipulation: Changing a packet header address to route it to an unexpected destination or modifying user data are examples of message manipulation.
Masquerade attacks: Masquerade attacks involve an entity adopting a false persona in order to obtain or manipulate information and, as a result, gain unjustified privilege status.
Manage patch application: A solid patch management policy will include regular tracking of patch releases for all software.
Monitoring: Packet capture and export of data to text files for further processing by third-party tools.
Mobility: Users of wireless networks can connect to existing networks and then roam freely. Because the phone connects the user via cell towers, a mobile phone user can travel kilometers in a single call.
MAC address: A unique identifier assigned to a network interface controller (NIC) for use as a network address in communications inside a network segment is known as a media access control (MAC) address.
Mean Time Between Failures (MTBF): Mean time between failures (MTBF) is a prediction of the time between the innate failures of a piece of machinery during normal operating hours. MTBF helps businesses understand the availability of their equipment.
Mean Time to Repair (MTTR): Mean time to repair (MTTR) is a maintenance metric that calculates the average amount of time required to troubleshoot and repair faulty equipment. It reflects how rapidly an organization can respond to and repair unplanned breakdowns.
MTTR formula: MTTR = total corrective maintenance time / number of repairs (a worked example follows this list).
Multi-factor authentication: When you use multi-factor authentication, your IAM provider requires more than one type of proof that you are who you say you are. A common example is the requirement of both a password and a fingerprint.
Man-in-the-Browser: A form of proxy Trojan horse that infects web browsers that are vulnerable. Browser session data can be captured by successful man-in-the-browser attacks.
Memory Leak: A memory leak is a flaw in a computer program that leads it to consume more and more memory as time goes on. In the worst-case scenario, the application can take so much memory that the operating system fails.
Malvertising: The act of incorporating malicious ads on trusted websites, which results in users' browsers being inadvertently redirected to sites hosting malware.
Malware: A computer program that is covertly placed onto a computer with the intent of compromising the privacy, accuracy, or reliability of the computer's data, applications, or operating system.
Metasploit: One of the most popular exploitation frameworks.
Mutual agreement: The SLA is mutually accepted by the customer and the service provider.
Network Forensics: The goal of network forensics is to track, record, and analyze all network activity. In the event of security breaches, cyberattacks, or other cyber incidents, network specialists examine traffic and activities.
NetFlow Analyzers: Actual traffic across border gateways and within a network is examined in this type of analysis. A specific thread of activity may be tracked using NetFlow, as can the protocol utilized on your network or the assets that are talking with one another.
nano: The nano command is used to edit text files (command: nano).
NAT firewalls: NAT gateways hide individual IP addresses. As a result, attackers scanning a network for IP addresses are unable to collect detailed information. In the same way that proxy firewalls function as an intermediary between a group of computers and outside traffic, NAT firewalls do the same.
NAT Table: The NAT table is essential for your internet connection. The network address translation (NAT) table is responsible for allowing devices on a private network to connect to a public network, such as the internet.
No encryption / weak encryption: Applications that transmit data unsecured or with weak encryption are open to attack, for example session hijacking.
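A worked example of the MTTR formula above, assuming (hypothetically) 12 hours of total corrective maintenance time spread across 4 repairs.

    # MTTR = total corrective maintenance time / number of repairs
    total_maintenance_hours=12
    number_of_repairs=4

    # Prints "MTTR: 3 hours"
    echo "MTTR: $(( total_maintenance_hours / number_of_repairs )) hours"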


Network Based Attacks: Network-based attacks aim to compromise network security by intercepting and altering network traffic or spying on it. These can be active attacks, in which the hacker alters network activity in real time.
Network Based IDS: A network-based intrusion detection system (NIDS) is used to monitor and analyze network traffic to protect a system from network-based threats.
Nmap: Nmap, or Network Mapper, is a free and open source network discovery and security auditing application. Use it for activities like network inventory, service upgrade schedule management, and monitoring host or service uptime.
Next Generation Firewall: A next generation firewall offers security intelligence to enterprises and enables them to apply the best-suited security controls at the network perimeter.
Non-Disclosure Agreement: A non-disclosure agreement (NDA) is a legally binding contract that creates a confidential relationship. The party or parties signing the agreement agree that sensitive information will not be shared with anyone else.
Network Architecture: The way network devices and services are arranged to serve the connectivity demands of client devices is referred to as network architecture. Switches and routers are common network hardware.
Non-repudiation: The creator/sender of information cannot later deny his or her intention to convey the information.
Network-based detection point: Examines network traffic to identify threats that generate unusual traffic flows, such as distributed denial-of-service attacks, specific forms of malware, and policy violations.
New Directory: The mkdir command is used to create a new directory under any directory (command: mkdir).
Objectives of Computer Forensics: Helps to recover, analyze, and preserve computers and related materials in such a manner that the investigating agency can present them as evidence in a court of law. It also helps to establish the motive behind the crime and the identity of the main culprit.
Ordinary Files: An ordinary file is a file that holds data, text, or program instructions on a computer system.
Optimized: Integrated plans for continuous process improvement.
Office of Management and Budget (OMB): The governing body that issued the FedRAMP policy memo, which defines the key requirements and capabilities of the program.
Objectives of the SSP: The SSP is the main document in which the CSP describes all the security controls in use on the information system and their implementation. It provides a global view of how the system is structured.
OpenVAS: OpenVAS is a vulnerability scanner with a lot of features, including unauthenticated and authenticated testing and support for various high-level and low-level internet and industrial protocols.
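To illustrate the Nmap entry above, here are two representative scans; the target host name is a placeholder, and scans should only be run against systems you are authorized to test.

    # Basic scan of the most common TCP ports on a single host
    nmap scanme.example.com

    # Service/version detection on a targeted set of ports
    nmap -sV -p 22,80,443 scanme.example.com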


OpenID Connect (OIDC): OIDC is a newer open standard that allows users to log in to their applications using an identity provider. It is extremely similar to SAML; however, it is built on the OAuth 2.0 standards and uses JSON to communicate data rather than XML, as SAML does.
Preservation: Data is segregated, safeguarded, and preserved during this phase. It includes restricting people from using the digital device in order to prevent tampering with digital evidence.
Presentation: The process of summarizing and explaining conclusions is completed in this final step. It should, however, be written in layman's terms with abstracted terminology, and the precise details should be referenced for all abstracted terms.
Protect in Phases of the Incident Response: Developing and implementing suitable safeguards for better delivery of critical infrastructure services.
passwd: Passwords are required on all Unix systems to ensure that your files and data remain private and that the system is safe from hackers and crackers. The passwd command is used to change your password.
Permission Indicators: When using the ls -l command, various information related to file permissions is displayed at the start of each listing.
Packet-filtering firewalls: Examine packets and prevent them from passing if they do not comply with a set of security rules. The source and destination IP addresses of packets are checked by this form of firewall.
Performance Audit: A systematic evaluation of a cloud system, measuring how well it conforms to a set of established performance criteria.
Platform Libraries: The platform libraries comprise several fundamental C/C++ libraries and Java-based libraries, such as Media, Graphics, Surface Manager, OpenGL, etc.
Passive Network Threats: A passive attack is defined by the unmodified interception of messages. The network data and systems are unaffected.
Packet Filtering: Allowing traffic only to certain port numbers. Packet filtering refers to a firewall examining packet addresses and port numbers; it ensures that traffic from IP addresses known to be malicious is kept out.
Project Initiation: Decide on the conceptual definition of the project, identify security requirements, and perform an initial risk analysis (analyze potential threats).
Polymorphism: The capability of different objects to respond differently to the same message.
Privacy Threshold Analysis (PTA): A Privacy Threshold Analysis (PTA) is a questionnaire used to determine whether an information technology system contains Personally Identifiable Information (PII).
Platform vulnerabilities: Exploiting vulnerabilities in the OS, server software, or app modules running on the web server.


Penetration Testing: You can carry out penetration tests against resources on your AWS account, per the provider's published penetration testing policies and guidelines.
Persistent Threats: Attacks exploiting security vulnerabilities for financial gain and criminal agendas continue to dominate headlines.
Packet Sniffing: Networks are intended to facilitate and expedite information traffic. To do this, information is delivered in packets over both wired and wireless networks.
Policy-based control: Users should be given only the permissions they need to complete their responsibilities. An IAM should be developed to grant users access to resources depending on their job function, department, or any other relevant attributes.
Password Reuse: An old third-party database is compromised and your users are still using a compromised password, or users are using the same password across many accounts.
Privilege Escalation: When attackers first exploit a system, they generally get access with limited privileges. To get greater access, they employ a variety of privilege escalation strategies.
Pharming Attacks: A pharming attack is another type of attack that manipulates the DNS name resolution process. It aims to compromise either the DNS server or the DNS client.
Platform as a Service (PaaS): The capability to deploy applications in the cloud; languages, libraries, and services, with some control by the user.
Password Hashes: Most systems do not save an account's real password. They instead save a hash of the password. Hash attacks target the hash of a password rather than the password itself.
Privileged user: A typical privileged user is a system administrator who is in charge of administering an environment, or an IT administrator who is in charge of certain software or hardware.
Privilege Management: When users are given the ability to do something that typically only an administrator can do, they have been granted privileges and their account becomes a privileged account.
Public Key Infrastructure: A technology used in the digital world to authenticate individuals and devices. The basic idea is for one or more trusted parties to digitally sign documents confirming that a specific cryptographic key belongs to a specific user or device.
PKI Authentication: When a user attempts to authenticate their identity to a server, the server creates and sends random data to the user. The user encrypts the data with their private key and sends it back to the server.
PKI Digital Signing: The first step is to compute a hash value for the document to be signed. The hash value is produced using an algorithm, and the output is a digital fingerprint of the file's contents.
Remote Terminal Unit (RTU): An RTU is a microprocessor-controlled electronic device. It uses telemetry data and messages from its master supervisory system to control connected objects. The terms remote telemetry unit and remote control unit may also be used for RTU.


Respond in Phases of the Incident Response: Developing and implementing strategies to respond to detected incidents.
Recover in Phases of the Incident Response: Developing and implementing a plan to restore business operations after the occurrence of an incident.
Remove Files or Directories: To remove a directory and all its contents, including any subdirectories and files, use the rm command with the -r (recursive) option.
RAW Table: iptables' raw table is for configuration exemptions. The raw table has built-in chains.
Redsn0w: Redsn0w is a jailbreaking tool for iOS devices that works on both OS X and Windows.
Retirement/Disposal: Properly dispose of the system, move data to another system or discard it accordingly, and repeat the full cycle with a new project initiation.
Relational Database models: A database in the form of tables (rows and columns) related to each other. It stores data in such a way that a data manipulation language can be used independently on the data, and it uses a database engine (Oracle, Sybase, etc.).
Repeatable: A formal structure has been developed, including quality assurance; however, no formal process models have been defined.
Regulation: Many government and industry regulations, such as HIPAA-HITECH, PCI DSS v2, and Sarbanes-Oxley (SOX), mandate rigorous vulnerability management practices.
Risk Management: Mature organizations treat vulnerability management as a key risk management component. Organizations that follow mature IT security principles understand the importance of risk management.
Risk Assessment: Larger issues should be articulated in risk-based language (e.g., ISO 27005), with the impact expressed in terms of business impact. Considerations relating to risk reduction and policy compliance should be included in the business case for any remedial action.
Remediation: Security is only improved when mitigation activity is carried out as a result of the baseline and monitoring functions. Cross-organizational processes and workflows aid in the remediation process.
Root Cause Analysis: It is critical to examine security and vulnerability assessments. Changes to user administration and system provisioning processes may be required to eliminate the fundamental cause of security flaws.
Radio Frequency (RF): Radio frequency is abbreviated as RF. Any frequency in the electromagnetic spectrum that is associated with radio wave propagation is referred to as RF. RF field propagation is used in many wireless technologies.
Rogue access point: Any unauthorized access point (AP) on a network is referred to as a rogue access point. It might be introduced by an attacker or result from a mistake on the part of an employee.
RTO: The RTO is the maximum tolerable length of time that a computer, system, network, or application can be down after a failure or disaster occurs. An RTO is measured in seconds, minutes, hours, or days. It is an important consideration in a disaster recovery plan (DRP).


rbash: The restricted bash shell provides minimal functionality to the person or script running in it.
Risk Mitigation: The application of security policies and processes to lower the overall risk or impact of a threat is known as risk mitigation. Risk mitigation in cybersecurity can be divided into three components.
Rainbow Attack: Rainbow table attacks are a sort of attack that tries to figure out the password by looking at the hash. A rainbow table is a large database of hashes that have already been generated.
Replay Attack: A replay attack occurs when an attacker replays data from a communication session that has previously occurred, for example when a third party tries to imitate a client who was present during the initial session.
Role-Based Awareness Training: If you have someone who is new to the organization, you might want to provide some role-based security awareness training. This is typically a specialized sort of training that is tailored to the role that the individual user plays with a given program or data set.
Record your findings: If you have more than five employees in your office, you are required by law to write down your risk assessment process. Your plan should include the hazards you've found, the people they affect, and how you plan to mitigate them.
Risk Register: The risk register records and keeps track of practically all of the hazards that have been recognized and are related to the project. As a result, it keeps track of risks, including their status and history.
Remote virtual desktops model: An image is copied to the local machine, which means a constant network connection is unnecessary.
Rogue Access Points: Attackers set up physically unauthorized wireless access points that enable them to access a secure network by hijacking network users' connections.
Role-based access control (RBAC): Commonly used in networks to simplify the process of assigning new users the permissions required to perform a job role.
Revocation: Removing all unneeded or unnecessary applications and services from all computing devices. There is usually no reason for every workstation in your firm to be running a mail server, FTP server, and DNS server.
Security Analysts: Professionals in this field work to restore the network. In an IR team, there are two categories of security analysts.
Shell: The shell is the application that handles your requests. It parses and invokes commands; all commands in the shell follow a standard syntax.
System Bootup: If you have a computer with the Unix operating system installed, all you have to do is turn it on to get it up and running. The system starts booting up as soon as you turn it on, and then it prompts you to log in.
Special Files: Access to hardware such as hard disks, CD-ROM drives, modems, and Ethernet adapters is provided by several special files.


Software or hardware firewalls: A software firewall is a program that is installed on each computer, whereas a physical firewall is a piece of hardware that is located between your network and the gateway.
Sandboxing: Sandboxing is a cybersecurity practice in which code is executed, observed, and analyzed in a secure, isolated network environment that simulates end-user operating environments.
SSLStrip: A MITM-type attack that leverages SSL/TLS implementation flaws.
Session Hijacking: An attacker steals a valid session ID, for example via DNS poisoning.
SQL Injection: SQL injection is a type of online security flaw that allows an attacker to interfere with a web application's database queries. It allows an attacker to see data that they wouldn't ordinarily be able to see.
Sandboxing Attacks: Sandboxing helps to safeguard systems and users by restricting the resources available to the application on the mobile platform.
Sniffing: An attacker reads, monitors, or captures complete packets of data passing between a client and a server. An unencrypted network packet intercepted by a hacker might inflict significant damage on the company or institution.
Spoofing: Spoofing is the practice of a bad actor impersonating a legitimate entity or someone they are not. It usually refers to a computer faking an IP address, ARP entry, or DNS server.
Social engineering: Social engineering is a manipulation technique that exploits human error to gain private information, access, or valuables; it is widely used in cybercrime.
Sensitive data storage: Some applications use a poor security design in their database, such that attackers may hack and steal sensitive user information.
Software as a Service (SaaS): Applications accessible from client devices, a web browser, email, or mobile, with limited configuration.
SMS phishing: A social engineering technique used to target victims through SMS messages; it may use different motivational techniques, like scarcity or fear, to entice the victim to perform an action, like clicking on a malicious URL within the message.
Spyware: Secretly installed malicious code that is intended to track and report the usage of a target system or collect data. Such data may include web browsing history, personal information, user names and passwords, and much more.
Security Assessment: The security assessment process uses a standardized set of requirements in accordance with FISMA, using a baseline set of NIST 800-53 controls to grant security authorizations.
SSP (System Security Plan): A formal document that summarizes an information system's security needs and details the security procedures in existence or intended to achieve those criteria.
System Inventory: A comprehensive inventory of all system components: hardware; software; network ports, protocols, and services.
Single Point of Failure (SPOF): A potential risk presented by a weakness in the design, implementation, or configuration of a circuit or system in which a single fault or malfunction causes the entire system to fail.
Secure access: Securing at the identity level is critical; an IAM should ensure that the identities of those logging in are confirmed. This could include implementing MFA and adaptive authentication.
Shell: A macro processor which allows for interactive or non-interactive command execution.
Scripting: Allows for the automatic execution of commands that would otherwise be executed interactively one by one.
Secured Protocols and Algorithms: Research into protocols and algorithms is a significant phase for the consolidation of cybersecurity at a technical level. It defines the rules for information sharing and processing over cyberspace.
Separation of Duties: The basic principle of separation of duties is that no individual person, role, or group should be able to execute all parts of a transaction or process.
Streamline Data Collection: Again, the most accurate equipment data stems from using sensory meters with CMMS platforms.
Systems Administrators Policy: Administrators have privileges and tasks that may require them to come into contact with sensitive, restricted, or personal information while doing their jobs.
Single-Loss Expectancy: The expected monetary loss each time an asset is at risk is referred to as single-loss expectancy (SLE). It is most typically used during risk assessment and aims to assign a monetary value to each individual threat.
Segmentation: One of the best ways to protect sensitive resources is to utilize network segmentation. While there is no limit to the number of zones you can create, most networks have a standard set of zone types.
System Isolation: Systems can be isolated from other systems through control of communications with the device.
Software-Defined Networking: In a network, three planes typically form the networking architecture; the management plane is the plane that administers the router.
Single Sign-On (SSO): In an effort to make users more productive, several solutions have been developed to allow users to use a single password for all functions and to use these same credentials to access resources in external organizations.
Threat Researchers: A cyber threat researcher is a member of a global team that tracks and analyzes threat campaigns.
Triage Analysts: They keep digital evidence preserved in order to conduct a forensic investigation of the incident.
Threat actor: An individual or group that seeks to harm a business or organization and is motivated by financial, personal, or political gain.


Target Value: You may see all the available firewall rules on your system by running iptables --list; if no firewall rules are defined, the output shows only the empty default chains.
TCP-level filtering: Reassembling and inspecting all the packets in each TCP session. It can also be used for DNS filtering and traffic encryption, and it offers Virtual Private Network (VPN) capability.
Tripwire: Tripwire provides a comprehensive, automated view of operational, regulatory, and security compliance throughout the data center's dynamic environment.
Tailgating: Tailgating, also known as piggybacking, occurs when a hacker follows someone with an approved access card into a protected building. This attack assumes that the individual with valid access to the facility is kind enough to hold the door open for the person behind them, presuming they are permitted to be there.
Threats: Threats can come from anywhere and are becoming more sophisticated all the time, making it increasingly difficult to be completely prepared for data breaches.
Threat Intelligence: A proactive response mechanism in place to deal with cyber threats. Research and development activities are already underway at various research organizations to fight threats in cyberspace.
Threat Protection: Through negligence or malicious intent, employees and third parties with stolen credentials can leak or steal sensitive data from cloud services. To help pinpoint anomalous user behavior, CASBs can compile a comprehensive view of regular usage patterns and use it as a basis for comparison.
URL Hijacking: When someone buys a domain name that is similar to a valid domain name, this is known as typosquatting (also known as URL hijacking). People frequently do so with malice in mind.
User Policy: A user policy does not absolve someone of his or her responsibility to become acquainted with the organization's security policy and to uphold it by adhering to all security measures.
Virus: Malicious code that spreads from computer to computer by attaching itself to other files. The code executes when the attached files are opened.
Vulnerability Assessment: The next stage is to evaluate the environment for known vulnerabilities. This is performed by regular analyses of the environment's vulnerabilities and configuration. Network-based vulnerability assessment (VA) has been the primary method employed to baseline networks, servers, and hosts.
Virtual machines: A virtual machine is a computer file that duplicates the behavior of a real computer. It can run as a distinct computing environment in a window, usually to run a different operating system.
Vulnerability management process: The process of finding, analyzing, treating, and reporting security vulnerabilities in systems and the software that runs on them is known as vulnerability management.
Vulnerability databases: A vulnerability database (VDB) is a platform for storing, updating, and disseminating information regarding computer security flaws that have been discovered.
Vulnerability analysis: The goal of this stage is to determine the source and root cause of the vulnerabilities discovered in the previous step. It entails determining which system components are responsible for each vulnerability.
Virtual: All the network segmentation components discussed thus far separate networks physically with devices such as routers and firewalls; a virtual local area network (VLAN) separates them logically.
Virtual Desktop Infrastructure (VDI): Hosts desktop operating systems within a virtual environment on a centralized server.
Wireless Forensics: Wireless forensics analyzes and investigates traffic in a wireless environment using specialized tools and procedures. When computer crimes or cyberattacks are perpetrated by breaching security protocols in wireless networks, this type of analysis is critical.
whoami: While you're logged into the system, you might want to know who you are; the easiest way to find out is to enter the whoami command.
Who is logged in?: Sometimes you might want to know who is logged in to the computer at the same time. There are three commands available to get this information, depending on how much you wish to know about the other users: users, who, and w.
Worms: Malicious code that spreads from computer to computer within a network on its own. It ultimately consumes network bandwidth.
Wireless Scanner: Wireless vulnerability scanners are used to identify rogue access points and also to validate that a company's network is securely configured.
Wireshark: Wireshark is a free and open source network protocol analyzer. It allows users to examine data traffic on a computer network in an interactive manner. The development effort was formerly known as Ethereal.
WLAN: A WLAN is a network that connects two or more computers using a wireless distribution mechanism. It uses high-frequency radio waves and an internet access point (AP).
WWAN: A WWAN is a WAN with the addition of wireless connectivity. It offers wireless coverage on a regional, national, and global scale.
WMAN: A WMAN is a wireless metropolitan area network capable of covering an entire city. It is larger than a WLAN but smaller than a WWAN. A WMAN is controlled by a corporate or public entity.
Wireless Networking: Connecting multiple devices without any physical connection, transferring data through radio frequency.
How Wireless Networks Work: Wireless networks use radio frequency (RF) technology; when an RF current is applied to an antenna, an electromagnetic field is generated.
Wireless Network Standards: Wireless technology, like the technology connected with it, has changed over time. It's always a good idea to perform your own research before investing in any of these technologies.
Wi-Fi Protected Access (WPA): For encryption, most modern WPA implementations employ a pre-shared key (PSK). To produce keys or certificates, WPA Enterprise uses an authentication server.
Wi-Fi Protected Access 2 (WPA2): A wireless security standard based on the 802.11i wireless security standard, which was completed in 2004, with the adoption of the Advanced Encryption Standard (AES). WPS attacks remain significant against contemporary WPA2-capable access points, as is the case with WPA.
Wireless Key Generator: Wireless Key Generator is a basic application for enhancing network security. It invites you to choose the security type and key strength that you use on your wireless network.
Zero-Trust Policy: A zero-trust policy implies that an organization's IAM solution is always monitoring and securing the identification and access points of its users. Zero-trust standards ensure that each employee is constantly identifiable and their access is regulated.
Zero-trust centralized administration: This could imply transferring people from other systems or synchronizing your IAM with other user directories.
zsh: The Z shell is a modern take on the bash family of shells. It offers neat improvements, like command spellchecks and suggested corrections.
Zero-Day Attacks: A zero-day vulnerability is a flaw or issue that has yet to be discovered. A zero-day attack takes advantage of a previously unknown flaw. Frequently, the vendor is unaware of the problem.


Index A Access control, 13, 85, 166, 246 Account lockout, 69 Account Mismatch, 294 Account mismatches, 17, 293 Active-passive, 193 Additional Information, 150 Advanced cloud services, 11, 26 Advantages, 81 Affinity, 37 Agent-based scans, 95 Alerting, 15, 179 Analysis, 29, 94, 174 Anomalies, 34, 176 Anti-affinity, 37 Antivirus, 12, 73, 80 API request limits, 267 Application, 7, 13, 17, 28, 34, 36, 51 Application delivery controller, 65 Application whitelisting, 70 Applications, 11, 16, 41, 85, 116, 119, 161 Applying Security Patches, 186 Appropriate responses, 180 ARP, 271, 288, 289 Artificial intelligence, 27 Asset management, 15, 185 Automation, 15, 17, 176, 189, 211, 225, 256, 277, 297, 298 Auto-scaling, 13, 45, 119, 200, 321

B Backup and restore policies, 15, 219 Backup objects, 15, 216 Backup targets, 15, 217, 225 Backup Targets, 225 Backup types, 15, 213 Ballooning, 161, 306 Bandwidth, 34, 36, 167, 203, 204, 256, 263 Baselines, 33, 176 Block-Level Access, 121 Blue-green, 51, 193 Buckets, 123 Builds, 12, 75


C Canary, 76, 193 Capabilities, 93, 224 Capacity and Utilization, 199 Certificate management, 12, 57, 244, 248 Chain of Custody, 108, 109 Change management, 15, 17, 184, 248, 294, 317, 324 Chargebacks, 195 Cloud access security broker, 13, 18, 89 Cloud Architecture and Design, 11, 20, 21 Cloud bursting, 46, 200, 302 Cloud DLP, 88 Cloud Element Security, 186 Cloud resources, 114, 115, 178, 324 Cloud within a cloud, 23 Cluster placement, 205 Clusters, 11, 18, 41, 42, 189, 299 Collaboration, 111, 231, 279, 281, 300 Colocation, 206, 230 Common Internet file system, 126 Common messaging methods, 179 Communications, 103, 110, 111 Community, 7, 21, 22 Compliance, 49, 83, 85, 223 Compression, 133, 203, 292, 304 Compute, 15, 18, 38, 48, 85, 114, 115, 116, 117 Configuration files, 216 Configuration management, 15, 185, 209, 302 Configuration management database, 185 Configure variables, 119 Connectivity, 16, 151, 196, 198, 234, 254 Containers, 11, 13, 15, 41, 118, 119, 202, 210, 258, 259, 260 Content delivery network, 141 Corporate Guidelines, 237 Costs, 195, 230 CPU, 14, 18, 31, 35, 36, 37, 38, 39, 48, 75, 135 CPUs, 35, 38, 117, 119, 155, 157, 161, 178, 201, 262 Create internal APIs, 212 Credentialed scans, 94

D Data breaches, 249 Data loss prevention, 13, 18, 65, 88, 292 Data Persistence, 202 Database dumps, 216 © Secbay Press


Secbay Press Database migrations, 14, 169 Deactivation and deletion, 54 Deduplication, 133, 203, 304 Deployment models, 11, 21 Deprecated features, 297, 298 Deprecations or end of life, 183 Destruction, 87 Development Systems, 194 Digital signatures, 82 Directory and identity services, 113 Directory services, 12, 55 DNS, 43, 55, 62, 68, 101, 140, 142, 168 Domain name service, 62 DR kit, 238 Dynamic allocation, 161 Dynamic allocations, 156 Dynamic and static routing, 148 Dynamic host configuration protocol, 137 Dynamic routes, 272

E Edge computing, 205 Elasticity usage, 196 Elasticity Usage, 198 Email, 111, 179, 180 Enable/disable alerts, 180 Encryption, 12, 63, 76, 78, 79, 251, 275 Eradication, 104, 106, 107 Event Correlation, 180 Exposed endpoints, 16, 252

F Failback, 16, 229, 313 Failovers, 16, 227 Feature Additions and Deletions, 184 Federation, 12, 18, 56, 57, 314 Fiber Channel, 121, 127, 128, 168, 279 File, 18, 34, 36, 71, 82, 110, 115, 122, 126 File integrity monitoring, 82 File storage, 115, 263 File subscriptions, 110 Filesystem, 76 Firewall, 78, 93, 120, 151, 268, 269, 273, 274, 317 Firewalls, 44, 64, 67, 78, 150, 151, 166 Firmware upgrades, 67 © Secbay Press


Secbay Press Forward proxy, 18, 68 Functional testing, 52

G Generic, 61, 64, 153, 154, 206 Generic routing encapsulation, 64 Geographical, 16, 205, 238 GPU, 14, 158, 159, 201, 291, 292 Graphics processing unit, 14, 158

H Hardened baselines, 12, 74 Hardware, 28, 29, 48 Horizontal scaling, 45, 200 Host-based IPS, 12, 18, 73, 74 Hot fixes, 97 Hotfix, 191, 313 Hub and spoke, 152 Hybrid, 21, 22, 124, 152, 163 Hyperconverged, 14, 136, 160 Hypertext transfer protocol secure, 63 Hypervisors, 11, 36, 187, 299

I Identity Management, 118, 137 Image Backups, 224 Incorrect IP space, 271 Infrastructure as a Service, 19, 24, 25, 30, 88, 111, 164 Infrastructure as code, 15, 207 Infrastructure components, 178, 208 Insecure ciphers, 250 Instance Types, 18, 201 Insufficient capacity, 17, 262 Integration, 48, 151, 154, 164, 210, 266 Integrity, 12, 77, 80, 223 Internet of Things, 26 IOPS, 14, 39, 125, 134, 203, 303 IP Address Changes, 295 IP address management, 142 Isolation, 108, 210

J Job validation issue, 17, 299


L Latency, 197, 199, 204, 256 Law Enforcement, 105 Least Privilege, 79, 243 Legal hold, 86 Licensing, 11, 17, 31, 32, 265, 309 Life-cycle management, 15, 182 Lightweight directory access protocol, 18, 56 Linux Permissions, 79 Load balancers, 43, 150, 275, 276, 299 Local Backups, 226 Location changes, 17, 296 Log scrubbing, 177 Logical access management, 53 Long-term support, 75 LUN Masking, 121

M Mandatory access control, 12, 18, 77, 323 Maximum transmission unit, 278 Mean time to recover, 222 Memory, 14, 29, 48, 127, 156, 157, 160, 161, 201, 293, 322 Migrations, 184, 266 Misconfigured templates, 16, 261 Monitoring, 15, 104, 164, 167, 176, 178, 202, 231, 256, 257, 273, 298 Multicloud, 23 Multifactor authentication, 12, 58, 314, 323 Multiprotocol label switching, 145 Multitenancy, 22, 24

N netstat, 286, 287 Network, 5, 6, 12, 14, 15, 16, 17, 18, 19 Network access control, 65 Network flows, 12, 66 Network requirements, 49 Network segmentation, 12, 60 Network time protocol, 62 Non-relational, 171, 172

O Object, 115, 123, 124, 168, 217, 218, 263 Object storage, 115, 123, 124, 263 Office 365, 26, 32 © Secbay Press


Secbay Press Open source, 207, 320 Operations and Support, 14, 20, 173 Orchestration sequencing, 15, 213 OS templates, 117 Overall utilization, 197 Oversubscription, 11, 37, 38, 157, 264

P PaaS or SaaS migrations, 164 Packet brokers, 66 Password complexity, 69 Password vaults, 213 Patch, 96, 186, 189, 191 Patching, 15, 17, 74, 96, 97, 99, 186, 189, 299, 309 Patching Tools, 96, 97 Patterns, 33 Peering, 152, 153, 269 Penetration testing, 52, 315, 323 Per user, 31 Performance degradation, 16, 255 Performance monitoring, 178 Performing a Server Migration, 168 Performing Rolling Updates, 195 Persistent storage, 119 Physical to virtual, 14, 162 Ping, 282, 283 Pktmon, 93 Platform as a Service, 24, 25, 27, 30, 111, 114, 164, 165 Playbook, 209, 238 Point-to-point, 64, 145 Point-to-site, 145 Port scanners, 91, 92, 253 Port Scanning, 92 Port Scanning Basics, 93 Private, 18, 21, 22, 79, 138, 144, 151, 272, 275, 276, 282, 302 Privilege, 16, 243, 244 Privileged access management, 53 Production Systems, 190 Protocols, 12, 14, 62, 121, 125, 128, 143, 144, 149, 250, 275 Provisioning resources, 13, 114 Proxy servers, 67 Public, 12, 18, 21, 59, 60, 79, 83, 117, 198, 230, 254, 256, 302, 323

Q
Quality assurance, 50, 194
Quality Assurance, 194
Quality of service, 279, 280, 281, 300, 322
Quality of Service, 146, 279, 280, 281, 300

R
RAID, 14, 18, 123, 129, 130, 131, 132, 189, 221, 312
Records management, 13, 86
Recovery, 104, 107, 220, 221, 224, 225, 230, 231, 232, 237
Redundancy, 43, 205, 222, 226
Redundant array of inexpensive disks, 14, 129
Regression testing, 52
Regulatory and Law Changes, 175
Remote Backups, 227
Replication, 16, 134, 135, 233, 292, 318
Requirements, 11, 16, 29, 46, 236
Resource utilization, 17, 178, 291
Restoration methods, 16, 223
Restore backups, 16, 232
Restore files, 224
Retention, 86, 219
Rightsizing, 15, 200
Risk register, 13, 98
Roadmaps, 182
Roles and Responsibilities, 103
Rollback, 189, 191, 192, 224
Rolling upgrades, 192
Root Cause Analysis, 109
Route, 138, 287
Routers, 42, 138, 149, 272, 280, 281, 300
Routes, 149, 151, 271
Routine operations, 211
Routing, 50, 61, 148, 149

S
Scalability, 11, 34, 36, 45, 146, 202, 210, 211, 223, 226, 228, 231
Scheduled Patch, 97
Scripts, 256
SDN, 14, 147, 154, 205
Secret management, 12, 60
Secure scripting, 15, 212
Secure Shell, 63, 64, 250, 319
Security assertion markup language, 59
Security groups, 16, 246
Security patches, 13, 96, 186
Segmentation, 13, 84, 85
Serverless, 27, 116, 169
Service availabilities, 95
Service models, 11, 18, 24
Service overload, 293
Service-level agreement, 49
Severity categorization, 175
Show backs, 196
Shutdowns, 211
Signature updates, 98
Simultaneous multi-threading, 155
Single function, 75
Single sign-on, 12, 59
Site-to-site, 145
Snapshot, 215, 224, 225, 316
Socket Based, 31, 32
Software as a Service, 24, 26, 88, 90, 110, 111, 114, 164, 166, 188
Software feature, 70
Software firewall, 12, 78
Software-defined network, 14, 154
Software-defined storage, 14, 137
Solution templates, 118
Spinning Disks, 124
Static routes, 272
Storage, 14, 15, 28, 34, 36, 39, 76, 97
Subnetting, 50, 148, 150
Subscription Based, 31, 32
Subscription services, 13, 110
Switches, 42
Syslog, 173, 174, 175
System load, 11, 33

T
Tabletops, 101
Tagging, 177, 195, 198
Tcpdump, 290
Testing techniques, 12, 51
Thick Provisioning, 133
Thin Provisioning, 134
Thresholds, 177
Tiers, 13, 124, 203, 276
Time Considerations, 300
tracert / traceroute, 283
Transport layer security, 63
Trend analysis, 11, 33, 308
Trending, 176
Tunneling proxy, 67

U
Unencrypted data, 249
Unsupported protocols, 16, 253
Upgrade methods, 15, 192
Upgrades, 183
Usability testing, 52
User density, 11, 32
User permissions, 12, 70
User quotas, 14, 135

V
Vendor lock-in, 163
Vendor-related issues, 17, 266
Version control, 15, 208
Version Update, 191
Versioning, 86
Vertical scaling, 46, 200
Vertically Scaling, 46
Virtual, 9, 14, 18, 36, 37, 43, 61, 85, 96, 98, 112, 113, 143
Virtual desktop infrastructure, 112
Virtual patches, 98
Virtual private networks, 14, 143
Virtual routing, 14, 148
Virtualization, 14, 28, 61, 143, 153, 154, 155, 162, 233
Voice over IP, 111, 280, 281, 301
Vulnerability assessment, 13, 93
Vulnerability scanners, 91
Vulnerability testing, 51

W
Warm, 235
Web Application Firewall, 65, 150
Whitelisting or blacklisting, 67
Windows Permissions, 80
Work Validation Issues, 299
Write once read many, 87

Z
Zoning, 120, 122, 304, 311, 319


Additional Resources:
In addition to the core guidelines for using this book effectively, there are supplementary resources that can further enhance your learning experience.

Check for Supplementary Online Resources: This textbook includes complimentary online materials such as practice exams, mock tests, flashcards, and discussion forums. These resources can significantly enhance your understanding and enable interaction with fellow students and instructors.

You can access the complimentary Cloud+ online practice exams, mock exams, and a readable version of the flashcards at no extra cost. To take advantage of this offer:

1. Fill out the form at https://examit.com/secbay-press-book-special-bundle/
2. In the form, provide your order ID and the coupon code "cloud+secbaypressuqs".
3. If your order ID and coupon code match, we will send you a free discount code within 48 hours.
4. Apply the discount code at checkout for the CompTIA Cloud+ practice, mock, and flashcard special bundle course.

Once you complete these steps, you will receive a full discount and will not need to pay anything to access these exams. Please note that the mock exam allows a limited number of attempts; refer to the website for more details.
