The Best of TaoSecurity Blog, Volume 1: Milestones, Philosophy and Strategy, Risk, and Advice [1/2, 1 ed.] 1952809010, 9781952809019

Since 2003, cybersecurity author Richard Bejtlich has been writing posts on TaoSecurity Blog, a site with over 15 million views.


English, 371 pages, 2020





Table of contents:
The Best of TaoSecurity Blog, Volume 1
Title Page
Copyright
Dedication
Epigraph
Preface
Chapter 1. Milestones
Introduction
First Post and Review of BGP Posted
Sguil User Six
Trying New Martial Arts School
Five Years Ago Today...
The Tao of NSM Is Published!
TaoSecurity Visits the Pentagon
Security Responsibilities
Bejtlich Joining General Electric as Director of Incident Response
Bejtlich Cited in Economist
TaoSecurity Blog Wins Best Non-Technical Blog at RSA
Inside a Congressional Hearing on Digital Threats
Become a Hunter
TaoSecurity Blog Wins Most Educational Security Blog
Bejtlich Books Explained
Latest Book Inducted into Cybersecurity Canon
Twenty Years of Network Security Monitoring: From the AFCERT to Corelight
Conclusion
Chapter 2. Philosophy and Strategy
Introduction
Prevention Always Fails
What is the Ultimate Security Solution?
Thoughts on Digital Crime
Further Musings on Digital Crime
How to Misuse an Intrusion Detection System
Soccer Goal Security
Further Thoughts on Engineering Disasters
More on Engineering Disasters and Bird Flu
Thoughts on Patching
Why Prevention Can Never Completely Replace Detection
Analog Security is Threat-Centric
Control-Compliant vs Field-Assessed Security
Of Course Insiders Cause Fewer Security Incidents
National Digital Security Board
Security Is Not Refrigeration
Response to Daily Dave Thread
Incorrect Insider Threat Perceptions
How Many Spies?
What Do I Want
Proactive vs Reactive Security
Taking the Fight to the Enemy
Threat Deterrence, Mitigation, and Elimination
FISMA Dogfights
Fight to Your Strengths
Vulnerability-Centric Security
Threat Model vs Attack Model
Kung Fu Wisdom on Threats
Change the Plane
Does Failure Sell?
Security: Whose Responsibility?
Response: Is Vulnerability Research Ethical?
On Breakership
Humans, Not Computers, Are Intrusion Tolerant
Speaking of Incident Response
Defender's Dilemma vs Intruder's Dilemma
Offense and Defense Inform Each Other
The Centrality of Red Teaming
The Problem with Automated Defenses
Incident Detection Mindset
Protect the Data Idiot!
Protect the Data from Whom?
Protect the Data -- Where?
Protect the Data -- What Data?
Cyberwar Is Real
Over Time, Intruders Improvise, Adapt, Overcome
Redefining Breach Recovery
Forcing the Adversary to Pursue Insider Theft
Know Your Limitations
Seven Security Strategies, Summarized
Conclusion
Chapter 3. Risk
Introduction
The Dynamic Duo Discuss Digital Risk
Calculating Security ROI Is a Waste of Time
Ripping Into ROI
SANS Confuses Threats with Vulnerabilities
Risk, Threat, and Vulnerability 101
Cool Site Unfortunately Miscategorizes Threats
BBC News Understands Risk
Organizations Don't Remediate Threats
Return on Security Investment
Risk Mitigation
Three Threats
Security Is Still Loss Avoidance
No ROI for Security or Legal
Are the Questions Sound?
Bank Robber Demonstrates Threat Models
No ROI? No Problem
Security ROI Revisited
Glutton for ROI Punishment
Is Digital Security "Risk" a Knightian Uncertainty?
Vulnerabilities in Perspective
More Threat Reduction, Not Just Vulnerability Reduction
Unify Against Threats
Risk Assessment, Physics Envy, and False Precision
Attack Models in the Physical World
Conclusion
Chapter 4. Advice
Introduction
CISSP: Any Value?
My Criteria for Good Technical Books
What the CISSP Should Be
Answering Penetration Testing Questions
No Shortcuts to Security Knowledge
Starting Out in Digital Security
Reading Tips
Security in the Real World
What Should the Feds Do
Why Digital Security?
US Needs Cyber NORAD
Controls Are Not the Solution to Our Problem
Answering Reader Questions
Getting the Job Done
Is Experience the Only Teacher in Security?
Why Blog?
Defining the Win
Advice to Bloggers
How Much to Spend on Digital Security
Partnerships and Procurement Are Not the Answer
Everything I Need to Know About Leadership I Learned as a Patrol Leader
Stop Killing Innovation
All Reading Is Not Equal or Fast
Answering Questions on Reading Tips
Five Qualities of Real Leadership
I Want to Detect and Respond to Intruders But I Don't Know Where to Start!
Understanding Responsible Disclosure of Threat Intelligence
Don't Envy the Offense
How to Answer the CEO and Board Attribution Question
My Federal Government Security Crash Program
Notes on Self-Publishing a Book
Managing Burnout
COVID-19 Phishing Tests: WRONG
When You Should Blog and When You Should Tweet
Conclusion
Afterword
Books By This Author
About The Author
Version History


The Best of TaoSecurity Blog, Volume 1

The Best of TaoSecurity Blog, Volume 1
Milestones, Philosophy and Strategy, Risk, and Advice

Richard Bejtlich

TaoSecurity Press

Copyright © 2020 Richard Bejtlich and TaoSecurity Press

Trademarked names may appear in this book. Rather than use a trademark symbol with each occurrence of a trademarked name, names are used in an editorial fashion with no intention of infringement of the respective owners’ trademarks.

This is a book about digital security and network monitoring. The act of collecting network traffic may violate local, state, and national laws if done inappropriately. The tools and techniques explained in this book should be tested in a laboratory environment, separate from production networks. None of the tools or techniques should be tested with network devices outside of your responsibility or authority. Suggestions on network monitoring in this book shall not be construed as legal advice.

The author has taken care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form, or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior consent of the publisher.

ISBN: 978-1-952809-00-2

I dedicate this book to my family.

“I propose to fight it out on this line, if it takes all summer.”

General Ulysses S. Grant, Spotsylvania campaign, 11 May 1864


Preface

The purpose of this book is to extract and highlight my favorite posts from the TaoSecurity Blog, from 2003 to mid-2020. While all of these posts are available for free online, without advertising, they have become increasingly difficult to find. As of mid-2020, TaoSecurity Blog features over 3,050 posts, and despite being hosted by Google’s Blogspot property, lacks sufficient search capability for the average visitor. If I have trouble finding my own posts, then I expect readers are suffering the same limitation.

In the course of doing research for one of my personal hobbies, namely the Martial History Team (martialhistoryteam.org), I’ve realized that books possess a permanence not found in blogs or other digital media. I’ve enjoyed looking at scans and other representations of books published in the late 19th and early 20th centuries. I’ve looked for books through the global WorldCat database and learned only a few copies exist, according to that repository. Nevertheless, they do exist, and in some cases I can request them via the InterLibrary Loan system. Long after blogs and other social media content has disappeared, books will remain in someone’s library, waiting to tell their story.

I posted my first blog entry on January 8, 2003. (I normally provide dates in military format, e.g., 8 January 2003, but Blogger uses the Month Day, Year format. Rather than change them all manually, I’ve adopted that convention here.) I had already been reviewing cybersecurity books from my personal library, having read and reviewed 24 books on Amazon in 2002. I decided to try promoting those reviews via a blog, which was a new form of communication in the early 2000s.

In early 2003 I was a consultant for Foundstone’s incident response team, working for Kevin Mandia. Foundstone encouraged its consultants to write, speak, teach, and otherwise get the message out about our cybersecurity capabilities.
The company had essentially been launched by one of the best-selling, if not *the* best-selling, cybersecurity books of all time: Hacking Exposed, first published in the fall of 1999. In 2002 I had contributed a case study on network security monitoring (NSM) for the fourth edition of Hacking Exposed, published in early 2003. Soon thereafter I began research for my first book, The Tao of Network Security Monitoring: Beyond Intrusion Detection, which Addison-Wesley (Pearson) published in the summer of 2004.

During the next 17 years I changed companies and roles but continued blogging. After McAfee bought Foundstone I moved to ManTech, where I worked on a team supporting a national offensive mission. From there I became a full-time independent consultant, offering NSM via TaoSecurity LLC. A blog post (featured in the Milestones chapter) in 2007 attracted the attention of my next boss, Grady Summers, who hired me to create and lead the General Electric Computer Incident Response Team (GE-CIRT). In 2011 I migrated to Mandiant, reunited with friends from Foundstone, and served as its first and only Chief Security Officer. After FireEye acquired Mandiant, I stayed for a few years, but eventually left and more or less took a break from the security scene for a year. My blogging suffered, as I was burned out and felt that I had already written what I needed to say. I included my blog post about burnout in this compendium. After joining Corelight as a strategist in mid-2018, I began blogging for them, and as a result did not often write for TaoSecurity Blog.

I composed this book by reviewing all 3,050+ blog posts on TaoSecurity Blog, tagging the “top candidates” for inclusion in this book with the “topcan” label. (That label is reachable at https://taosecurity.blogspot.com/search/label/topcan and applies to over 370 posts, approximately 12% of the total.) I then manually copied each post to a Google document and sorted them according to twelve categories, which form the chapters of the three volumes in this series of books.
Roughly speaking, those posts consist of 192,000 words, which, if they are a representative sample of the overall posts in the blog, would equate to about 1.6 million words in the entire TaoSecurity Blog corpus. I believe that is an exaggerated amount, as many of my early posts were much shorter, before the age of Twitter. Furthermore, I’ve omitted many of the technical posts, as I don’t believe that command line output or packet captures are representative of true “words” authored by me. Therefore, I estimate that I’ve probably written about 1 million words for TaoSecurity Blog over the 17 years of its existence.

This book, by and large, only incorporates the text from the selected posts. There are many cases where I originally linked to material created by others, and I did not want to infringe on any copyright holders’ rights in a commercial work such as this. I’ve also omitted all of the URLs mentioned in the posts. Given the age of the source material, most original URLs point to dead links, and I was not interested in tracking down replacements in the remote expectation that a reader might want to follow a source. If that is the case, however, each entry in this book includes a URL for the original blog post. Duly motivated readers can begin their research there, should they be so inclined.

In reproducing the posts in this format, I’ve chosen to fix some typos and make other minor obvious fixes. However, I have not altered my point of view from earlier posts, however cringe-worthy they might appear to me now. It’s clear that in my early days in the security world, I was heavily influenced by the so-called “hacker mentality,” and did not moderate my views until I had spent more time working for the victims of various intrusions. My point of view changed substantially after spending time with under-resourced, under-staffed, politically outmaneuvered security teams, whether I helped as a consultant or as a member of an enterprise security function. I’ve concluded that too many people, especially on the offensive side of the security equation, would be better served if they were responsible for the digital assets they seem so intent on breaking. Too many so-called “hackers” lack sympathy for the lives affected by their desire to break software.

Blog comments are not reproduced here either.
While a few posts over the years featured thoughtful commentary, most did not. At some point during the blog’s history I had to enable comment moderation. I was shocked by those who submitted comments that exhibited foul and racist language, personal attacks, and other disgusting content. The world is better off without a platform for their idiocy, although most of them have unfortunately migrated to Twitter. If for some reason you’re wondering whether a post in this book had comments, please follow the cited link.

I’ve added commentary to all of the blog posts. These comments indicate how I feel about the material, looking back from 2020. In some cases I note with despair the attitude I previously projected. In other cases I augment the message that I first promoted.

TaoSecurity Blog is one of the oldest cybersecurity blogs still around. Bruce Schneier’s Crypto-Gram newsletter began in 1998, and adopted the blog format a few years later. I can’t think of another author who began back then and is still publishing blog format material at this point. I’m almost in that category, as I blog mostly for Corelight these days, but once in a while an issue bothers me enough to require expression through a blog post at TaoSecurity Blog.

Expression is the key theme of my blog and this book. The purpose of my blogging, writing, and speaking has largely been to capture my thoughts on a topic. If others benefitted from the content, then that was a bonus. I was usually more interested in codifying my thoughts into a form worth reading in the future. Many times over the years I’ve referred back to my own material in order to learn how to accomplish a task or how to think about a certain problem.

I was happy to see the Security Bloggers association give TaoSecurity Blog the “Best Non-Technical Blog” award for 2009 and the “Most Educational Security Blog” award for 2012. The blog has also been featured in various lists over the years, but that is not something I’ve tracked.

As of April 2020, the five most popular posts, since January 2011 when Blogger began offering native statistics, are as follows:

60,622: Five Reasons I Want China Running Its Own Software (Mar 23, 2017)
58,225: Cybersecurity Domains Mind Map (Mar 21, 2017)
52,276: A Brief History of the Internet in Northern Virginia (Dec 23, 2015)
50,540: The Missing Trends in M-Trends 2017 (Mar 15, 2017)
49,448: Domain Creep? Maybe Not. (Dec 10, 2015)

Of those, only the first and fourth appear in my catalogue of selected posts. Popularity isn’t everything! I do not write to be popular, but I am pleased that some people find my blog helpful. Since January 2011, the blog has had over 15 million views, but I imagine the bulk of that audience has never read the earlier posts, many of which are foundational elements of my thinking not present in my published texts.

Some of the content has aged well, and some of it has not. I’ve tried to preserve material in this book that is useful, regardless of when it was written. For that reason, much of the “technical” material has been omitted. For example, the online TaoSecurity Blog features over 430 posts with the label “FreeBSD,” meaning they have something to do with that Unix-like operating system. Early in my career I was a keen FreeBSD user, and I often wrote about how to accomplish various tasks using that software. When I stopped writing about FreeBSD, some of my readers complained. I didn’t care. I wrote for myself, and if the complainers wanted that content, they could try their hand at writing. At this point, much of that material is no longer relevant, and if it might be to some readers, it remains a Google search or blog URL away.

In the process of assembling this volume and writing the commentary, I realized that there was far too much material for a single, big book. I therefore split the material into three volumes. In this book, I cover milestones, philosophy and strategy, risk, and advice. Future volumes will include network security monitoring, technical notes, research, China and the APT, current events, law, wise people, and history, with some degree of appendices and references as well.

And now, before turning to the blog, I leave the introduction with the immortal words attributed to Steve Jobs: “Real artists ship.” (Attribution discussed at https://quoteinvestigator.com/2018/10/13/ship/)

Richard Bejtlich
Northern Virginia, 2020

Chapter 1. Milestones

Introduction

This chapter contains posts that represented various moments where the course of my blogging life changed, usually for the better. It also contains entries that I felt marked a noteworthy moment for the blog, and perhaps did not strictly belong in another category.

First Post and Review of BGP Posted

Wednesday, January 08, 2003

Welcome to my blog! The main new content will be news of book reviews that I've had published at Amazon.com. In 2002 I read and reviewed 24 books on computer security topics. Most recently, these included The Art of Deception: Controlling the Human Element of Security by Kevin Mitnick and The Hacker Diaries: Confessions of Teenage Hackers by Dan Verton. My first published review of 2003 is a four star review of BGP (O'Reilly, Sep 2002) by Iljitsch Van Beijnum. You can see my book reading (and reviewing) schedule by visiting www.bejtlich.net/reading.html. I will no longer try to review every security book which hits the shelves! That was a pipe dream, even when I started reading these sorts of books in 1998. The books I add to my schedule either address a topic about which I need to know more, or offer original content by an interesting author. Thank you for visiting!

Richard Bejtlich

https://taosecurity.blogspot.com/2003/01/welcome-to-my-blog-mainnew-content.html

Commentary

This was my first blog post. I had very modest plans when I started, concentrating on promoting the book reviews I was writing on Amazon.com. I eventually became a top 500 book reviewer for that site. I used to read and review dozens of technical books per year (17 in 2000, 42 in 2001, 24 in 2002, 33 in 2003, 33 in 2004, 26 in 2005, 52 in 2006, 25 in 2007, 20 in 2008, 15 in 2009, 31 in 2010, 22 in 2011 -- 340 in total). In mid-2012 I decided that technical books no longer captured my interest, and I focused on books about history, strategy, policy, and martial arts. The archive.org repository of pages for www.bejtlich.net/reading.html shows what I used to read. The link is no longer active.

Sguil User Six

Tuesday, February 18, 2003

According to my friend Bamm Visscher, I just became user number six of Sguil, an interface for the Snort intrusion detection engine. It's in early alpha stages but it smokes everything else available. It's built BY an analyst FOR an analyst. I spent a chunk of the weekend writing this 4 MB installation guide pdf for it. The 13 MB sguil_complete_17_feb_03.tar archive I mention in the installation guide can be downloaded here, for now. There is also a Sourceforge site. Enjoy!

https://taosecurity.blogspot.com/2003/02/according-to-my-friend-bammvisscher-i.html

Commentary

I am very proud to have been associated with the Sguil project, even though my code contribution was one or two lines that I believe Bamm rejected anyway! I still use Sguil to this day, as it is the most information-dense way to review alerts generated by an intrusion detection engine like Suricata, and it provides right-click access to full content data via Wireshark. At this time I was working as an incident response consultant for Kevin Mandia at Foundstone, and I believe I may have used Sguil during some engagements where I had to build my own network security monitoring sensors.

Trying New Martial Arts School

Monday, April 28, 2003

I finally joined a new martial arts school in northern Virginia. It's been two years since I broke my wrist and stopped formal training, and about seven months since my last organized martial arts activity.

https://taosecurity.blogspot.com/2003/04/i-finally-joined-new-martialarts.html

Commentary

I was surprised to find this entry. At this point in the blog’s progression, I had not yet instituted the fairly strict rules I would later follow, namely keeping the blog on topic. I recommend this strategy for anyone trying to organize their thoughts in written form in a public-facing medium. To this day I have TaoSecurity Blog for cyber security, intelligence, and military history; Rejoining the Tao Blog for my martial arts journey; and Martial History Team for promoting sound evidence and sourced research on martial arts topics.

Five Years Ago Today...

Tuesday, September 23, 2003

Five years ago today I left the information warfare planning directorate at Air Intelligence Agency and joined the Air Force Computer Emergency Response Team at then-Kelly Air Force Base in San Antonio, Texas. Back then we were part of the Air Force Information Warfare Center, tasked with monitoring all of the intrusion detection systems deployed inside the border routers at Air Force installations. I was a new captain and had voluntarily attended some UNIX training after work hours while deployed to RAF Molesworth in late 1997.

Just yesterday I was asked how to get into the computer security field. Here's how I did it. I looked at the AFCERT's manning roster for the network security monitoring teams and put myself on the schedule. Wherever I saw an opening -- usually between 2 and 10 pm or 10 pm and 6 am -- I added my name. I sat next to people who seemed to understand the alerts they were analyzing and asked a lot of questions. Six months later I was in charge of the real-time NSM team, and a year later I was in charge of all NSM operations. I wrote my first white paper in late 1999 and spoke at my first SANS conference on 25 Mar 00.

Currently I'm writing Real Digital Forensics and The Tao of Network Security Monitoring, both to be published in 2004.

https://taosecurity.blogspot.com/2003/09/five-years-ago-today.html

Commentary

This was the first of several posts that look back on my time in the Air Force. Writing now in 2020, it’s stunning to remember a time when I had only five years of hands-on technical security experience. I notice that I also mentioned the publication process for my first two books, The Tao of Network Security Monitoring, published in 2004, and Real Digital Forensics, co-authored with Keith Jones and Curtis Rose, published in 2005.

The Tao of NSM Is Published!

Friday, July 16, 2004

My wife found a copy of my book left in our garage today by the UPS or FedEx delivery person! I'm very happy to see it in print. Four years ago Karen Gettman from Addison-Wesley approached me about writing a book. Initially I wanted to write "Intrusion Detection and Incident Response Illustrated," but I decided to wait until I felt I was ready. At Black Hat last year, I met my editor Jessica Goldstein from Addison-Wesley. I presented the proposal I had worked on all of the previous night. About a month later I signed a contract, and by March of this year submitted my draft of the text. Now, less than a year after that Black Hat meeting, I have a copy of my book in hand.

Thank you to everyone who assisted -- you're all in the preface! Some of you will be getting review copies soon. I expect to see the book available from online booksellers next week, and in stores before the end of the month. Please send feedback to blog at taosecurity dot com.

Update: I asked my publisher why Amazon.com isn't currently selling my book at a discount. She wrote: "Amazon is having a data feed problem, and that is why your book isn't discounted. Many new books on Amazon are showing for list price, which is incorrect. They are working with the vendor who is sending them the bad data and are trying to get it fixed." Expect to see the price drop at Amazon.com shortly.

https://taosecurity.blogspot.com/2004/07/tao-of-nsm-is-published-mywife-found.html

Commentary

The Tao remains my magnum opus, despite any attempt to create something better. It was the right book at the right time. I decided to write it in 2001 when Bamm and I were acting as technical leads and managers for a team of 12 analysts at Ball Aerospace & Technologies Corporation (BATC). I wrote a training course for them to take before serving as event analysts. I realized that there was no text that I could hand to a new analyst that taught them what I hoped they should know. I decided to investigate as many aspects of network security monitoring (NSM) as thoroughly as possible. When the book exceeded 800 pages, my publisher said that I needed to stop. That’s why I quickly published a sequel, Extrusion Detection. I’ve likened Tao to the Constitution and Extrusion to the Bill of Rights! I remain very proud of Tao to this day -- especially the appendix on NSM intellectual history. That’s a timeless historical section that is relevant forever, regardless of what the Amazon.com reviewers might think.

One of my favorite memories associated with the book involves the reaction of my co-workers. At the time I was working as a technical director at ManTech International Corporation, having joined that company after McAfee acquired Foundstone. At ManTech I worked with the offensive team and was also building a commercial NSM offering. Anton Chuvakin reviewed my book on Slashdot.org, which was a prominent technology site in the early 2000s. When Anton’s review appeared on the site, a crowd of my colleagues entered my office to congratulate me. Thanks, ladies and gentlemen, and thanks for the review, Anton!

TaoSecurity Visits the Pentagon

Tuesday, April 19, 2005

This morning I was pleased to speak at the Pentagon on behalf of the Network Security Services-Pentagon section of the US Army Information Technology Agency. (I would like to provide a URL, but there's no point linking to sites that return "403.6 Forbidden: IP address rejected" errors!) Doug Steelman invited me to discuss network security monitoring at their Pentagon Security Forum. Last month Erik Birkholz and Steve Andres from Special Ops Security spoke on assessments. Next month Kevin Mandia of Red Cliff Consulting will discuss incident response.

Doug and his colleague Mark Orlando were kind enough to take me on a tour of the building and share some of their approaches to detecting intrusions on the Pentagon's networks. While I will not outline specifics here, I will say I was impressed by the variety of network traffic the Pentagon collects. They are not a single-solution shop that can be beaten by evading one variety of intrusion detection system deployed at the perimeter. Rather, they gather alert, session, and statistical data and have the capability to collect some full content data. I will not name tools, but I was surprised by some of their choices. By this I mean they seemed genuinely interested in novel approaches to identifying and validating security events.

As far as the Pentagon network is concerned, they are literally an ISP in their own right. They have multiple Autonomous Systems (ASes) and they connect to the DISA backbone with 100 Mbps ATM links. After September 11th, 2001, they decided to reengineer their network to be more disaster-resilient, and they are now deploying an MPLS-based routing design to facilitate this goal.

I look forward to meeting and working with this team in the future, and I thank Doug and Mark for being great hosts today.

https://taosecurity.blogspot.com/2005/04/taosecurity-visits-pentagonthis.html

Commentary I had worked at the Pentagon before while on active duty, but during this trip I was providing NSM consulting services as an independent small business owner with TaoSecurity LLC. My business consisted of three pillars. The first was NSM consulting, where I would visit organizations and help them build or transform their NSM operations. The second was NSM services. Several organizations hired me to instrument their networks and monitor them using open source tools like Sguil. Bamm helped me with this when I needed a break! The third was training. I gave my first public security briefings in 1999, and I had already given multi-day training in 2002 and 2003 for SANS and Foundstone. By 2004 I was giving independent multi-day training around the world. My big break came in 2007 when I started teaching my TCP/IP Weapons School classes. It was a two-day class, and I taught two sessions back-to-back. I’d like to thank my friend from Foundstone, Steven Andres, for making that possible. He was the friendly gentleman running around the classroom helping students get their laptops and virtual machines working, at a time (the late 2000s and early 2010s) when such technology was shaky. I taught three iterations of that course (TWS, TWS 2, and TWS 3) over six years at Black Hat, probably reaching over 500 students. I taught it privately as well, and delivered other NSM content at SANS, FIRST, and elsewhere. To this day I meet people who say they took one of those classes.

Security Responsibilities Thursday, January 18, 2007 It's been several years since I had operational responsibility for a single organization's network security operations. As a consultant I find myself helping many different customers, but I maintain continuous monitoring operations for only a few. Sometimes I wonder what it would be like to step back into a serious security role at a single organization. Are any of you looking for someone with my background? If yes, please feel free to email taosecurity [at] gmail [dot] com. Thank you. https://taosecurity.blogspot.com/2007/01/security-responsibilities.html Commentary This post made a huge impact on my life. I had been working as an independent contractor for a year and a half. My wife was staying home with our two kids, both under the age of 3. I was spending every waking moment trying to keep my business running. I was also getting tired of running from place to place, never seeing if what I recommended as a consultant was implemented or making a difference. I decided to post this entry to see if any blog readers might want to talk with someone with my skill set.

Bejtlich Joining General Electric as Director of Incident Response Monday, June 11, 2007 Two years ago this month I left my corporate job to focus on being an independent consultant through TaoSecurity. Today I am pleased to announce a new professional development. Starting next month I will be joining General Electric as Director of Incident Response, based near Manassas, VA, working for GE's Chief Information Security Officer, Grady Summers at GE HQ in Fairfield, CT. My new boss reads my blog and contacted me after reading my Security Responsibilities post five months ago. He has created the new Director position as a single corporate focal point for incident response, threat assessment, and ediscovery, working with GE's six business units and corporate HQ security staff. Grady reports to GE's Chief Technology Officer, Greg Simpson, and works closely with GE's Chief Security Officer, Brig Gen (USAF, ret) Frank Taylor. I will be building a team and I am pleased to have already met my first team member, a forensic investigator. I am very excited about this new job. First, the scope of the challenge is enormous. GE is probably just bigger than the Air Force (my closest related employer), with 350,000 users. The company's revenues last year exceeded $160 billion and its market capitalization currently exceeds $380 billion. GE is number 6 on the 2007 Fortune 500. In brief, I don't think there's a way for me to get bored working to address GE's digital security concerns. Second, I look forward to building and working with a team that has a defined, long-term objective. With few exceptions my consulting work has been short-duration engagements which don't allow me to develop security processes or implement products for the long term. I have been impressed by all of the security staff from GE I've met thus far, and encouraged by articles like Does GE Have the Best IT? and GE's repeated rank as the number one most admired company in America.

Third, I hope this new role will improve my family's quality of life. As an independent consultant I was constantly juggling marketing, public relations, business development, client relationships, accounting, invoicing, and other non-tech tasks while trying to deliver quality services to customers and stay current on threats, vulnerabilities, and assets. Knowing my new "customer" on a continuous basis means I can focus my energy on my corporate work and not consider every waking moment a reason to accomplish another TaoSecurity task. While the financial rewards of working independently probably exceeded those of working for a corporation, the personal cost of maintaining that business cycle is very high. I am also confident my travel requirements will be less for GE than they were for TaoSecurity. What does this mean for TaoSecurity? Simply put, I will not be accepting any new consulting work or private teaching requests that cannot be accomplished by the end of this month. I am currently fulfilling existing obligations, some of which may extend beyond the end of the month. I am not joining GE because my independent work dried up; in fact, I've had to turn down four large engagements within the last week because they would have to occur after the end of this month. If you're wondering about public training classes, I recommend you review my TaoSecurity training schedule. You'll see only the following are left:

USENIX 2007: Network Security Monitoring with Open Source Tools and TCP/IP Weapons School Layers 2-3, 20-22 June 2007
GFIRST: Network Incident Response and Forensics, 25 June 2007
Black Hat USA: TCP/IP Weapons School, Black Hat Edition (layers 2-7 in two days), 28-29 and 30-31 July 2007
USENIX Security 2007: TCP/IP Weapons School Layers 4-7, 6-7 August 2007
Network Security Operations, Cincinnati: 21-23 August 2007
Network Security Operations, Chicago: 28-30 August 2007

ForenSec Canada 2007: TCP/IP Weapons School, ForenSec Edition (layers 2-7 in two days), 15-16 September 2007
Virginia Alliance for Secure Computing and Networking: one-day class, 19 October 2007

That's it. I do not have any plans to be teaching again, although I have not ruled out the occasional conference presentation. There will definitely not be any private classes, and I imagine the only public venue for a half-, full-, or two-day class would be USENIX or perhaps Black Hat Training next year, if either is interested. The bottom line is that if you want to take one of these classes before I no longer offer them, please sign up as soon as possible. What about writing here, or articles, or books? My boss supports my blogging and writing. I have never made a practice of posting "Look what I found at this client!" and he does not expect me to start doing so at GE. You can expect to read more about the sorts of techniques I'm using to address security concerns but never incident specifics or any information which would compromise my relationship with GE. The same goes for articles and books. I plan to continue writing the Snort Report and eventually write the new works listed on my books page. Finally, I should note that both of my grandfathers retired from GE, so I have some personal history with the company. I'd like to thank Grady Summers and everyone at GE that helped me join this great organization. https://taosecurity.blogspot.com/2007/06/bejtlich-joining-general-electric-as.html Commentary As you can see, someone was reading that last post. Grady Summers, chief information security officer for General Electric, emailed me after seeing my January 2007 blog entry, asking if I might want to take a security architect role. I asked if GE had a company incident response team. When he said no, I suggested that I build it. That started an amazing four-year journey that took GE-CIRT from 1 person to 44. 
Looking back on this post, it surprises me to read that I still had to deliver classes at eight events in order to fulfill my professional teaching obligations for the remainder of 2007!

Bejtlich Cited in Economist Thursday, December 04, 2008 I've been a subscriber of the Economist magazine since 1997. Although I have not been working to achieve this goal, I am happy to report that a personal ambition of mine has been reached today: I was cited in the 6 Dec 08 edition, in an article titled Cyberwarfare: Marching off to cyberwar. One way for governments to do this [to become resilient to cyber attack], says Richard Bejtlich, a former digital-security officer with the United States Air Force who now works at GE, an American conglomerate, might be to make greater use of open-source software, the underlying source code of which is available to anyone to inspect and improve. To those outside the field of computer security, and particularly to government types, the idea that such software can be more secure than code that is kept under lock and key can be difficult to accept. But from web-browsers to operating systems to encryption algorithms, the more people can scrutinise a piece of code, the more likely it is that its weak spots will be found and fixed. It may be that open-source defence is the best preparation for open-source attack. Thank you to Evgeny Morozov for including my comment and to the Economist editors for not cutting it. https://taosecurity.blogspot.com/2008/12/bejtlich-cited-in-economist.html Commentary I stopped subscribing to the Economist in 2017, when I felt their editorial policy degraded with their new editor. In 2008, however, I was quite the fan, so when a quote I made appeared in this issue, I was thrilled. 2008 was the first year my comments appeared in a large general-readership magazine or newspaper. That means it took me 10 years from my first technical hands-on role in the AFCERT to being mentioned in a big-time publication.

TaoSecurity Blog Wins Best Non-Technical Blog at RSA Friday, April 24, 2009 I noticed in Martin McKeay's post Security Bloggers Meetup 2009 that TaoSecurity Blog (this blog, despite where you might be reading the reposted content) won the Best Non-Technical Blog award at the RSA 2009 Security Bloggers Meetup. Thank you for the votes! I was not aware that the blog was nominated, nor did I mention the contest here. I appreciate the votes despite the posting slowdown while I was vacationing with my family and then teaching in Amsterdam. I have several posts planned for this weekend or soon thereafter! https://taosecurity.blogspot.com/2009/04/taosecurity-blog-wins-best-non.html Commentary I was never really interested in awards or monetization for the blog. I never added advertisements, and I did not want to be syndicated by other sites that were just aggregating content generated by others. I was pleased to win this award, but I remember being a little miffed that I had been nominated in the “non-technical blog” category. I guess because I wasn’t posting shell code to my blog, it wasn’t technical enough. It’s ironic that I was not able to accept my award in person, because I was teaching TCP/IP Weapons School 2.0 at Black Hat Europe 2009 in Amsterdam. It must have been a thoroughly non-technical class where we waxed poetic about Shakespeare's Sonnets!

Inside a Congressional Hearing on Digital Threats Tuesday, October 04, 2011 Today I was fortunate to attend a hearing of the US House Permanent Select Committee on Intelligence (HPSCI). I sat behind my boss, Mandiant CEO Kevin Mandia. I'd like to share a few thoughts on the experience. First, I was impressed by the attitudes of all those involved with HPSCI, from the staffers to the Representatives themselves. They were all courteous and wanted to hear the opinions of Kevin and the other two witnesses (Art Coviello from RSA and Michael Hayden from the Chertoff Group), whether before, during, or after the hearing. Second, I thought Reps Mike Rogers (R-MI, HPSCI Chairman) and C.A. Dutch Ruppersberger (D-MD, HPSCI Ranking Member) offered compelling opening statements. Rep Rogers squarely pointed the finger at our overseas adversaries. As reported by PCWorld in “U.S. Lawmakers Point to China as Cause of Cyberattacks,” Rep Rogers said: "I don't believe that there is a precedent in history for such a massive and sustained intelligence effort by a government to blatantly steal commercial data and intellectual property... China's economic espionage has reached an intolerable level and I believe that the United States and our allies in Europe and Asia have an obligation to confront Beijing and demand that they put a stop to this piracy." You can watch all of Rep Rogers' statements on YouTube as Rep. Mike Rogers criticizes Chinese economic cyber-espionage (currently 21 views -- let's increase that!) General Hayden reinforced Rep Rogers' sentiment with this quote: "As a professional intelligence officer, I step back in awe of the breadth, the depth, the sophistication, the persistence of the Chinese espionage effort against the United States of America." Third, I was very pleased that this hearing was conducted in an open forum, and not behind closed doors. While I haven't found the whole hearing online or on TV yet (aside from Rep Rogers' statement and that of Rep Myrick (R-NC)), I encourage as much discussion as possible about this issue. One of General Hayden's points was that we are not having a debate about how to address digital threats because no one agrees what the facts are. If you work counter-intrusion operations every day, or participate in the intelligence community, you know what's happening. Outside that world, you likely think "APT" and the like are false concepts. We can really only build a national approach to countering the threat if enough people know what is happening. As more information becomes available I will likely publish it via my @taosecurity Twitter account. https://taosecurity.blogspot.com/2011/10/inside-congressional-hearing-on-digital.html Commentary This is the first appearance in the book thus far of the China issue, then known as the advanced persistent threat (APT). This book devotes a whole chapter to the APT and China, so I will not address that aspect of the topic here. Rather, I wanted to highlight my first participation in the legislative process. I later testified 13 times to Congressional committees on my own, but this was my first time supporting such a hearing.

Become a Hunter Monday, December 05, 2011 Earlier this year SearchSecurity and TechTarget published a July-August 2011 issue with a focus on targeted threats. Prior to joining Mandiant as CSO I wrote an article for that issue called "Become a Hunter": “IT’S NATURAL FOR members of a technology-centric industry to see technology as the solution to security problems. In a field dominated by engineers, one can often perceive engineering methods as the answer to threats that try to steal, manipulate, or degrade information resources. Unfortunately, threats do not behave like forces of nature. No equation can govern a threat’s behavior, and threats routinely innovate in order to evade and disrupt defensive measures. Security and IT managers are slowly realizing that technology-centric defense is too easily defeated by threats of all types. Some modern defensive tools and techniques are effective against a subset of threats, but security pros in the trenches consider the ‘self-defending network’ concept to be marketing at best and counter-productive at worst. If technology and engineering aren’t the answer to security’s woes, then what is?” Download and read my article starting on page 19 for the answer! https://taosecurity.blogspot.com/2011/12/become-hunter.html Commentary Do you know the history of the term “threat hunting” in the cyber security community? This is one of the first, if not the first, unclassified written explanations of it. If you can find another, I’d like to see it and add it to the record. The incident handlers at General Electric -- Bamm Visscher, David Bianco, Ken Bradley, Tyler Hudak, Tim Crothers, and Aaron Wade -- developed the threat hunting mission for us in 2009-2010. I used the term that I had heard in Air Force circles, specifically the hunter-killer teams I heard were working north of me. See the post “The Origin of Threat Hunting” elsewhere in these volumes for more!

TaoSecurity Blog Wins Most Educational Security Blog Saturday, March 03, 2012 I'm pleased to announce that TaoSecurity Blog won Most Educational Security Blog at the 2012 Social Security Bloggers Awards. I attended the event held near RSA and spent time talking with a lot of security bloggers and security people in general. I'd like to thank the sponsors of the event, depicted on the event T-shirt. Props to whoever designed the shirt -- it's one of my favorites. The award itself looks great, and the gift certificate to the Apple store will definitely help with an iPad 3, as intended! Long-time readers may remember that I won Best Non-Technical Blog at the same event in 2009. Winning this award has given me a little more motivation to blog this year. I admit that communicating via Twitter as @taosecurity is much more seductive due to the presence of followers and the immediate feedback! Speaking of Twitter, SC Magazine named @taosecurity as one of their 5 to follow, which I appreciate. And speaking of SC Magazine, they gave my company Mandiant their best security company award. https://taosecurity.blogspot.com/2012/03/taosecurity-blog-wins-most-educational.html Commentary A few aspects of this post caught my attention. First, I appreciated winning the “most educational” category. If I can help anyone improve their understanding of security via the blog, so much the better. Second, I still have and use the 3rd generation iPad mentioned in the post. It runs a ridiculously old version of iOS, and none of the apps can be updated, but it’s fine for watching YouTube videos. Third, I noticed that I was making excuses for not posting due to the “seduction” of Twitter. I had been using Twitter for several years at that point, and it was eroding my blogging practice. Last, I believe this was the last time for several years that I attended RSA. I had spoken there on “Traffic-Centric Incident Detection and Response” in 2006, and had sat on a few panels in 2011 and 2012. After that I stayed away for many years.

Bejtlich Books Explained Thursday, February 09, 2017 A reader asked me to explain the differences between two of my books. I decided to write a public response. If you visit the TaoSecurity Books page, you will see two different types of books. The first type involves books which list me as author or co-author. The second involves books to which I have contributed a chapter, section, or foreword. This post will only discuss books which list me as author or co-author.

In July 2004 I published The Tao of Network Security Monitoring: Beyond Intrusion Detection. This book was the result of everything I had learned since 1997-98 regarding detecting and responding to intruders, primarily using network-centric means. It is the most complete examination of NSM philosophy available. I am particularly happy with the NSM history appendix. It cites and summarizes influential computer security papers over the four-decade history of NSM to that point. The main problem with the Tao is that certain details of specific software versions are very outdated. Established software like Tcpdump, Argus, and Sguil function much the same way, and the core NSM data types remain timeless. You would not be able to use the Bro chapter with modern Bro versions, for example. Still, I recommend anyone serious about NSM read the Tao. The introduction describes the Tao using these words:

Part I offers an introduction to Network Security Monitoring, an operational framework for the collection, analysis, and escalation of indications and warnings (I&W) to detect and respond to intrusions. Part I begins with an analysis of the terms and theory held by NSM practitioners. The first chapter discusses the security process and defines words like security, risk, and threat. It also makes assumptions about the intruder and his prey that set the stage for NSM operations. The second chapter addresses NSM directly, explaining why NSM is not implemented by modern NIDS alone. The third chapter focuses on deployment considerations, such as how to access traffic using hubs, taps, SPAN ports, or inline devices.

Part II begins an exploration of the NSM “product, process, people” triad. Chapter 4 is a case study called the “reference intrusion model.” This is an incident explained from the point of view of an omniscient observer. During this intrusion, the victim collected full content data in two locations. We will use those two trace files while explaining the tools discussed in Part II. Following the reference intrusion model, I devote chapters to each of the four types of data which must be collected to perform network security monitoring – full content, session, statistical, and alert data. Each chapter describes open source tools tested on the FreeBSD operating system and available on other UNIX derivatives. Part II also includes a look at tools to manipulate and modify traffic. Featured in Part II are little-discussed NIDS like Bro and Prelude, and the first true open source NSM suite, Sguil.

Part III continues the NSM triad by discussing processes. If analysts don’t know how to handle events, they’re likely to ignore them. I provide best practices in one chapter, and follow with a second chapter explicitly for technical managers. That material explains how to conduct emergency NSM in an incident response scenario, how to evaluate monitoring vendors, and how to deploy a NSM architecture.

Part IV is intended for analysts and their supervisors. Entry level and intermediate analysts frequently wonder how to move to the next level of their profession. I offer some guidance for the five topics with which a security professional should be proficient: weapons and tactics, telecommunications, system administration, scripting and programming, and management and policy.
The next three chapters offer case studies, showing analysts how to apply NSM principles to intrusions and related scenarios.

Part V is the offensive counterpart to the defensive aspects of Parts II, III, and IV. I discuss how to attack products, processes, and people. The first chapter examines tools to generate arbitrary packets, manipulate traffic, conduct reconnaissance, and exploit flaws in Cisco, Solaris, and Microsoft targets. In a second chapter I rely on my experience performing detection and response to show how intruders attack the mindset and procedures upon which analysts rely.

An epilogue on the future of NSM follows Part V. The appendices feature several TCP/IP protocol header charts and explanations. I also wrote an intellectual history of network security, with abstracts of some of the most important papers written during the last twenty-five years. Please take the time to at least skim this appendix. You'll see that many of the “revolutionary ideas” heralded in the press were in some cases proposed decades ago.

The Tao lists at 832 pages. I planned to write 10 more chapters, but my publisher and I realized that we needed to get the Tao out the door. ("Real artists ship.") I wanted to address ways to watch traffic leaving the enterprise in order to identify intruders, rather than concentrating on inbound traffic, as was popular in the 1990s and 2000s. Therefore, I wrote Extrusion Detection: Security Monitoring for Internal Intrusions. I've called the Tao "the Constitution" and Extrusion "the Bill of Rights." These two books were written in 2004-2005, so they are tightly coupled in terms of language and methodology. Because Extrusion is tied more closely with data types, and less with specific software, I think it has aged better in this respect. The introduction describes Extrusion using these words:

Part I mixes theory with architectural considerations. Chapter 1 is a recap of the major theories, tools, and techniques from The Tao. It is important for readers to understand that NSM has a specific technical meaning and that NSM is not the same process as intrusion detection. Chapter 2 describes the architectural requirements for designing a network best suited to control, detect, and respond to intrusions. Because this chapter is not written with specific tools in mind, security architects can implement their desired solutions regardless of the remainder of the book.
Chapter 3 explains the theory of extrusion detection and sets the stage for the remainder of the book. Chapter 4 describes how to gain visibility to internal traffic. Part I concludes with Chapter 5, original material by Ken Meyers explaining how internal network design can enhance the control and detection of internal threats.

Part II is aimed at security analysts and operators; it is traffic-oriented and requires basic understanding of TCP/IP and packet analysis. Chapter 6 offers a method of dissecting session and full content data to unearth unauthorized activity. Chapter 7 offers guidance on responding to intrusions, from a network-centric perspective. Chapter 8 concludes Part II by demonstrating principles of network forensics.

Part III collects case studies of interest to all types of security professionals. Chapter 9 applies the lessons of Chapter 6 and explains how an internal bot net was discovered using Traffic Threat Assessment. Chapter 10 features analysis of IRC bot nets, contributed by LURHQ analyst Michael Heiser. An epilogue points to future developments. The first appendix, Appendix A, describes how to install Argus and NetFlow collection tools to capture session data. Appendix B explains how to install a minimal Snort deployment in an emergency. Appendix C, by Tenable Network Security founder Ron Gula, examines the variety of host and vulnerability enumeration techniques available in commercial and open source tools. The book concludes with Appendix D, where Red Cliff Consulting expert Rohyt Belani offers guidance on internal host enumeration using open source tools.

At the same time I was writing Tao and Extrusion, I was collaborating with my friends and colleagues Keith Jones and Curtis Rose on a third book, Real Digital Forensics: Computer Security and Incident Response. This was a ground-breaking effort, published in October 2005. What made this book so interesting is that Keith, Curtis, and I created workstations running live software, compromised each one, and then provided forensic evidence for readers on a companion DVD. This had never been done in book form, and after surviving the process we understood why! The legal issues alone were enough to almost make us abandon the effort.
Microsoft did not want us to "distribute" a forensic image of a Windows system, so we had to zero out key Windows binaries to satisfy their lawyers.

The primary weakness of the book in 2017 is that operating systems have evolved, and many more forensics books have been written. It continues to be an interesting exercise to examine the forensic practices advocated by the book to see how they apply to more modern situations. This review of the book includes a summary of the contents:

• Live incident response (collecting and analyzing volatile and nonvolatile data; 72 pp.)
• Collecting and analyzing network-based data (live network surveillance and data analysis; 87 pp.)
• Forensic duplication of various devices using commercial and open source tools (43 pp.)
• Basic media analysis (deleted data recovery, metadata, hash analysis, “carving”/signature analysis, keyword searching, web browsing history, email, and registry analyses; 96 pp.)
• Unknown tool/binary analysis (180 pp.)
• Creating the “ultimate response CD” (response toolkit creation; 31 pp.)
• Mobile device and removable media forensics (79 pp.)
• On-line-based forensics (tracing emails and domain name ownership; 30 pp.)
• Introduction to Perl scripting (12 pp.)

After those three titles, I was done with writing for a while. However, in 2012 I taught a class for Black Hat in Abu Dhabi. I realized many of the students lacked the fundamental understanding of how networks operated and how network security monitoring could help them detect and respond to intrusions. I decided to write a book that would explain NSM from the ground up. While I assumed the reader would have familiarity with computing and some security concepts, I did not try to write the book for existing security experts. The result was The Practice of Network Security Monitoring: Understanding Incident Detection and Response. If you are new to NSM, this is the first book you should buy and read. It is a very popular title and it distills my philosophy and practice into the most digestible form, in 376 pages.

The main drawback of the book is the integration of Security Onion coverage. SO is a wonderful open source suite, partly because it is kept so current. That makes it difficult for a print book to track changes in the software installation and configuration options. While you can still use PNSM to install and use SO, you are better off relying on Doug Burks' excellent online documentation. PNSM is an awesome resource for learning how to use SO and other tools to detect and respond to intrusions. I am particularly pleased with chapter 9, on NSM operations. It is a joke that I often tell people to "read chapter 9" when anyone asks me about CIRTs. The introduction describes PNSM using these words:

Part I, “Getting Started,” introduces NSM and how to think about sensor placement.
• Chapter 1, “Network Security Monitoring Rationale,” explains why NSM matters, to help you gain the support needed to deploy NSM in your environment.
• Chapter 2, “Collecting Network Traffic: Access, Storage, and Management,” addresses the challenges and solutions surrounding physical access to network traffic.

Part II, “Security Onion Deployment,” focuses on installing SO on hardware and configuring SO effectively.
• Chapter 3, “Stand-alone NSM Deployment and Installation,” introduces SO and explains how to install the software on spare hardware to gain initial NSM capability at low or no cost.
• Chapter 4, “Distributed Deployment,” extends Chapter 3 to describe how to install a dispersed SO system.
• Chapter 5, “SO Platform Housekeeping,” discusses maintenance activities for keeping your SO installation running smoothly.

Part III, “Tools,” describes key software shipped with SO and how to use these applications.

• Chapter 6, “Command Line Packet Analysis Tools,” explains the key features of Tcpdump, Tshark, Dumpcap, and Argus in SO.
• Chapter 7, “Graphical Packet Analysis Tools,” adds GUI-based software to the mix, describing Wireshark, Xplico, and NetworkMiner.
• Chapter 8, “NSM Consoles,” shows how NSM suites, like Sguil, Squert, Snorby, and ELSA, enable detection and response workflows.

Part IV, “NSM in Action,” discusses how to use NSM processes and data to detect and respond to intrusions.
• Chapter 9, “NSM Operations,” shares my experience building and leading a global computer incident response team (CIRT).
• Chapter 10, “Server-side Compromise,” is the first NSM case study, wherein you’ll learn how to apply NSM principles to identify and validate the compromise of an Internet-facing application.
• Chapter 11, “Client-side Compromise,” is the second NSM case study, offering an example of a user being victimized by a client-side attack.
• Chapter 12, “Extending SO,” concludes the main text with coverage of tools and techniques to expand SO’s capabilities.
• Chapter 13, “Proxies and Checksums,” concludes the main text by addressing two challenges to conducting NSM.

The Conclusion offers a few thoughts on the future of NSM, especially with respect to cloud environments. The Appendix, “SO Scripts and Configuration,” includes information from SO developer Doug Burks on core SO configuration files and control scripts.

I hope this post helps explain the different books I've written, as well as their applicability to modern security scenarios. https://taosecurity.blogspot.com/2017/02/bejtlich-books-explained.html Commentary This post is self-explanatory, but I wanted to include it because it explains why I wrote each of my four main cyber security books, prior to this one, of course.
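The four NSM data types that recur throughout these book descriptions (full content, session, statistical, and alert data) can be illustrated with a few lines of code. This is my own toy sketch, not anything from the books: the hard-coded packet records and the single detection rule are hypothetical stand-ins for what tools like Tcpdump (full content), Argus (session data), and an IDS (alert data) would actually produce.

```python
# Toy illustration of the four NSM data types.
from collections import Counter, defaultdict

# Full content data: every packet. Here each packet is reduced to a
# 5-tuple plus size; a real sensor would store entire packets.
packets = [
    ("10.0.0.5", 33421, "192.0.2.80", 80, "tcp", 1500),
    ("10.0.0.5", 33421, "192.0.2.80", 80, "tcp", 400),
    ("10.0.0.9", 55110, "198.51.100.7", 6667, "tcp", 90),
]

# Session data: one record per conversation, with packet and byte totals.
sessions = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, sport, dst, dport, proto, size in packets:
    key = (src, sport, dst, dport, proto)
    sessions[key]["packets"] += 1
    sessions[key]["bytes"] += size

# Statistical data: aggregate views of traffic, e.g. bytes per destination port.
bytes_per_dport = Counter()
for (_, _, _, dport, _), s in sessions.items():
    bytes_per_dport[dport] += s["bytes"]

# Alert data: judgments produced by a detection rule. Flagging IRC's
# default port is an arbitrary example, not a real signature.
alerts = [key for key in sessions if key[3] == 6667]

print(len(sessions), bytes_per_dport[80], alerts)
```

The point of the sketch is that session, statistical, and alert data are all derivable views of the underlying traffic, which is why the books treat full content collection as the foundation the other types build upon.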

Latest Book Inducted into Cybersecurity Canon Monday, May 08, 2017 Thursday evening Mrs B and I were pleased to attend an awards seminar for the Cybersecurity Canon. This is a project sponsored by Palo Alto Networks and led by Rick Howard. The goal is to "identify a list of must-read books for all cybersecurity practitioners." Rick reviewed my fourth book The Practice of Network Security Monitoring in 2014 and someone nominated it for consideration in 2016. I was unaware earlier this year that my book was part of a 32-title "March Madness" style competition. My book won all five rounds, resulting in its inclusion in the 2017 inductee list! Thank you to all those who voted for my book. Ben Rothke interviewed me prior to the induction ceremony. We discussed some current trends in security and some lessons from the book. I hope to see that interview published by Palo Alto Networks and/or the Cybersecurity Canon project in the near future. In my acceptance speech I explained how I wrote the book because I had not yet dedicated a book to my youngest daughter, since she was born after my third book was published. A teaching moment at Black Hat Abu Dhabi in December 2012 inspired me to write the book. While teaching network security monitoring, one of the students asked "but where do I install the .exe on the server?" I realized this student had no idea of physical access to a wire, or using a system to collect and store network traffic, or any of the other fundamental concepts inherent to NSM. He thought NSM was another magical software package to install on his domain controller. Thanks to the interpretation assistance of a local Arabic speaker, I was

able to get through to him. However, the experience convinced me that I needed to write a new book that built NSM from the ground up, hence the selection of topics and the order in which I presented them. While my book has not (yet?) been translated into Arabic, there are two Chinese language editions, a Korean edition, and a Polish edition! I also know of several SOCs that provide a copy of the book to all incoming analysts. The book is also a text in several college courses. I believe the book remains relevant for anyone who wants to learn the NSM methodology to detect and respond to intrusions. While network traffic is the example data source used in the book, the NSM methodology is data source agnostic. In 2002 Bamm Visscher and I defined NSM as "the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions." This definition makes no reference to network traffic. It is the collection-analysis-escalation framework that matters. You could perform NSM using log files, or host-centric data, or whatever else you use for indications and warnings. I have no plans for another cybersecurity book. I am currently editing a book about combat mindset written by the head instructor of my Krav Maga style and his colleague. Palo Alto hosted a book signing and offered free books for attendees. I got a chance to speak with Steven Levy, whose book Hackers was also inducted. I sat next to him during the book signing, as shown in the picture at right. Thank you to Palo Alto Networks, Rick Howard, Ben Rothke, and my family for making inclusion in the Cybersecurity Canon possible. The awards dinner was a top-notch event. Mrs B and I enjoyed meeting a variety of people, including students in local cybersecurity degree programs. I closed my acceptance speech with the following from the end of the Old Testament, at the very end of 2nd Maccabees. It captures my goal when

writing books: "So I too will here end my story. If it is well told and to the point, that is what I myself desired; if it is poorly done and mediocre, that was the best I could do." If you'd like a copy of The Practice of Network Security Monitoring the best deal is to buy print and electronic editions from the publisher's Web site. Use code NSM101 to save 30%. I like having the print version for easy review, and I carry the digital copy on my tablet and phone. Thank you to everyone who voted and who also bought a copy of my book! Update: I forgot to thank Doug Burks, who created Security Onion, the software used to demonstrate NSM in the book. Doug also contributed the appendix explaining certain SO commands. Thank you Doug! Also thank you to Bill Pollack and his team at No Starch Press, who edited and published the book! https://taosecurity.blogspot.com/2017/05/latest-book-inducted-into-cybersecurity.html Commentary Many thanks to Rick Howard and Ben Rothke for including me in this project. My favorite part of this post is the Bible quote, which I included in the afterword.
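The post above stresses that NSM's collection-analysis-escalation framework is data source agnostic. As a purely illustrative sketch of that idea (every name, record, and rule below is invented for this book, not drawn from any NSM tool), the same pipeline can process network, log, or host-centric data:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

# Hypothetical sketch of "collection, analysis, and escalation of
# indications and warnings." NSM prescribes the workflow, not this code.

@dataclass
class Record:
    source: str   # "network", "log", "endpoint" -- any data source
    summary: str

def collect(sources: Iterable[Iterable[Record]]) -> List[Record]:
    """Gather indications and warnings from any number of sources."""
    return [r for src in sources for r in src]

def analyze(records: List[Record],
            rules: List[Callable[[Record], bool]]) -> List[Record]:
    """Keep records that an analyst (or rule) judges suspicious."""
    return [r for r in records if any(rule(r) for rule in rules)]

def escalate(findings: List[Record]) -> None:
    """Hand validated findings to responders."""
    for f in findings:
        print(f"ESCALATE [{f.source}] {f.summary}")

# The pipeline does not care whether records came from traffic or logs.
net = [Record("network", "outbound connection to known C2 host")]
logs = [Record("log", "routine cron job completed")]
suspicious = lambda r: "C2" in r.summary
escalate(analyze(collect([net, logs]), [suspicious]))
```

The point of the sketch is that only `Record` construction touches a specific data source; collection, analysis, and escalation work unchanged on any of them.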

Twenty Years of Network Security Monitoring: From the AFCERT to Corelight Tuesday, September 11, 2018 I am really fired up to join Corelight. I’ve had to keep my involvement with the team a secret since officially starting on July 20th. Why was I so excited about this company? Let me step backwards to help explain my present situation, and forecast the future. Twenty years ago this month I joined the Air Force Computer Emergency Response Team (AFCERT) at then-Kelly Air Force Base, located in hot but lovely San Antonio, Texas. I was a brand new captain who thought he knew about computers and hacking based on experiences from my teenage years and more recent information operations and traditional intelligence work within the Air Intelligence Agency. I was desperate to join any part of the then-five-year-old Air Force Information Warfare Center (AFIWC) because I sensed it was the most exciting unit on “Security Hill.” I had misjudged my presumed level of “hacking” knowledge, but I was not mistaken about the exciting life of an AFCERT intrusion detector! I quickly learned the tenets of network security monitoring, enabled by the custom software watching and logging network traffic at every Air Force base. I soon heard there were three organizations that intruders knew to be wary of in the late 1990s: the Fort, i.e. the National Security Agency; the Air Force, thanks to our Automated Security Incident Measurement (ASIM) operation; and the University of California, Berkeley, because of a professor named Vern Paxson and his Bro network security monitoring software. When I wrote my first book in 2003-2004, The Tao of Network Security Monitoring, I enlisted the help of Christopher Jay Manders to write about Bro 0.8. Bro had the reputation of being very powerful but difficult to stand up. In 2007 I decided to try installing Bro myself, thanks to the introduction of the “brolite” scripts shipped with Bro 1.2.1. That made Bro easier to use, but I

didn’t do much analysis with it until I attended the 2009 Bro hands-on workshop. There I met Vern, Robin Sommer, Seth Hall, Christian Kreibich, and other Bro users and developers. I was lost for most of the class, saved only by my knowledge of standard Unix command line tools like sed, awk, and grep! I was able to integrate Bro traffic analysis and logs into my TCP/IP Weapons School 2.0 class, and subsequent versions, which I taught mainly to Black Hat students. By the time I wrote my last book, The Practice of Network Security Monitoring, in 2013, I was heavily relying on Bro logs to demonstrate many sorts of network activity, thanks to the high-fidelity nature of Bro data. In July of this year, Seth Hall emailed to ask if I might be interested in keynoting the upcoming Bro users conference in Washington, D.C., on October 10-12. I was in a bad mood due to being unhappy with the job I had at that time, and I told him I was useless as a keynote speaker. I followed up with another message shortly after, explained my depressed mindset, and asked how he liked working at Corelight. That led to interviews with the Corelight team and a job offer. The opportunity to work with people who really understood the need for network security monitoring, and were writing the world’s most powerful software to generate NSM data, was so appealing! Now that I’m on the team, I can share how I view Corelight’s contribution to the security challenges we face. For me, Corelight solves the problems I encountered all those years ago when I first looked at Bro. The Corelight embodiment of Bro is ready to go when you deploy it. It’s developed and maintained by the people who write the code. Furthermore, Bro is front and center, not buried behind someone else’s logo. Why buy this amazing capability from another company when you can work with those who actually conceptualize, develop, and publish the code?
It’s also not just Bro, but it’s Bro at ridiculous speeds, ingesting and making sense of complex network traffic. We regularly encounter open source Bro users who spend weeks or months struggling to get their open source deployments to run at the speeds they need, typically in the tens or hundreds of Gbps. Corelight’s offering is optimized at the hardware level to deliver the highest performance, and our team works with customers who

want to push Bro to even greater levels. Finally, working at Corelight gives me the chance to take NSM in many exciting new directions. For years we NSM practitioners have worried about challenges to network-centric approaches, such as encryption, cloud environments, and alert fatigue. At Corelight we are working on answers for all of these, beyond the usual approaches — SSL termination, cloud gateways, and SIEM/SOAR solutions. We will have more to say about this in the future, I’m happy to say! What challenges do you hope Corelight can solve? Leave a comment or let me know via Twitter to @corelight_inc or @taosecurity. https://taosecurity.blogspot.com/2018/09/twenty-years-of-network-security.html Commentary This last post brings me to my current employer, as of the time of writing. Corelight is a great company and I am excited to see the offerings we provide to customers, especially the new features arriving this year.

Conclusion When I collected these posts, I had a theme of “milestones.” Having read them all and provided commentary, I can now see that they offer various glimpses into my thinking at each point, often from the perspective of changes in my employment status. Another category of events involved reaching beyond the security echo chamber into the larger world, via media like legislative hearings or popular print magazines. There’s a decent amount of history in this chapter as well, but for more feel free to read the separate “history” chapter in this volume.

Chapter 2. Philosophy and Strategy

Introduction This chapter begins the heavy lifting in the blog. I decided to include posts on my philosophy and strategy for security early in this volume, as it will color everything else that follows. My thinking in some ways has stayed constant over the years, and in other ways it has evolved.

Prevention Always Fails Monday, May 26, 2003 Network Magazine's May [2003] issue featured the article Emerging Technology: Detection vs. Prevention - Evolution or Revolution? This is another case where a policy enforcement mechanism is confused with a policy audit and verification system. Policy enforcement mechanisms include firewalls, routers with access control lists, and so-called "intrusion prevention systems," which are simply layer 7 firewalls. Policy audit and verification systems include some traditional intrusion detection products, along with traffic collection systems like Argus and Sandstorm's NetIntercept. Is Marty Roesch the only high-profile person who understands this? From the article: "Gartner sees IPS as the next generation of IDS, when they're likely the next generation of firewall," says Marty Roesch, founder of Sourcefire, an IDS vendor. Roesch is also the creator of Snort, an open-source, rules-based language for writing detection signatures. Roesch insists that IDSs and IPSs are separate technologies with mutually exclusive functions. "IPS is access control, and IDS is network monitoring. IPS is policy enforcement, and IDS is audit. It's not the IDS's job to secure your network. Its job is to tell you how insecure it is." But Roesch's distinction may not resonate in the wider security market. "Joe Average doesn't want to monitor traffic and comb through data and make changes in rules and policies based on detected attacks," says Jeff Wilson, executive director of Infonetics Research (www.infonetics.com). "They want to stop attacks." Fine -- prevention is always preferable to detection. But prevention always fails, at some point. How do you determine the scope of a compromise when your IPS fails to detect and prevent an attack? You better

be able to fall back on your audit capabilities, which log what they see and make no value judgements. https://taosecurity.blogspot.com/2003/05/network-magazines-may-issue-featured.html Commentary Marty Roesch did a wonderful job explaining the difference between an active, preventative device like an IPS or firewall, and a passive, audit device like a network security monitor. In this post I used the phrase “prevention always fails,” but I later amended it to read “prevention eventually fails.”

What is the Ultimate Security Solution? Monday, August 30, 2004 I received an email asking certain questions about digital security. Since the author said I could post my reply in my blog, here is an excerpt from his email: "I have read of many ways that hackers obtain access. But, I am uncertain what is comprehensive protection. Clearly, there are firewalls, anti-virus, anti-spyware, IDS, IPS, and many other three letter acronym tools available. I have read of your use/support for Sguil. Do you feel that is the ultimate solution? There are other tools out there like eEye Blink, Pivx Qwikfix, and Securecore type products. I like them, but am uncertain if they do an adequate job at providing security. And I really don't know which would be considered the best of these. So, I appeal to you for your insight. Would really appreciate any feedback - here or on your blog." This is an interesting question, because at least one reader of my recent Focus-IDS post thought I was a "detection-only" advocate. Since I believe protection eventually fails (I do believe that, and it's true), did I not also believe protection was worthless? Chapter 1 of my book lays out my philosophy on security, and Chapter 2 explains how I believe Network Security Monitoring meets the needs of my security philosophy. Anton Chuvakin's recent Slashdot review summarizes some of my thoughts. I recommend anyone interested in knowing how I define terms like security, risk, vulnerability, threat, and so forth, thumb through the first two chapters of my book in your local Borders or Barnes and Noble store. Regarding "ultimate solutions," I don't believe there is such a concept. I agree with Dr. Mitch Kabay that "security is a process, not an end state," and

with Bruce Schneier, who says "security is a process, not a product." On p. 4 of my book I define security as "the process of maintaining an acceptable level of perceived risk." No organization can be considered "secure" for any time beyond the last verification of adherence to its security policy. How does one best adhere to one's security policy? I believe the answer lies in following the security process, which consists of assessment, protection (prevention), detection, and response. Chapter 11 of my book presents best practices for each as they relate to implementing NSM. This means none of the products you mentioned (yes, even Sguil) can provide ultimate security. Even all of the best of breed products in the world deployed simultaneously cannot perfectly secure an organization. Focus on products ignores people and processes. All three elements must be brought to bear on the security problem. I clearly believe that network awareness is one of the keys to security. "Situational awareness" was drilled into my brain as a cadet at the US Air Force Academy, and for good reason. When one is ignorant of one's surroundings, it is impossible to discern the defensive landscape as well as any threats. I advocate NSM as a means to get real threat intelligence. I avoid taking a vulnerability-focused approach to security where possible. Remember that one of the best ways to prevent intrusions is to help put criminals behind bars by collecting evidence and supporting the prosecution of offenders. The only way to ensure a specific Internet-based threat never bothers your organization is to separate him from his keyboard! I recommend you and others define your requirements before speaking to any vendor or researching any products. Decide what you believe is lacking in your security posture, and determine what combination of products, people, and processes could best meet your needs. 
Hire a professional security consultant to perform an assessment if you feel you lack the necessary expertise. Avoid consultants who run Nessus and drop a vulnerability report on your desk. Consult those who can offer solutions to problems or who can supervise the implementation of solutions by third parties. For your personal education you might find reading one or more of my recommended books helpful.

https://taosecurity.blogspot.com/2004/08/what-is-ultimate-security-solution-i.html Commentary In this post I used the phrase “protection eventually fails,” which is disappointing. Get your phrases straight! I am pleased to see that I did not believe there was any technical “ultimate” solution for security, although in later years I remember stating that making the threat actor an ally was one of the so-called “ultimate” solutions. This is how we changed the situation between the US and our former World War II adversaries Japan and Germany, for example.

Thoughts on Digital Crime Friday, September 24, 2004 Last week I spoke at and attended the High Technology Crime Investigation Association International Conference and Expo 2004. The keynote speaker was US Attorney General John Ashcroft. Although I spent time furiously copying notes on his speech, the text is online. Not printed in that text was the AG's repeated theme: the US Department of Justice and Federal Bureau of Investigation are committed to "protecting lives and liberty." I thought this was a curious stance given the recent efforts to scale back the Patriot Act. The AG mentioned that "protect[ing] the United States against cyber-based attacks and high-technology crimes" is the number 3 FBI priority. I believe that if you are a low- to mid-skilled intruder physically located in the United States, you will eventually be caught. The days when hardly anyone cared about prosecuting digital crime are ending. The FBI has 13 Computer Hacking and Intellectual Property (CHIPS) units with plans to open more. The Computer Crime and Intellectual Property Section (CCIPS) is available to US Attorneys across the country. The Secret Service operates 15 Electronic Crimes Task Forces. There are 5 Regional Computer Forensic Laboratories operating now with 8 planned to open in the coming years. The Internet Fraud Complaint Center (IFCC) is taking reports from victims of cyber crime and the National White Collar Crime Center supports law enforcement efforts. All of this adds up to a lot of federal, state, and local police working to bust bad guys. https://taosecurity.blogspot.com/2004/09/thoughts-on-digital-crime-last-week-i.html Commentary

Here I wrote “I believe that if you are a low- to mid-skilled intruder physically located in the United States, you will eventually be caught.” I believed it then and I still believe it today. That is what I tell anyone I meet who even hints at conducting intrusion activity in the US.

Further Musings on Digital Crime Saturday, September 25, 2004 Adam Shostack posted a response to my Thoughts on Digital Crime blog entry. Essentially he questions the "bandwidth" of the law enforcement organizations I listed, i.e., their ability to handle cases. The FBI CART Web page says "in 1999 the Unit conducted 2,400 examinations of computer evidence." At HTCIA I heard Mr. Kosiba state that thus far, in 2004, CART has worked 2,500 cases, which may involve more than one examination per case. The 50+ CART examiners and support personnel and 250 field examiners have processed 665 TB of data so far this year! The CART alone spends $32,000 per examiner on equipment when they are hired, and another $12,500 per year to upgrade each examiner's equipment. This is a sign that the DoJ is pouring money into combating cyber crime. Of course local and state police do not have the same resources, but especially at the state level we are seeing improvements. If more resources are being plowed into cybercrime, what is the likelihood that law enforcement will decline from prosecuting juveniles? I believe being a teenager isn't a viable way to escape prosecution either. During HTCIA I attended a talk by Rick Aldrich, former AFOSI legal advisor. He explained how it has been traditionally difficult to prosecute juvenile offenders in federal court. The state of California, however, has a special unit set up to investigate and prosecute juvenile cybercriminals. Other states that identify underage intruders now look for ways to get California to prosecute these offenders, due to California's system. The last way to avoid a trip to the pokey is to hack from overseas locations. A visit to Cybercrime.gov shows plenty of active prosecutions for "hacking," including some foreigners. It's true that the people least likely to be prosecuted are those who physically reside in a country whose law enforcement agencies dislike working with the US government.

However, even a country like Romania is working to catch intruders. I still believe all of this does not bode well for low- to mid-level cyber criminals -- you will be caught. Justice may be slow but it does not appear to give up. I have one caveat -- there must be evidence to support a prosecution. If a victim doesn't collect the sorts of high-fidelity data which can show damage and link it to the intruder's action, it's difficult to attract law enforcement's interest. https://taosecurity.blogspot.com/2004/09/further-musings-on-digital-crime-adam.html Commentary Although NSM for law enforcement has never been a primary driver for me, NSM can certainly enable law enforcement. Collecting evidence in a manner that can survive adversarial legal scrutiny requires elevating your NSM game to another level, but it’s not impossible. Of course, if you don’t collect NSM data of any type, as is often the case, I don’t want to hear your arguments saying why NSM won’t help in a legal case. “Mejor que nada” (“better than nothing”) applies to this situation quite well.

How to Misuse an Intrusion Detection System Wednesday, July 13, 2005 I was dismayed to see the following thread in the bleeding-sigs mailing list recently. Essentially someone suggested using PCRE to look for this content on Web pages and email: (jihad |al Qaida|allah|destroy|kill americans|death|attack|infidels) (washington|london|new york) Here is part of my reply to the Bleeding-Sigs thread. These rules are completely inappropriate. First, there is no digital security aspect of these rules, so the "provider exception" of the wiretap act is likely nullified. Without obtaining consent from the end users (and thereby protection under the "consent exception"), that means the IDS is conducting a wiretap. The administrator could go to jail, or at least expose himself and his organization to a lawsuit from an intercepted party. Second, the manner in which most people deploy Snort would not yield much insight regarding why these rules triggered. At best a normal Snort user would get a packet containing content that caused Snort to alert. That might be enough to determine no real "terrorism" is involved, but it might also be enough to begin an "investigation" that stands on dubious grounds due to my first point. Third, does anyone think real terrorists use any of the words listed in the rules? If anyone does, they have no experience with the counterterrorism world. An IDS should be used to provide indicators of security incidents.

Otherwise, it becomes difficult to justify its operation, legally and ethically. Unfortunately, I saw both rules (at least commented out) in the latest bleeding ruleset. What do you think? https://taosecurity.blogspot.com/2005/07/how-to-misuse-intrusion-detection.html Commentary Periodically someone gets the bright idea to use their NSM data for silly reasons like this. My answer explains why it’s a bad idea, and it still applies today.
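To make the false-positive problem concrete, here is a quick sketch, in Python rather than as a Snort rule, of the keyword pattern proposed on the list. The sample headlines are invented for illustration, and case-insensitive matching is an assumption on my part, since keyword rules of this sort are typically written that way:

```python
import re

# The keyword pattern from the bleeding-sigs proposal, reproduced
# verbatim. Case-insensitive matching is assumed here.
pattern = re.compile(
    r"(jihad |al Qaida|allah|destroy|kill americans|death|attack|infidels)"
    r" (washington|london|new york)",
    re.IGNORECASE,
)

# Entirely innocuous (invented) headlines that the rule would flag:
headlines = [
    "Storms attack Washington region for third day",
    "Zombie film Death London opens to mixed reviews",
]

for h in headlines:
    if pattern.search(h):
        print("ALERT:", h)
```

Both sample headlines trigger the pattern, which illustrates the third point above: the rule detects word adjacency, not terrorism, so every match would demand an "investigation" standing on the dubious legal grounds described in the first point.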

Soccer Goal Security Sunday, August 07, 2005 [This post originally ran with an advertisement showing a soccer goalie focused intently on defending the right side of his net, while an opposing player wearing number 9 kicks the ball into the left side of the net.] I found this ad in Network Computing magazine. It did not address a security concern, but I thought the image was priceless. I see the goalie as representing most preventative security countermeasures. Player 9 is the threat. The soccer ball is an exploit. They are attacking an enterprise, represented by the soccer net. The goalie is addressing the threat he expects, namely someone trying to score from the side of the net he is defending. In many cases the goalie is "fighting the last war;" perhaps the last time he was scored upon came from the side he now defends? The threat is smart and unpredictable, attacking a different part of the net. The net itself (the enterprise) is huge. Not only is the front of the net open, the net itself is riddled with holes. A particularly clever attacker might see his objective as getting the ball in the net using any means necessary. That might include cutting the ball into smaller pieces and sending the fragments through holes in the net. Another attacker might dig his way under the goal and send the ball up through a tunnel. Yet another attacker might wait for the goalie to get tired, or drop his guard, or lose his vision at night. A really vicious threat would attack the goalie himself. Network security monitoring is the device that captured this photo. We might collect indicators of any of the previously mentioned attacks. A traditional IDS or IPS might alert or try to block attacks (goals) passing from outside the front of the net to inside the front of the net. NSM data might reveal vibrations from tunneling under the goal, or small pieces of soccer ball being infiltrated through the back. 
Perhaps the goal itself is slightly raised in the back and the ball is just pushed under!

I would prefer to see a version showing an ice hockey goalie, but I would have to stage and photograph that myself. Apologies to my friends across the pond who call this "football." https://taosecurity.blogspot.com/2005/08/soccer-goal-security-i-found-this-ad.html Commentary I referred to this “soccer goal security” concept many times in the 2000s. The idea was that defenders fixate on what they perceive to be a problem, while intruders exploit another avenue. You need data to bridge that gap, not theories. The situation has changed somewhat, thanks to the rise of better network and endpoint instrumentation. The question is whether there are analysts skilled and motivated enough to interpret and act upon that data.

Further Thoughts on Engineering Disasters Saturday, October 22, 2005 My TiVo managed to save a few more episodes of Modern Marvels. You may remember I discussed engineering disasters last month. This episode of the show of the same title took a broader look at the problem. Three experts provided comments that resonated with me. First, Dr. Roger McCarthy of Exponent, Inc. offered the following story about problems with the Hubble Space Telescope. When Hubble was built on earth, engineers did not sufficiently address issues with the weight of the lens on Earth and deflections caused by gravity. When Hubble was put in orbit, the lens no longer deflected and as a result it was not the proper shape. Engineers on Earth had never tested the lens because they could not figure out a way to do it. So, they launched and hoped for the best -- only to encounter a disaster that required a $50 million orbital repair mission. Dr. McCarthy's comment was "A single test is worth a thousand expert opinions." This is an example of management by fact instead of management by belief, mentioned previously on this blog. Second, Dr. Charles Perrow, author of Normal Accidents: Living With High-Risk Technologies, explained the makings of a disaster. Essentially, he said disasters are caused by the unforeseen consequences of multiple, individually non-devastating, failures in complex systems. Most catastrophes could be prevented if any one of the small failures had not occurred. Third, Mary Schiavo commented on the Challenger disaster. She described the well-known problems with operating the Shuttle's rocket O-

rings in temperatures below 53 degrees F. The Shuttle had launched at lower temperatures prior to the Challenger explosion, but NASA knew they were risking catastrophe. Ms. Schiavo said NASA engineers begged their managers not to let Challenger launch, seeing that chunks of ice covered the launch pad and Shuttle. They were overruled and disaster occurred. This struck a chord with me, because a few days earlier I read a new story in Time about how Steve Jobs gets Apple to bring innovative products to market: Apple CEO Steve Jobs [will] tell you an instructive little story. Call it the Parable of the Concept Car. "Here's what you find at a lot of companies," he says, kicking back in a conference room at Apple's gleaming white Silicon Valley headquarters, which looks something like a cross between an Ivy League university and an iPod. "You know how you see a show car, and it's really cool, and then four years later you see the production car, and it sucks? And you go, What happened? They had it! They had it in the palm of their hands! They grabbed defeat from the jaws of victory! "What happened was, the designers came up with this really great idea. Then they take it to the engineers, and the engineers go, 'Nah, we can't do that. That's impossible.' And so it gets a lot worse. Then they take it to the manufacturing people, and they go, 'We can't build that!' And it gets a lot worse." When Jobs took up his present position at Apple in 1997, that's the situation he found. He and Jonathan Ive, head of design, came up with the original iMac, a candy-colored computer merged with a cathode-ray tube that, at the time, looked like nothing anybody had seen outside of a Jetsons cartoon. "Sure enough," Jobs recalls, "when we took it to the engineers, they said, 'Oh.' And they came up with 38 reasons. And I said, 'No, no, we're doing this.' And they said, 'Well, why?' 
And I said, 'Because I'm the CEO, and I think it can be done.'" Would Steve Jobs have overruled the NASA engineers and launched Challenger? Who knows.

From what I have learned, disasters are prone to happen in complex, tightly-coupled systems. The only way to try to avoid them is to test and monitor their operation, exercise response, and then implement those plans when catastrophe occurs. Anything less is like launching a defective, untested Hubble and hoping for the best, and then paying through the nose to clean up the mess. Here are a few footnotes to this post. Dr. McCarthy's company offers security engineering services, including services for information systems. They are described thus: "We have assembled one of the largest private collections of computerized accident and incident data in the world. Our web-based solutions put this information at your disposal, giving you comprehensive risk data quickly and at low cost." Dr. McCarthy was recently elected to the National Academy of Engineering, which has a Computer Science and Telecommunications Board with an Improving Cybersecurity Research in the United States project. My research for this story also led me to the System Safety Society. https://taosecurity.blogspot.com/2005/10/further-thoughts-on-engineering.html Commentary There’s a lot packed into this post. This is the first reference in this book to my interest in engineering. As will be shown in posts that come later, I am a fan of engineering, but I do not believe engineering, or science for that matter, has all the answers to security problems. The reason is that the “opponent” in engineering and science is the laws of nature, while the opponent in security, or criminal or military matters, is an intelligent, adaptive adversary. The comments on the makings of a disaster, highlighted in bold, unfortunately apply whether one is facing the laws of nature or a sentient opponent.

More on Engineering Disasters and Bird Flu Monday, October 24, 2005 Here's another anecdote from the Engineering Disasters story I wrote about recently. In 1956 the cruise ship Andrea Doria was struck and sunk by the ocean liner Stockholm. At that time radar was still a fairly new innovation on sea vessels. Ship bridges were dimly lit, and the controls on radar systems were not illuminated. It is possible that the Stockholm radar operators misinterpreted the readings on their equipment, believing the Andrea Doria was 12 miles away when it was really 2 miles away. The ships literally turned towards one another on a collision course, based on faulty interpretation of radar contact in the dense fog. Catastrophe ensued. This disaster shows how humans can never be removed from the equation, and they are often at center stage when failures occur. The commentator on the show said a 10 cent light bulb illuminating the radar controls station could have shown the radar range was positioned in a setting different from that assumed by the operator. Following the Andrea Doria collision, illumination was added to ship radar controls. This story reminded me that the latest security technology is worthless -- or even worse, damaging -- in the hands of people who are not trained or able to use it properly. On a different subject, I heard an interview on NPR with Health and Human Services Secretary Mike Leavitt about bird flu. He likened the situation to "surveillance" of a dry forest during fire season. He said that the best defense was vigilance and rapid response. His analogy assumed being nearby when a small fire erupts. First responders who are quickly on the scene can stamp out a fire before it becomes uncontrollable. If the response

team is unaware of the fire, it can spread and then be beyond containment. He concluded the interview saying "ultimately, another pandemic will come. Right now we are not prepared." I thought his comments applied well to digital security incidents. NSM is surveillance, and incident response helps stamp out fires (or bird flu outbreaks) quickly before they exceed an organization's capacity to deal with them. Is your organization ready? If you want to know, TaoSecurity provides services like incident response training and CSIRT assessments and evaluations. https://taosecurity.blogspot.com/2005/10/more-on-engineering-disasters-and-bird.html Commentary This post reminded me that before the rise of reality TV in the woods, or on the ice, or in the mountains, or at sea, or wherever else producers film their manufactured dramas, cable TV produced some informative programs! I published this post when I was working as a solo consultant for TaoSecurity LLC, trying to attract attention for my services as well as sharing my thoughts on the security scene.

Thoughts on Patching Thursday, April 27, 2006 As I continue through my list of security notes, I thought I would share a few ideas here. I recorded these while seeing Ron Gula discuss vulnerability management at RMISC. Many people recommend automated patching, at least for desktops. In the enterprise, some people believe patches should be tested prior to rollout. This sounds like automated patching must be disabled. I'm wondering if anyone has implemented delayed automated patching. In other words, automatic updates are enabled, but with a two or three day delay. Those two or three days give the enterprise security group time to test the patch. If everything is ok, they let the automated patch proceed. If the patch breaks something critical, they instruct the desktops to not install the patch until further orders. I think this approach strikes a good balance since I would prefer to have automated patch installation be the default tactic, not manual installation. Determining which systems are vulnerable results in imagining a continuum of assessment tactics. At the most unobtrusive level we have a "paper review" of an inventory of systems and their reported patch levels. Next comes passive assessment of traffic to and from clients and servers. Traditional vulnerability scanning, without logging in to the target, is the next least obtrusive way to assess hosts. Logging in to a host with credentials is another option. Installing an agent on the host is a medium-impact approach. Exploiting the host is the final way to definitively see if a host is

vulnerable. On a related note, Ron mentioned that the costs of demonstrating compliance far exceed those of maintaining compliance. This is sad. Ron also noted he believes auditors should work for the CFO and not the CIO. I agree. https://taosecurity.blogspot.com/2006/04/thoughts-on-patching-as-i-continue.html Commentary This post could have been written today. We have not “solved” the “patching problem” because it is a so-called “wicked problem.”
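The delayed automated patching tactic described in the post reduces to a simple policy check. Here is a minimal sketch; the three-day hold window, the function name, and the patch identifiers are hypothetical illustrations, not any real patch-management product's API:

```python
from datetime import date, timedelta

HOLD_DAYS = 3  # hypothetical window for the security group to test a patch

def should_install(released: date, today: date, vetoed: set, patch_id: str) -> bool:
    """Install automatically once the hold window has passed,
    unless the security group has vetoed the patch."""
    if patch_id in vetoed:
        return False  # testing showed the patch breaks something critical
    return today >= released + timedelta(days=HOLD_DAYS)

# A patch released Monday installs automatically on Thursday,
# unless the security group intervenes during the hold window.
assert not should_install(date(2006, 4, 24), date(2006, 4, 25), set(), "KB123")
assert should_install(date(2006, 4, 24), date(2006, 4, 27), set(), "KB123")
assert not should_install(date(2006, 4, 24), date(2006, 4, 30), {"KB123"}, "KB123")
```

The point of the sketch is that automation remains the default: the security group only has to act when a patch fails testing, rather than approving every rollout manually.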

Why Prevention Can Never Completely Replace Detection Thursday, April 27, 2006 So-called intrusion prevention systems (IPS) are all the rage. Since the 2003 Gartner report declaring intrusion detection systems (IDS) dead, the IPS has been seen as the "natural evolution" of IDS technology. If you can detect an attack, goes a popular line of reasoning, why can't (or shouldn't) you stop it? Here are a few thoughts on this issue. People who make this argument assume that prevention is an activity with zero cost or down side. The reality is that the prevention action might just as easily stop legitimate traffic. Someone has to decide what level of interruption is acceptable. For many enterprises -- especially those where interruption equals lost revenue -- IPS is a non-starter. (Shoot, I've dealt with companies that tolerated known intrusions for years because they didn't want to "impact" the network!) If you're not allowed to interrupt traffic, what is the remaining course of action? The answer is inspection, followed by manual analysis and response. If a human decides the problem is severe enough to warrant interruption, then a preventative measure is deployed. In some places, prevention is too difficult or costly. I would like to know how one could use a network-based control mechanism to stop a host A on switch X from exploiting host B on switch X. Unless the switch itself enforces security controls, there is no way to prevent this activity. However, a sensor on switch X's SPAN port could detect and report this malicious activity. Note that I think we will see this sort of access control move into switches. It's another question whether anyone will activate these features. I think traffic inspection is best used at boundaries between trusted systems. Enforcement systems make sense at boundaries between trusted and

untrusted systems. Note that if you don't trust individual hosts inside your organization (for whatever reason), you should enforce control on a per-host basis within the access switch. https://taosecurity.blogspot.com/2006/04/why-prevention-can-never-completely.html Commentary Two posts in one day! When you make your own schedule, you have time to write like this. This post still applies today. Just replace “IPS” with whatever network-based, active, preventative system is popular.

Analog Security is Threat-Centric Thursday, April 27, 2006 If you were to pass a dark alley, I doubt you would want to enter it. You could imagine all sorts of nasty encounters that might deprive you of property, limb, or life. Yet, few people can imagine the sorts of danger they encounter when using a public PC terminal, or connecting to a wireless access point, or visiting a malicious Web site with a vulnerable browser. This is the problem with envisaging risk that I discussed earlier this week. Furthermore, security in the analog world is much more threat-centric. If I'm walking near or in a dark alley, and I see a shady character, I sense risk. I don't walk down the street checking myself for vulnerabilities, ignoring the threats watching me. ("Exposed neck? Could get hurt there. Bare hands? Might get burnt by acid." Etc...) It seems the digital security model is like an unarmed combatant in a war zone. Survivability is determined solely by vulnerability exposure, the attractiveness of one's assets to a threat, and any countermeasures that might disrupt threats. In the analog world, one can employ a variety of tactics to improve survivability. Avoiding risky areas is the easiest, but let's assume one has to enter dangerous locations. A potential victim could arm himself, either using a weapon or martial arts. He could travel in groups, hire a bodyguard, or enlist the police's aid. The term "hack-back" crops up in the digital scenario. This is really not a useful approach, because hacking the system attacking you does absolutely nothing to address the real threat -- the criminal at the keyboard. In the analog world, consider the consequences for "hacking back." If you shoot an assailant, you'll have to explain yourself to the police or potentially a court of law. You probably can't shoot someone for simply being on your property, but you can if they threaten or try to harm you.

On a related note, we need some means to estimate threat level in a systematic, repeatable manner. When I say "threat" I mean threat, not vulnerability. Something like a system of distributed honeypots with distinct configurations might be helpful. Time-to-exploit for a given patch set might be tracked. I know the Honeynet Project periodically issues reports on how long it takes to 0wn a box, but it might be neat to see this in a regular, formal manner. https://taosecurity.blogspot.com/2006/04/analog-security-is-threat-centric-if.html Commentary Wait, three posts in one day? This was one of many posts where I argued about the differences and similarities between a world we intuitively understand, the physical world, and one where our intuition tends to break down, the virtual world. One can develop an intuition for the virtual world, just as one can develop an intuition for seemingly illogical concepts like quantum theory, but it takes training and experience.

Control-Compliant vs Field-Assessed Security Friday, July 07, 2006 Last month's ISSA-NoVA meeting featured Dennis Heretick, CISO of the US Department of Justice. Mr. Heretick seemed like a sincere, devoted government employee, so I hope no one interprets the following remarks as a personal attack. Instead, I'd like to comment on the security mindset prevalent in the US government. Mr. Heretick's talk sharpened my thoughts on this matter. Imagine a football (American-style) team that wants to measure their success during a particular season. Team management decides to measure the height and weight of each player. They time how fast the player runs the 40 yard dash. They note the college from which each player graduated. They collect many other statistics as well, then spend time debating which ones best indicate how successful the football team is. Should the center weigh over 300 pounds? Should the wide receivers have a shoe size of 11 or greater? Should players from the north-west be on the starting line-up? All of this seems perfectly rational to this team. An outsider looks at the situation and says: "Check the scoreboard! You're down 42-7 and you have a 1-6 record. You guys are losers!" In my opinion, this summarizes the mindset of US government information security managers. Here are some examples from Mr. Heretick's talk. He showed a "dashboard" with various "metrics" that supposedly indicate improved DoJ security. The dashboard listed items like:

● IRP Reporting: meaning Incident Response Plan reporting, i.e., does the DoJ unit have an incident response plan? This says nothing about the quality of the IRP.
● IRP Exercised: has the DoJ unit exercised its IRP? This says nothing about the effectiveness of the IRT in the exercise.
● CP Developed: meaning Contingency Plan developed, i.e., does the DoJ unit have a contingency plan should disaster strike? This also says nothing about the quality of the CP.
● CP Exercised: has the DoJ unit exercised its CP? Same story as the IRP.

Imagine a dashboard, then, with all "green" for these items. They say absolutely nothing about the "score of the game." How should the score be measured then? Here are a few ideas, which are neither mutually exclusive nor exceedingly well-thought-out:

● Days since last compromise of type X: This is similar to a manufacturing plant's "days since an accident" report or a highway's "days since a fatality" report. For some sites this number may stay zero if the organization is always compromised. The higher the number, the better.
● System-days compromised: This looks at the number of systems compromised, and for how many days, during a specified period. The lower, the better.
● Time for a pen testing team of [low/high] skill with [internal/external] access to obtain unauthorized [unstealthy/stealthy] access to a specified asset using [public/custom] tools and [complete/zero] target knowledge: This is from my earlier penetration testing story.

These are just a few ideas, but the common theme is they relate to the actual question management should care about: are we compromised, and how easy is it for us to be compromised? I explained my football analogy to Mr. Heretick and asked if he would adopt it. He replied that my metrics would discourage DoJ units from reporting incidents, and that reporting incidents was more important to him than anything else. This is ridiculous, and it indicates to me that organizations like this (and probably the whole government) need independent, Inspector General-style units that roam freely to assess networks and discover intruders.
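As an illustration, the "system-days compromised" metric reduces to summing the overlap between each incident's window and the reporting period. A minimal sketch, assuming incidents are recorded as hypothetical (host, first day, last day) tuples:

```python
from datetime import date

def system_days_compromised(incidents, period_start: date, period_end: date) -> int:
    """Sum, over all compromised hosts, the days each spent compromised
    within the reporting period (inclusive). Lower is better."""
    total = 0
    for host, start, end in incidents:
        overlap_start = max(start, period_start)
        overlap_end = min(end, period_end)
        if overlap_end >= overlap_start:
            total += (overlap_end - overlap_start).days + 1
    return total

incidents = [
    ("web01", date(2006, 7, 1), date(2006, 7, 3)),   # 3 days inside July
    ("db02",  date(2006, 6, 28), date(2006, 7, 1)),  # only 1 day falls in July
]
assert system_days_compromised(incidents, date(2006, 7, 1), date(2006, 7, 31)) == 4
```

Unlike the dashboard items criticized above, this number moves only when the defensive outcome changes, not when a plan document is filed.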

In short, the style of "security" advocated by government managers seems to be "control-compliant." I prefer "field-assessed" security, although I would be happy to replace that term with something more descriptive. In the latest SANS NewsBites (link will work shortly) Alan Paller used the term "attack-based metrics," saying the following about the VA laptop fiasco: "if the VA security policies are imprecise and untestable, if the VA doesn't monitor attack-based metrics, and if there are no repercussions for employees who ignore the important policies, then this move [giving authority to CISOs] will have no impact at all." PS: Mr. Heretick shared an interesting risk equation model. He uses the following to measure risk.

● Vulnerability is measured by assessing exploitability (0-5), along with countermeasure effectiveness (0-2). Total vulnerability is exploitability minus countermeasures.
● Threat is measured by assessing capability (1-2), history (1-2), gain (1-2), attributability (1-2), and detectability (1-2). Total threat is capability plus history plus gain minus attributability minus detectability.
● Significance (i.e., impact or cost) is measured by assessing loss of life (0 or 4), sensitivity (0 or 4), operational impact (0 or 2), and equipment loss (0 or 2). Total significance is loss plus op impact plus sensitivity plus equipment loss.

Total risk is vulnerability times threat times significance, with < 6 very low, 6-18 low, 19-54 medium, 55-75 high, and >75 very high. https://taosecurity.blogspot.com/2006/07/control-compliant-vs-field-assessed.html Commentary This is one of my all-time favorite posts because it captures one of my key ideas: evaluating security posture by observing outcomes, rather than theorizing about security posture by measuring inputs. My genius for marketing shines in this post, as I call the first idea “field-assessed” and the second “control-compliant.” Yikes.
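Mr. Heretick's risk model is simple arithmetic, which makes it easy to exercise. In this sketch the function and parameter names are mine, but the scales and scoring bands follow the model as reported in the post:

```python
def heretick_risk(exploitability, countermeasures,
                  capability, history, gain, attributability, detectability,
                  life, sensitivity, op_impact, equipment):
    """Compute total risk and its band per the model described in the post."""
    vulnerability = exploitability - countermeasures           # (0-5) minus (0-2)
    threat = capability + history + gain - attributability - detectability  # each 1-2
    significance = life + sensitivity + op_impact + equipment  # 0/4, 0/4, 0/2, 0/2
    risk = vulnerability * threat * significance
    if risk < 6:
        band = "very low"
    elif risk <= 18:
        band = "low"
    elif risk <= 54:
        band = "medium"
    elif risk <= 75:
        band = "high"
    else:
        band = "very high"
    return risk, band

# A highly exploitable system facing a capable, hard-to-attribute threat:
# vulnerability = 4, threat = 4, significance = 8, so risk = 128 ("very high").
risk, band = heretick_risk(5, 1, capability=2, history=2, gain=2,
                           attributability=1, detectability=1,
                           life=0, sensitivity=4, op_impact=2, equipment=2)
assert (risk, band) == (128, "very high")
```

Running a few scenarios through the formula makes its character obvious: it is an input-scoring exercise, which is exactly the "control-compliant" habit the post criticizes.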

Of Course Insiders Cause Fewer Security Incidents Tuesday, July 11, 2006 Today's SANS NewsBites points to this eWeek article, which in turn summarizes this Computer Associates press release. It claims "more than 84% [of survey respondents] experienced a security incident over the past 12 months and that the number of breaches continues to rise." The SANS editor piqued my interest with this comment: "(Honan): It is interesting to note that this survey highlights the external threat is becoming more prevalent than the internal one." (emphasis added) "Becoming more prevalent?" This is Mr. Honan's answer to this part of the CA story: "Of the organizations which experienced a security breach, 38% suffered an internal breach of security." That means 62% experienced an external breach, or perhaps less if one could not determine the source of the breach. I highlight "becoming more prevalent" because it indicates the speaker (like countless others) fell for the "80% myth," which is a statement claiming that 80% of all security incidents are caused by insiders. I document in Tao the history of this myth. I challenge anyone who believes the 80% myth to trace it back to some definitive source. If you do you will find it leads nowhere reputable. If the 80% myth were true, security would be a fairly easy problem to solve. The biggest problem I see with modern digital security is the inability to remove threats from the risk equation. In other words, victims of security incidents lack the personal power to eliminate threats; only the police or military can really remove threats from the picture. Since the police are ill-equipped and overwhelmed, and the military is similarly not well-positioned

to eliminate threats, attackers continue to assault with impunity. However, if the majority (the vast majority, if you believe the 80% myth) of threats are internal, this completely changes the situation. To immediately and irrevocably alter the risk equation, all an employer or organization needs to do is identify and fire or remove the internal bad apples. Problem solved. "Oh, that's too hard," I'm going to hear. Maybe, but compare that option (which happens every day) to identifying, apprehending, prosecuting, and jailing a Romanian. Since organizations have the tools to largely remove the insider threat, but security incidents continue to be a problem, insiders must be dwarfed by the size of the outsider threat community. However, as I've said elsewhere, insiders will always be better informed and positioned to cause the most damage to their victims. They know where to hurt, how to hurt, and may already have all the access they need to hurt their victim. The bottom line is that the number of external attackers far exceeds the number of internal attackers. https://taosecurity.blogspot.com/2006/07/of-course-insiders-cause-fewer.html Commentary First, apologies to any Romanians reading this post. I had some personal experience investigating Romanian criminal actors, so I was biased back then. Sorry! Second, this post highlights one of the many myths that periodically surface in the security ecosystem. For the truth about the so-called “80%” myth about insiders, please see my first book.

National Digital Security Board Monday, August 21, 2006 While reading Hacker's Challenge 3, I was reminded of some of my earlier thoughts on digital security disasters. I wrote: My concept is simple: when a bridge fails in the "analog" world, everyone knows about it. The disaster is visible, and engineers can analyze and learn from the event. The lessons they take away make future bridges stronger and safer. I do not see this happening in the digital world. When I wrote that post I requested hearing stories from blog readers on their own security disasters. I received zero stories. I was naive to think anyone would want to talk about this issue, unless in a forum like Hacker's Challenge. At least there the authors receive royalties and fame, however meager. While watching a recent Nova episode on the Concorde, it mentioned a terrible crash which occurred in 2000. It occurred to me that if this crash affected an American airline, the National Transportation Safety Board would be involved. The NTSB Web site says: The National Transportation Safety Board is an independent Federal agency charged by Congress with investigating every civil aviation accident in the United States and significant accidents in the other modes of transportation -- railroad, highway, marine and pipeline -- and issuing safety recommendations aimed at preventing future accidents... Since its inception in 1967, the NTSB has investigated more than 124,000 aviation accidents and over 10,000 surface transportation accidents. In so doing, it has become one of the world's premier accident investigation agencies. On call 24 hours a day, 365 days a year, NTSB investigators travel throughout the country and to every corner of the world to investigate significant accidents and develop factual records

and safety recommendations. This is exactly what we need in digital security. Not the NTSB, but the NDSB -- the National Digital Security Board. The NDSB should investigate intrusions disclosed by companies as a result of existing legislation. Like the NTSB, the NDSB would probably need legislation to authorize these investigations. An Amazon.com search found Safety in the Skies: Personnel and Parties in NTSB Aviation Accident Investigations, which I happened to find online as well. Early on it states: The NTSB bears a significant share of the responsibility for ensuring the safety of domestic and international air travel. Although it is not a regulatory agency, the NTSB's influence weighs heavily when matters of transportation safety are at issue. The NTSB is independent from every other Executive Branch department or agency, and its mission is simple and straightforward: to investigate and establish the facts, circumstances, and the cause or probable cause of various kinds of major transportation accidents. The safety board is also charged with making safety recommendations to federal, state, and local agencies to prevent similar accidents from happening in the future. This responsibility is fundamental to ensuring that unsafe conditions are identified and that appropriate corrective action is taken as soon as possible. However, the safety board has no enforcement authority other than the persuasive power of its investigations and the immediacy of its recommendations. In the scheme of government, the agency's clout is unique but is contingent on the independence, timeliness, and accuracy of its factual findings and analytical conclusions. I intend to research this issue further and perhaps write more formally about this idea. Any NTSB people reading this blog?

I also think we should have a United States Cyber Corps, but that's another story… https://taosecurity.blogspot.com/2006/08/national-digital-security-board.html Commentary I wrote that “The NDSB should investigate intrusions disclosed by companies as a result of existing legislation.” The problem we faced back in 2006 was that we needed details on attacker methods in order to better design, deploy, and operate defenses. I don’t think that is the case today, thanks to the rise of the threat intelligence industry and the codification of frameworks like MITRE ATT&CK®. This post shows that certain concepts have long legs, however. In 2019 Rob Knake contacted me about the idea, and I participated in some events associated with creating a modern version. I also noted that the end of the post called for a “United States Cyber Corps,” which I later referenced as a “US Cyber Force.”

Security Is Not Refrigeration Saturday, October 07, 2006 Analogies are not the best way to make an argument, but they help when debating abstract concepts like "virtual trust". Consider a refrigerated train car. Refrigeration is definitely a "business enabler." Without refrigeration, food producers on the west coast couldn't sell their goods to consumers on the east coast. Refrigeration opened new markets and kept them open. However, refrigeration is not the business. Refrigeration is a means to an end -- namely selling food to hungry people. Refrigeration does not generate value; growing and selling food does. (Refrigeration is only a business for those that sell refrigerated train cars and supporting devices.) You might think "security" is like refrigeration. Like refrigeration, security could be said to "enable" business. Like refrigeration, security does not generate value; selling a product or service through a "secure" channel does. So why is "security" really not refrigeration? The enemy of refrigeration is heat. Heat is an aspect of nature. Heat is not intelligent. Heat does not adapt to overcome the refrigeration technology deployed against it. Heat does not choose its targets. One cannot deter or jail or kill heat. The enemy of "security" is the intruder. The intruder is a threat, meaning a party with the capabilities and intentions to exploit a vulnerability in an asset. Threats are intelligent, they adapt, they persist, they choose, and they react to their environment. In fact, an environment which on Monday seems perfectly "secure" can be absolutely compromised on Wednesday by the release of an exploit in response to Tuesday's Microsoft vulnerability announcements. Returning to the idea of "enablement" -- honestly, who cares? I'll name some other functions that enable business -- lawyers, human resources,

facility staff. The bottom line is that "virtual trust" is an attempt to "align" (a great CISO term) security with "business objectives," just as IT is trying to "align" with business objectives. The reason "IT alignment" has a chance to succeed in creating real business value is that IT is becoming, in itself, a vendor of goods and services. Unless a business is actually selling security -- like an MSSP -- security does not generate value. Why is anyone even bothering to debate this? The answer is money. If your work is viewed as a "cost center," the ultimate goal is to remove your budget and fire you. If you're seen as an "enabler," you're at least seen as being relevant. If you can spin "enablement" into "revenue generation," that's even better! Spend $X on security and get $Y in return on investment! Unfortunately that is not possible. Finally, I don't think anyone would consider me "anti-security." I'm not arguing that security is irrelevant. In fact, without security a business can be absolutely destroyed. However, you won't find me saying that security makes anyone money. Some argue that spending money on security prevents greater loss down the line, perhaps by containing an intrusion before it avalanches into an immense compromise. That's still loss prevention. Of course security "enables" business, but enablement doesn't generate revenue; it supports a revenue-generating product or service. This is probably my last word on this in a while. I need to turn back to my own business! https://taosecurity.blogspot.com/2006/10/security-is-not-refrigeration.html Commentary This was one of many posts in my battle against “security ROI.” Many others shared my frustration, such as Bruce Schneier. When everything is framed as “ROI,” it devalues the contribution made by business areas that aren’t easily quantified.

Response to Daily Dave Thread Friday, October 27, 2006 I don't subscribe to the Daily Dave (Aitel) mailing list, but I do keep a link to the archives on my interests page. Some of the offensive security world's superstars hang out on that list, so it makes for good reading. The offensive side really made an appearance with yesterday's thread, where Dave's "lots of monkeys staring at a screen....security?" thread says: My feeling is that IDS is 1980's technology and doesn't work anymore. This makes Sourcefire and Counterpane valuable because they let people fill the checkbox at the lowest possible cost, but if it's free for all IBM customers to throw an IDS in the mix then the price of that checkbox is going to get driven down as well. First, it's kind of neat to see anyone speaking about "IDS" instead of "IPS" here. I think this reflects Dave's background working for everyone's favorite three letter agency. The spooks and .mil types (like me) tend to be the last people to even think about detection these days. Second, it seems to be popular to think of "IDS" as strictly a signature-based technology, as Gadi Evron believes: IDS devices are signature based and try to detect bad behaviour using, erm, a sniffer or equivalent. That hasn't been true for a while, even if you're talking about Snort. Sure, there are tons of signatures, but they're certainly not just for content matching. If you're thinking about Bro, signatures aren't really even the main issue -- protocol anomaly detection is. Dave posts another message that is a little worrisome: Making IDS part of a defense in depth strategy is giving it some credit for actually providing defense, which it doesn't do. The people

who win the IDS game are the people who spend the least money on it. This is why security outsourcing makes money - it's just as worthless as maintaining the IDS yourself, but it costs less. Likewise, Snort is a great IDS solution because it does nothing but it does it cheaper. The technology curve is towards complex, encrypted, asynchronous protocols. The further into time you look, the worse the chances are that sniffing traffic is an answer to anything. The market is slowly realizing this technology's time has passed, but in the meantime lots of people are making giant bus-loads of cash. Good for them. But IDS technology isn't relevant to a security discussion in this day and age and it's not going to be anytime soon. I will agree that many commercial managed security monitoring services are worthless, to the extent that they are ticket- and malware-oriented. However, the idea that Snort "does nothing" is just wrong. Hopefully Dave is just being inflammatory to spur discussion. Sure, Snort is not going to detect an arbitrary outbound encrypted covert channel using port 443. That doesn't mean Snort isn't useful for the hundreds of other attack patterns still seen in the wild. Since the majority of the posters to this thread are offensive, I doubt they have read any of my books. For example, reverse engineering guru Halvar Flake follows up with this insight: I still agree with the concept of replacing an IDS with just a large quantity of tapes on which to archive all traffic. IDSs will never alert you to an attack-in-progress, and by just dumping everything onto a disk somewhere you can at least do a halfways-decent forensics job thereafter. Since everybody and his dog is doing cryptoshellcode these days you won't be all-knowing, but at least you should be able to properly identify which machine got owned first. Welcome to network security monitoring, albeit at least a decade late. 
The fact that the criminal underground is using covert and encrypted channels now doesn't mean they weren't used 10 plus years ago, when smart people in the spook and .mil worlds needed a way to gain some sort of awareness of

network activities by more dangerous adversaries. Most respected IDS old-school critic Tom Ptacek isn't convinced: I am waiting for someone to tell me the story about how an IDS saved their bacon. I'm not interested in the story about how it found the guy with the spyware infection or the bot installation; secops teams find those things all the time in their firewall logs and they don't freak out about it when they do. The last time I manned a console full-time as a "SOC monkey," for the Air Force in 1998-2001 and at Ball Aerospace in 2001-2002, we found intrusions all the time. I expect several people in the #snort-gui channel where I idle on irc.freenode.net also have stories to share. I'll have more to say on this later. Tom continues: This "signature" vs. "real intrusion detection" thing is a big red herring. Intrusion detection has been an active field of research for over 15 years now and apart from Tripwire I can't point to anything operationally valuable it has produced. This sounds like the "Snort is worthless" argument Dave proposed. Finally: Halvar, when you figure out how to parallelize enough striped tape I/O to keep up with a gigE connection, then, Halvar, then I will respect you. This is another common argument. Most every detection critic argues their pipes are too big to do any useful full content collection. Let's just say that is not a problem for everyone. Many, many organizations connect to the Internet using OC-3s (155 MBps), fractional OC-3s, T-3s (45 Mbps) and below. Full content collection, especially at the frac OC-3 (say 60 Mbps) and lower, is no problem -- even for commodity hardware, if you use Intel NICs, a solid OS, and fast, large hard drives. Even if you drop some small percentage of the traffic, so what? What are the odds that you drop

everything that is relevant to your investigation, all the time? What if your pipes really are too big for full content collection, say in the core of the network? I would argue that's not the place to do full content collection, but let's say you are told to "do something" about detection in a high-bandwidth environment. That's where the other NSM data types come into play -- namely session data and statistical data. Can't save every packet, or you don't want to? Save sessions describing who talked to who, when, using what protocols and services, and how much data was transferred. That is absolute gold for traffic analysis, and it doesn't matter if it's encrypted. At the very least you can profile the traffic statistically. The root of this problem with this discussion is the narrow idea that a magic box can sit on an arbitrary network and tell you when something "bad" happens. That absolutely won't be possible, at least not for every imaginable "bad" case. The "IDS" has been pigeonholed in the same way the "firewall" has -- as a product and not a real system. A standard "IDS" isn't an "intrusion detection system" at all; it's an attack indication system. Snort gives you a hint that something bad might be happening. You need the rest of your NSM data to determine what is going on. You can also start with non-alert NSM data (as described in this war story) and investigate intrusions. Similarly, a firewall isn't necessarily stopping attacks; it should be enforcing an access control policy. A real detection system identifies deviations from policy, and perhaps should be called a network policy violation detector. A real network policy enforcement system prevents policy violations. The point is that neither has to be boxed into an appliance and sold as a "NPVD" or "NPES". (As you can see, acronyms which tend to accurately describe a system's functionality are completely marketing-unfriendly.) 
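The session-data idea described above -- recording who talked to who, when, with which services, and how much data moved -- can be sketched in a few lines. The toy packet-record format and field names below are illustrative assumptions, not the output of any real NSM tool.

```python
# Toy packet records: (timestamp, src_ip, dst_ip, proto, src_port, dst_port, bytes)
# These values are invented for illustration.
packets = [
    (1.0, "10.0.0.5", "192.0.2.10", "tcp", 51000, 80, 400),
    (1.2, "192.0.2.10", "10.0.0.5", "tcp", 80, 51000, 1500),
    (1.3, "10.0.0.5", "192.0.2.10", "tcp", 51000, 80, 200),
    (2.0, "10.0.0.6", "198.51.100.7", "udp", 53001, 53, 70),
]

def to_sessions(packets):
    """Collapse individual packets into bidirectional session records."""
    sessions = {}
    for ts, src, dst, proto, sport, dport, size in packets:
        # Normalize direction so both halves of a conversation share one key.
        key = (proto,) + tuple(sorted([(src, sport), (dst, dport)]))
        rec = sessions.setdefault(key, {"start": ts, "end": ts,
                                        "bytes": 0, "packets": 0})
        rec["start"] = min(rec["start"], ts)
        rec["end"] = max(rec["end"], ts)
        rec["bytes"] += size
        rec["packets"] += 1
    return sessions

for key, rec in to_sessions(packets).items():
    proto, (a, ap), (b, bp) = key
    print(f"{a}:{ap} <-> {b}:{bp} {proto} pkts={rec['packets']} "
          f"bytes={rec['bytes']} dur={rec['end'] - rec['start']:.1f}s")
```

Even this crude summary answers the traffic-analysis questions the post raises -- who, when, which service, how much -- without retaining a single payload byte, which is why session data survives encryption.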
I'll conclude by saying that I agree with Dave about "monkeys" staring at screens. Many of those sorts of analysts are not doing NSM-centric work that would truly discover intrusions. Yes, the network is a tough place to detect. However, I've argued before that in an age of ubiquitous kernel-mode rootkits, NSM is needed more than ever. If you can't trust a rootkit-controlled host to tell you what's happening, why would you ignore the network? Sure, the traffic could be covert, encrypted, and so forth, but if the pattern of activity isn't normal you can verify that at least something suspicious is happening. It's time for another book.

https://taosecurity.blogspot.com/2006/10/response-to-daily-dave-thread.html

Commentary

I believe my responses to the details of the post speak for themselves. It reminds me that those who primarily perform offensive security shouldn’t necessarily be expected to be experts in defense. Also, no other book arrived until 2013!

Incorrect Insider Threat Perceptions Wednesday, December 27, 2006

Search my blog for "insider threat" and you'll find plenty of previous posts. I wanted to include this post in my earlier holiday reading article, but I figured it was important enough to stand alone. I'm donning my flameproof suit for this one.

The cover story for the December 2006 Information Security magazine, Protect What's Precious by Marcia Savage, clued me into what's wrong with security management and their perceptions. This is how the article starts:

As IT director at a small manufacturer of specialized yacht equipment, Michael Bartlett worries about protecting the firm's intellectual property from outsiders. But increasingly, he's anxious about the threat posed by trusted insiders. His agenda for 2007 is straightforward: beef up internal security. "So far, we've been concentrating on the perimeter and the firewall, and protecting ourselves from the outside world," says Bartlett of Quantum Marine Engineering of Florida. "As the company is growing, we need to take better steps to protect our data inside."

Bartlett voices a common concern for many readers who participated in Information Security's 2007 Priorities Survey. For years, organizations' security efforts focused on shoring up network perimeters. These days, the focus has expanded to protecting sensitive corporate data from insiders--trusted employees and business partners--who might either maliciously steal or inadvertently leak information.

That sounds reasonable. As I see it, however, this shift to focus on the "inside threat" risks missing threats that are far more abundant. First things first. Inside threat is not new. Check out the lead line from a security story:

You've heard it time and time again: Insiders constitute the greatest threat to your organization's security. But what can you do about it?

That's the lead from a July 2000 Information Security article called "Managing the Threat from Within". Let's think about this for a moment. InfoSecMag in Dec 2006 mentioned that "organizations' security efforts focused on shoring up network perimeters," so turning inwards seems like a good idea. Wasn't looking inwards a good idea already in 2000? I'm probably not communicating my point very well, so here is another excerpt from the same Dec 2006 article:

Glen Carson, information security officer for California's Victim Compensation and Government Claims Board, says the problem stems more from a lack of user education than poor authentication. His priority is education: explaining to the 350 users in his agency why data security is important and how it will help them in the long run. "We recently completed a third-party security assessment and got a good test of our exterior shell, but internally our controls were lacking," he says.

I wonder if that "good test of our exterior shell" included client-side exploitation? I doubt it. Do you see where I am going? Here's one other excerpt.

Mass-mailing worms may have gone the way of the boot-sector virus, but that doesn't mean security managers don't have malware on their radar... Yet there hasn't been a major outbreak since the Sasser worm in 2004, so what's all the fuss? Security managers will tell you that the lack of activity says a lot about the maturation of prevention technologies, advances in automated patch management tools, effectiveness of user awareness campaigns, and overall layered defense strategies.

Ok, are you laughing now? The reason why we're not seeing massive worms is that there's no money to be made in it. Everything is targeted these days. Even InfoSecMag admits it:

It's no secret that hacker motivations have changed from notoriety to money. Many of today's worms carry key-logging trojans that make off with your company's most precious assets. Attacks are targeted, often facilitated by insiders. Rather than relying on social engineering to move infected email attachments from network to network, hackers are exploiting holes in browsers, using Javascript attacks to hijack Web sessions and steal data.

Exactly (minus the "facilitated by insiders" part -- says who, and why bother when remote client-side attacks are so easy?). Here's my point: why are security managers so worried about Eva the Engineer or Stan the Secretary when Renfro the Romanian is stealing data right now? I read somewhere (I can't cite it now) that something like 70 million hosts on the Internet may be under illegitimate control. It may make sense to speak more of the number of hosts not compromised instead of those that are compromised. In 2004 the authors of the great book Rootkit claimed all of the Fortune 500 was 0wned. Why do we think it's any different now?

It's possible that taking steps to control trusted insiders will also slow down outsiders who have gained a foothold inside the enterprise. However, I don't see too many people clamping down on privileged users, and guess who powerful outsiders will be acting as when they compromise a site? Of course we should care about insiders, and the insider threat is the only threat you can control. Outsiders are far more likely to cause an incident because, especially with the rise of client-side attacks, they are constantly interacting with your users. The larger the number of users you support, the greater the number of targets an outsider can exploit. Sure, more employees means more insider threats, but let's put this in perspective!
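The claim that more users means more targets for client-side attacks can be made concrete with a back-of-the-envelope calculation. Assume, purely for illustration, that each user independently has some small chance of falling to a client-side attack in a given period; the chance that at least one user is compromised then grows quickly with headcount.

```python
# If each user independently has probability p of falling to a client-side
# attack during some period, the chance that at least one user in an
# organization of n users is compromised is 1 - (1 - p)^n.
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

p = 0.01  # assumed 1% per-user compromise probability -- illustrative only
for n in (10, 100, 1000):
    print(f"{n:>5} users: {p_at_least_one(p, n):.1%} "
          "chance of at least one compromise")
```

With these assumed numbers, an organization of a thousand users is all but guaranteed at least one successful client-side compromise, which is the post's point: outsiders constantly interacting with a large user population generate far more incidents than the rare malicious insider.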
The fact that you offer a minimal external Internet profile does not mean you're "safe" from outsiders and that you can now shift to inside threats. The outsiders are deadlier now than they've ever been. They are in your networks and acting quietly to preserve their positions. Give Eva and Stan a break and don't forget Renfro. He's already in your company.

https://taosecurity.blogspot.com/2006/12/incorrect-insider-threat-perceptions.html

Commentary

There’s a lot packed into this post. Suffice it to say that it’s important to instrument and investigate your environment so that you know what matters to you, and not worry so much about what you read in the media!

How Many Spies? Wednesday, December 27, 2006

This is a follow-up to Incorrect Insider Threat Perceptions. I think security managers are worrying too much about insider threats compared to outsider threats. Let's assume, however, that I wanted to spend some time on the insider threat problem. How would I handle it? First, I would not seek vulnerability-centric solutions. I would not even really seek technological solutions. Instead, I would focus on the threats themselves. Insider threats are humans. They are parties with the capability and intention to exploit a vulnerability in an asset. You absolutely cannot stop all insider threats with technical solutions. You can't even stop most insider threats with technical solutions. You should focus on non-technical solutions. (Ok, step two is technical.)

Personnel screening: Know who you are hiring. The more sensitive the position, the deeper the check. The more sensitive the position, the greater the need for periodic reexamination of a person's threat likelihood. This is common for anyone with a federal security clearance, for example.

Conduct legal monitoring: Make it clear to employees that they are subject to monitoring. The more sensitive the position, the greater the monitoring. Web surfing, email, IM, etc. are subject to inspection and retention within the boundaries of applicable laws.

Develop and publish security policies: Tell employees what is proper and improper. Let them know the penalties for breaching the policy. Make them re-sign them annually.

Discipline, fire, or prosecute offenders: Depending on the scope of an infraction, take the appropriate action. Regulations without enforcement (cough - HIPAA - cough) are worthless.

Deterrence: Tell employees all of the above regularly. It is important for employees who might dance with the dark side to fully understand the consequences of their misdeeds.

At the end of the day, you should wonder "how many spies?" are there in your organization. Consider the hurdles an insider threat must leap in order to carry out an attack and escape justice. He must pass your background check, either by having a clean record or presenting an airtight fake record. He must provide a false name and mailing address to frustrate attempts to catch him. He must evade detection by your internal audit systems. He must have an escape plan to leave the organization and resurface elsewhere. I could continue, but imagine those difficulties compared to a remote cyber intruder in Russia who conducts a successful client-side attack on your company. Now which attack is more likely -- the insider or the outsider?

https://taosecurity.blogspot.com/2006/12/how-many-spies.html

Commentary

Close access operations are high risk for perpetrators. This post explains why it’s likely, simply from a cost and benefit perspective, that outsiders are responsible for more intrusions than insiders. I categorically reject the notion that once an outsider obtains credentials on a target they suddenly become “insiders.” That’s just the defeated insider threat crowd looking for validation of their debunked thesis. I will grant that, because of their access and knowledge of internal systems, it’s possible for true insider threats to inflict far greater damage than the average outsider. One need only look at the breaches suffered by the intelligence community to appreciate that sad fact.

What Do I Want Saturday, January 27, 2007

If you've read this blog for a while, or even if you've just been following it the last few months, you might be saying "Fine Bejtlich, we get it. So what do you want?" The answer is simple: I want NSM-centric techniques and tools to be accepted as best practices for digital security. I don't say this to sell products. I say this because it's the best chance we have of figuring out what's happening in our enterprise. NSM means deploying sensors to collect statistical, session, full content, and alert data. NSM means having high-fidelity, actionable data available for proactive inspection when possible, and reactive investigation every other time. NSM means not having to wait to hire a network forensics consultant who brings his own gear to the enterprise, hoping for the intruder to make a return appearance while the victim is instrumented.

I'd like to see organizations realize they need to keep track of what's happening in their enterprise, in a content-neutral way, similar to the services provided by a cockpit voice recorder and a flight data recorder (CVR, FDR). This is critical: what does content-neutral mean? The CVR doesn't start recording when it detects the pilot saying "help" or "emergency." The FDR doesn't start recording when the plane's altitude drops below 1000 feet. Rather, both devices are always recording, because those who deploy CVRs and FDRs know they don't know what will happen. This is the opposite of soccer-goal security, where you pick a defensive method and possibly miss everything else.

Network-centric access control, implemented by firewalls and the like, is pretty much a given. (I'm not talking about NAC, which is an acronym for Cisco's Network Admission Control and not Network Access Control, for pity's sake.) Ignoring the firewall-dropping folks at the Jericho Forum and Abe Singer, everyone (especially auditors and regulators) recognizes firewalls are necessary and helpful components of network security. This is undeniably true when one abandons the idea of the firewall as a product and embraces the firewall as a system, as rightly evangelized by Marcus Ranum and described in the Firewall FAQ:

2.1 What is a network firewall? A firewall is a system or group of systems that enforces an access control policy between two or more networks. The actual means by which this is accomplished varies widely, but in principle, the firewall can be thought of as a pair of mechanisms: one which exists to block traffic, and the other which exists to permit traffic. Some firewalls place a greater emphasis on blocking traffic, while others emphasize permitting traffic. Probably the most important thing to recognize about a firewall is that it implements an access control policy.

Notice this description doesn't mention Pix, or Checkpoint, or Pf, or any other box. Those words apply equally well to router ACLs, layer 3-4 firewalls, "IPS," "deep packet inspection" devices -- whatever. It's about blocking and permitting traffic. Returning to the main point: we've got to get network visibility and awareness as deeply into the digital security mindset as network-centric access control (via firewalling). How can you possibly consider blocking or permitting traffic if you don't even know what's happening in the enterprise? NSM will answer that question for you. Build yourself a network data recorder today, and learn to interpret what you're seeing. You'll sleep worse in the beginning, but better as you get a grip on your security posture -- managing by fact, not belief.

https://taosecurity.blogspot.com/2007/01/what-do-i-want.html

Commentary

Rather than “I want NSM-centric techniques and tools to be accepted as best practices for digital security,” these days I would like to see anyone responsible for running a network integrate NSM into their operations. It doesn’t matter if it’s a “best practice” or not. It should be a minimum practice.
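The Firewall FAQ definition quoted in this post -- a pair of mechanisms, one blocking and one permitting traffic, that together implement an access control policy -- is independent of any product. A minimal first-match policy evaluator might look like the following sketch; the rule format is an assumption for illustration only.

```python
# First-match access control policy: each rule either permits or blocks.
# A field value of None acts as a wildcard. The format is illustrative.
RULES = [
    {"action": "permit", "dst_port": 443,  "proto": "tcp"},
    {"action": "permit", "dst_port": 53,   "proto": "udp"},
    {"action": "block",  "dst_port": None, "proto": None},  # default deny
]

def evaluate(rules, packet):
    """Return the action of the first rule matching the packet."""
    for rule in rules:
        if all(rule[f] in (None, packet[f]) for f in ("dst_port", "proto")):
            return rule["action"]
    return "block"  # implicit default deny if no rule matches

print(evaluate(RULES, {"dst_port": 443, "proto": "tcp"}))  # HTTPS
print(evaluate(RULES, {"dst_port": 23,  "proto": "tcp"}))  # telnet
```

The same first-match logic underlies router ACLs and layer 3-4 firewalls alike; only the matching fields and the enforcement point change, which is exactly why the FAQ describes a system rather than a box.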

Proactive vs Reactive Security Tuesday, March 20, 2007

Whenever I hear someone talk about the merits of "proactive" security vs "reactive" security I will politely nod, but you may notice a tightening of my jaw. I can't stand these sorts of comparisons. When I hear people praise proactive measures they're usually talking about "stopping attacks" rather than "watching them." Since a good portion of my technical life is spent cleaning up the messes left by people who put faith in preventing intrusions, I am a little jaded. Before I go any further, believe me, I would much rather not have intrusions occur at all. I would much rather prevent than detect and respond to intrusions. The fact of the matter is that intrusions still happen and that proactive measures aren't always that great. In fact, sometimes so-called proactive measures are worse than reactive or passive ones. How can that be? Kelly Jackson Higgins' latest article Grab Fingerprint, Then Attack provides an example. She writes the following:

First you determine if an IDS/IPS is sitting at the perimeter, and then "fingerprint" it to find out the brand of the device, says the hacker also known as Mark Loveless, security architect for Vernier Networks. By probing the devices, "You can extrapolate what brand of IPS is blocking them and use that to plan your attack." Different IDS/IPS products block different threats, so an attacker can use those characteristics to gather enough intelligence to pinpoint the brand name, he says. And it's not hard to distinguish an IDS from an IPS: If you can access XYZ before the attack, but not after, it's an IPS. And if there are delays in blocking your traffic, it could be an admin reading the IDS logs, Loveless says.

This concept is as old as dirt, dating all the way back to fingerprinting firewalls. However, it illustrates my point very well. A "proactive" device like an IPS would block traffic it deems malicious. An intruder smart enough to want to identify and evade said IPS could do so using test traffic, then launch an attack that sails through the IPS -- which at that point is ignorant and ineffective. The only reason the intruder could accomplish this task is that the "proactive" nature of the IPS revealed its operation, thereby providing intelligence to the intruder. In aggregate, security has been degraded by a "proactive" device.

Contrast that scenario with that of the lowly, "reactive," passive network forensics appliance. All it does is record what it sees. It doesn't stop anything. It's so quiet no one knows it is there -- including the intruder. Of course it isn't blocking anything, but it is providing Network Security Monitoring data. Properly configured and used it can act as a sort of intrusion detection system as well. In aggregate, security has been improved by a "reactive" or passive device.

I hope this post has challenged the conventional wisdom in the same way that my diatribes against mandatory anti-virus installation may have done. I think one way to overcome the problems caused by the active device is to complement it with the passive one, but most organizations emphasize "prevention" over all else and discard detection and response.

https://taosecurity.blogspot.com/2007/03/proactive-vs-reactive-security.html

Commentary

I had forgotten that red teams and possibly intruders used to take steps to identify the presence of IPS devices in order to evade them.

Taking the Fight to the Enemy Friday, March 23, 2007 ShmooCon started today. ShmooCon leader Bruce Potter finished his opening remarks by challenging the audience to find anyone outside of the security community who cares about security. I decided to take his idea seriously and I thought about it on the Metro ride home. It occurred to me that the digital security community fixates on vulnerabilities because that is the only aspect of the Risk Equation we can influence. Lines of business control assets, so we can't decrease risk by making assets less valuable. (That doesn't even make sense.) We do not have the power or authority to remove threats, so we can't decrease risk by lowering the attacks against our assets. (Threat mitigation is the domain of law enforcement and the military.) We can only address vulnerabilities, but unless we develop the asset ourselves we're stuck with whatever security the vendor provided. I would like to hear if anyone can imagine another realm of human endeavor where the asset owner or agent is forced to defend his own interests, without help from law enforcement or the military. The example can be historical, fictional, or contemporary. I'm reminded of Wells Fargo stagecoaches being robbed as they crossed the West, forcing WF to hire private guards with guns to defend company assets in transit. As a fictional example, Sherlock Holmes didn't work for Scotland Yard; victims hired the Great Detective to solve crimes that the authorities were too slow or unwilling to handle. As I've said many times before, we are wasting a lot of time and money trying to "secure" systems when we should be removing threats. I thought of this again last night while watching Chris Hansen work with law enforcement to take more child predators off the streets. Imagine if I didn't have law enforcement deterring and jailing criminals like that. 
I'd have to wrap my kids in some sort of personal tank when I send them to school, and they'd still probably end up in harm's way. That's the situation we face on the Internet.

There's no amount of bars over windows, high fences, or other defenses that will stop determined intruders. Removing or deterring the intruders is history's lesson. This FCW article has the right idea:

The best defense against cyberattacks on U.S. military, civil and commercial networks is to go on the offensive, Marine Gen. James Cartwright, commander of the Strategic Command (Stratcom), said March 21 in testimony to the House Armed Services Committee. “History teaches us that a purely defensive posture poses significant risks,” Cartwright told the committee. He added that if “we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests...” The Stratcom commander told the committee that the United States is under widespread, daily attacks in cyberspace. He added that the country lacks dominance in the cyberdomain and that it could become “increasingly vulnerable if we do not fundamentally change how we view this battle space.”

Put me in, coach. I'm ready to play, today.

https://taosecurity.blogspot.com/2007/03/taking-fight-to-enemy.html

Commentary

I noticed early in the post I wrote: “Lines of business control assets, so we can't decrease risk by making assets less valuable. (That doesn't even make sense.)” Later I realized that it is possible to reduce asset value, in a piece I wrote for Brookings in 2015 titled “New cybersecurity mantra: ‘If you can’t protect it, don’t collect it.’”

I also noticed that I quoted one of my favorite songs, Centerfield, by John Fogerty.

Threat Deterrence, Mitigation, and Elimination Friday, March 30, 2007

A comment on my last post prompted me to answer here. My thesis is this: a significant portion, if not the majority, of security in the analog world is based on threat deterrence, mitigation, and elimination. Security in the analog world is not based on eliminating or applying countermeasures for vulnerabilities. A vulnerability-centric approach is too costly, inconvenient, and static to be effective.

Consider the Metro subway in DC. There are absolutely zero physical barriers between the platform and the trains. If evil attacker Evelyn were so inclined, she could easily push a waiting passenger off the platform into the path of an arriving train, maiming or killing the person instantly. Why does this not happen (regularly)? Evelyn is presumably a rational actor, and she is deterred by vigilante justice and the power of the legal system. If she killed a Metro passenger in the state of Virginia she would probably be executed herself, or at the very least spend the rest of her life in prison. Hopefully there are few people like Evelyn in the world, but would more Metro passengers be murdered if there were no attribution or apprehension of the killers? How do you think the Metro board would react to such an incident?

1. Build barriers to limit the potential for passengers to land in front of moving trains
2. Screen passengers as they enter Metro stations
3. Mandate trains to crawl within reach of waiting passengers
4. Add Metro police to watch for suspicious individuals
5. Add cameras to watch all Metro stations
6. Lobby Congress to increase penalties

My ranking is intentional. 1 would never happen; it is simply too costly when weighed against the risks. 2 would be impossible to implement in any meaningful fashion and would provoke a public backlash. 3 might happen for a brief period, but it would be abandoned because it would slow the number of trains carrying passengers. 4 might happen for a brief period as well, but the costs of additional personnel make it an unlikely permanent solution; it's also ineffective unless the police are right next to a likely incident. 5 and 6 could happen, but they are only helpful for deterrence -- which is not prevention. Earlier I said Evelyn is a rational actor, so she could presumably be deterred. She could also be mitigated or eliminated. Imagine if Evelyn's action was a ritual associated with gang membership. Authorities could identify and potentially restrict gang members from entering the Metro. (Difficult? Of course. This is why deterrence is a better option.) Authorities could also infiltrate and/or destroy the gang. Irrational actors cannot be deterred. They may be mitigated and/or eliminated. Forces of nature cannot be deterred either. Depending on their scope they may be mitigated, but they probably cannot be eliminated. Evelyn's house cannot be built for a reasonable amount of money to withstand a Category V hurricane. Such a force of nature cannot be deterred or eliminated. Given a large enough budget Evelyn's house could be built to survive such a force, so mitigation is an option. Insurance is usually how threats like hurricanes are mitigated, however. Everyone approaches this problem via the lens of their experience and capabilities. Coders think they can code their way out of this problem. Architects think they can design their way out. I am mainly an operator and, in some ways, an historian. I have seen in my own work that prevention eventually fails, and by learning about the past I have seen the same. 
In December 2005 I wrote an article called Engineering Disasters for a magazine, and in the coming weeks a second article with more lessons for digital security engineers will be published in a different venue. I obviously favor whatever cost-effective, practical trade-offs (not solutions) we can implement to limit the risks facing digital assets. I am not saying we should roll over and die, hoping the authorities will catch the bad guys and prevent future crimes. Nevertheless, the most pressing problem in digital security is attribution and apprehension of those perpetrating crimes involving information resources. Until we take the steps necessary to address that problem, no amount of technical vulnerability remediation is going to matter.

https://taosecurity.blogspot.com/2007/03/threat-deterrence-mitigation-and.html

Commentary

We live in a world primarily protected via threat deterrence by virtue of the sovereign’s monopoly on the use of force, not vulnerability mitigation or asset value reduction. Because owners of digital assets are generally unable to perform threat deterrence, and the sovereign’s attempts to deter threats are largely ineffective, asset owners turn to vulnerability mitigation while largely fantasizing that they are deterring threats.

FISMA Dogfights Friday, April 13, 2007

My favorite show on The History Channel is Dogfights. Although I wore the US Air Force uniform for 11 years I was not a pilot. I did get "incentive" rides in T-37, F-16D, and F-15E jets as a USAFA cadet. Those experiences made me appreciate the rigor of being a fighter pilot. After watching Dogfights and learning from pilots who fought MiGs over North Vietnam, one on six, I have a new appreciation for their line of work.

All that matters in a dogfight is winning, which means shooting down your opponent or making him exit the fight. A draw happens when both adversaries decide to fight another day. If you lose a dogfight you die or end up as a prisoner of war. If you're lucky you survive ejection and somehow escape capture. Winning a dogfight is not all about pilot skill vs pilot skill. Many of the dogfights I watched involved American pilots who learned enemy tactics and intentions from earlier combat. Some of the pilots also knew the capabilities of enemy aircraft, like the fact that the MiG 17 was inferior to the F-8 in turns below 450 MPH. Intelligence on enemy aircraft was derived by acquiring planes and flying them. In some cases the enemy reverse engineered American weapons, as happened with the K-13/AA-2 Atoll -- a copy of the Sidewinder.

All of this relates to FISMA. Imagine if FISMA was the operational theme guiding air combat. Consultants would spend a lot of time and money documenting American aircraft capabilities and equipment. We'd have a count of every rivet on every plane, annotated with someone's idea that fifty rivets per leading edge is better than forty rivets per leading edge. Every plane, every spare part, and every pilot would be nicely documented after a four to six month effort costing millions of dollars. Every year a report card would provide grades on fighter squadrons' FISMA reports.

What would happen to these planes when they entered combat? The FISMA crowd would not care. American aircraft could be dropping from the sky and it would not matter to FISMA. All of the FISMA effort creates a theoretical, paper-based dream of how a "system" should perform in an environment. When that system -- say, a jet fighter -- operates under real-life combat conditions, it may perform nothing like what the planners envisioned. Planners range from generals setting requirements for a new plane, to engineers designing the plane, to tacticians imagining how to use the plane in combat. Perhaps the guns jam in high-G turns. Perhaps the missiles never acquire lock and always miss their targets. Maybe the enemy has stolen plans for the aircraft (or an actual aircraft!) and knows that the jet cannot perform as well as the enemy plane doing vertical rolling scissors. Furthermore, the enemy may not act like the planners imagined. This is absolutely crucial. The enemy may have different equipment or tactics, completely overpowering friendly capabilities. Maybe FISMA would address these issues in three years, the next time a FISMA report is due. Meanwhile, the US has lost all its pilots and aircraft, along with control of its airspace.

Maybe this analogy will help explain the problems I have with FISMA. I already tried an American football analogy in my post Control-Compliant vs Field-Assessed Security. My bottom line is that FISMA involves control compliance. That is a prerequisite for security, since no one should field a system known to be full of holes. However, effective, operational security involves field assessment. That means evaluating how a system performs in the real world, not in the mind of a consultant. Field-assessed security is absolutely missing in FISMA. Don't tell me the tests done prior to C&A count. They're static, controlled, and do not reflect the changing environment found on real networks attacked by real intruders.

Incidentally, I also really liked the BBC series Battlefield Britain and I may check out the other History Channel series Shootout!

https://taosecurity.blogspot.com/2007/04/fisma-dogfights.html

Commentary

This is another post from the era where government agencies measured security via checklists and paperwork, all the while making no real change in the security of their environments. They were thoroughly infiltrated by opportunistic and targeted threat actors, regardless of whether they scored an A or an F.

Fight to Your Strengths Wednesday, April 18, 2007 Recently I mentioned the History Channel show Dogfights. One episode described air combat between fast, well-turning, lightly-armored-and-gunned Japanese Zeroes and slower, poor-turning, heavily-armored-and-gunned American F6F Hellcats. The Marine Top Gun instructor/commentator noted the only way the Hellcat could beat the Zero was to fight to its strengths and not fight the sort of battle the Zero would prefer. Often this meant head-to-head confrontations where the Hellcat's superior armor and guns would outlast and pummel the Zero. When I studied American Kenpo in San Antonio, TX, my instructor Curtis Abernathy expressed similar sentiments. He said "Make the opponent fight your fight. Don't try to out-punch a boxer. Don't try to out-kick a kicker. Don't try to wrestle a grappler." And so on. I thought about these concepts today waiting in another airport. I wondered what sorts of strengths network defenders might have, and if we could try forcing the adversary into fighting our fight and not theirs. Here are some preliminary thoughts on strengths network defenders might have, and how they can work against intruders. Knowledge of assets: An intruder pursuing a targeted, server-side attack will often try to locate a poorly-configured asset. The act of conducting reconnaissance to locate these assets results in the opponent fighting your fight -- if you and/or your defensive systems possess situational awareness. It is not normal for remote hosts to sweep address space for active hosts or individual hosts for listening services. Defenders who manually or automatically take defensive actions when observing such actions can implement blocks that will at least frustrate the observed source IP. Knowledge of normal behavior: An intruder who compromises an asset will try to maintain control of that asset. This may take the form of an
outbound IRC-based command-and-control channel, an inbound or outbound encrypted channel, or many other variations. To the extent that the intruder does not use a C&C channel that looks like normal behavior for the victim, the intruder is fighting your fight. Whenever you constrain network traffic by blocking, application-aware proxying, and throttling, you force the intruder into using lanes of control that you should architect for maximum policy enforcement and visibility. Diversity: Targets running Windows systems or PHP-enabled Web applications are much more likely to be compromised and manipulated by intruders. Attack tools and exploits for these platforms are plentiful and well-understood by the enemy. If you present a different look to the intruder, you are making him fight your fight. An intruder who discovers a target running an unknown application on an unfamiliar OS is, at the very least, going to spend some time researching and probing that target for vulnerabilities. If you possess situational awareness, diversity buys time for defensive actions. Situational awareness: A well-instrumented network will possess greater knowledge of the battlespace than an intruder. A network architected and operated with visibility in mind provides greater information on activity than one without access to network traffic. Unless the intruder implements his own measures to expand his visibility (compromising a switch to enable a SPAN port, controlling a router, etc.), the defender will know more about the scope of an attack than the intruder. Of course, the intruder will have absolute knowledge of his activities because he is executing them, possibly via an encrypted channel. These are some initial ideas recorded in an airport. I may augment them as time permits.
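The "knowledge of assets" strength above, spotting sweeps of address space or probes of listening services, can be sketched in code. This is a minimal illustration, not from the post; the flow-record format and the threshold of 20 distinct ports are invented assumptions.

```python
# Minimal sweep detector: flag any source that probes many distinct ports.
# The threshold and the (src, dst, dst_port) record format are assumptions.
from collections import defaultdict

PORT_SWEEP_THRESHOLD = 20  # distinct destination ports that suggest a scan

def find_scanners(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples."""
    targets = defaultdict(set)
    for src, dst, dport in flows:
        targets[src].add((dst, dport))
    # A source touching many distinct ports is far outside a normal profile.
    return {src for src, seen in targets.items()
            if len({port for _, port in seen}) >= PORT_SWEEP_THRESHOLD}

# A scanning source sweeping ports 1..25 stands out from a normal client
# that repeatedly visits a single service.
flows = [("10.0.0.9", "192.0.2.1", p) for p in range(1, 26)]
flows += [("10.0.0.5", "192.0.2.1", 443)] * 50
print(find_scanners(flows))  # {'10.0.0.9'}
```

In practice this role belongs to NSM sensors and IDS rules rather than a script, but the logic is the same: scanning is far outside a normal client's profile, so it stands out.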
Notice that if you don't know your assets or normal behavior, if you run the same vanilla systems as the rest of the world, and you don't pay attention to network activity, you have zero strengths in the fight beyond (hopefully) properly configured assets. We all have those, right? At the risk of involving myself in a silly debate, I'd like to briefly mention how these factors affect the decision to run OpenSSH on a nonstandard port. Apparently several people with a lot of free time have been vigorously
arguing that "security through obscurity" is bad in all its forms, period. I don't think any rational security professional would argue that relying only upon security through obscurity is a sound security policy. However, integrating security through obscurity with other measures can help force an intruder to fight your fight. Here's an example. I'm sure you've seen many brute force login attacks against OpenSSH services over the past year or two years. I finally decided I'd seen enough of these on my systems, so I moved sshd to a nonstandard port. Is that security through obscurity? Probably. Have I seen any more brute force attacks against sshd since changing the port? Nope. As far as I'm concerned, a defensive maneuver that took literally 5 seconds per server has been well worth it. My logs are not filling with records of these attacks. I can concentrate on other issues. Now, what happens if someone really takes an interest in one or more of my servers? In order to find sshd, he needs to port scan all 65535 TCP ports. That activity is going to make him fight my fight, because scanning is way outside the normal profile for activity involving my servers. Will he eventually find sshd? Yes, unless my systems automatically detect the scan and block it. Are there ways to make the intruder's ability to connect to sshd even more difficult? Sure -- take a look at Mike Rash's Single Packet Authorization implementations. The bottom line is that a defensive action which cost me virtually nothing has increased the amount of work the intruder must perform to attack sshd. If I knew my action to change sshd's port could be discovered by the intruder with minimal effort (perhaps they have visibility of the change via illicit monitoring) then obscurity has been lost and the change is not worthwhile. As a final thought, it's paramount to consider cost when making security decisions. 
If altering the sshd port had required buying new software licenses, hardware, personnel training, etc., it would not have been worth the effort. I would be interested in hearing your thoughts on ways to get the intruder
to fight your fight. These are all strictly defensive measures, since offense is usually beyond the rules for most of us.

https://taosecurity.blogspot.com/2007/04/fight-to-your-strengths.html

Commentary

In this post I showed how the reflexive “security through obscurity is bad!” response from “security professionals” doesn’t make a lot of sense. I won’t rehash the argument in this comment.
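The post's claim that an attacker must scan all 65535 TCP ports to find a relocated sshd can be put in rough numbers. This back-of-the-envelope sketch is illustrative only, and assumes the attacker probes ports in random order:

```python
# Cost a scanner pays to find sshd once it leaves port 22, assuming the
# attacker probes the 65535 TCP ports in a uniformly random order.
PORTS = 65535
probes_default = 1                   # sshd on 22: one targeted probe
probes_moved = (PORTS + 1) / 2       # expected probes to hit a random port
print(probes_default, probes_moved)  # 1 32768.0
```

On average the scanner sends tens of thousands of probes instead of one, which is exactly the loud, abnormal behavior the defender is positioned to detect and block.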

Vulnerability-Centric Security Thursday, May 10, 2007 The vehicle [originally pictured in the post] is a Mine Resistant Ambush Protected vehicle, the US Army's replacement for the Humvee. I read about this vehicle in Army Times. That article said: At a meeting to be held this week, according to a Pentagon source who spoke on condition of anonymity, the Army’s leadership is expected to request $9 billion for 9,000 MRAPs to be fielded through fiscal year 2008, with another 8,700 for fiscal 2009. That's $1 million per vehicle. I have a sinking feeling that although the new vehicle is "Mine Resistant," the "Ambush Protected" part will be tested by unpredictable, creative adversaries. What does this teach us about digital security? Frequently I hear people refer to the "if cars were like Windows" analogy. Let's take a look at cars and PCs, given the MRAP is really just a fancy car. 1. A car that doesn't start may be like a PC that doesn't boot. It could be the fault of the manufacturer or the owner, depending on maintenance, etc. If it's the manufacturer's fault, they could be held responsible for the problem. 2. A car that behaves erratically or in an unsafe manner while being driven may be like a PC that behaves erratically or crashes. It could be the fault of the manufacturer or the owner, depending on maintenance, etc. If it's the manufacturer's fault, they could be held responsible for the problem. 3. A car that gets hit by a boulder dropped from a bridge may be like a PC that is attacked by an exploit. This is not the fault of the driver or PC operator -- it's the fault of the threat dropping the boulder and the intruder launching the exploit. (Even if the PC is not patched, it's not the victim's "fault." If you
can't accept that, consider the PC fully patched and the vulnerability a zero-day.) In cases 1 and 2, we could hold either the owner or the manufacturer responsible for the problem, depending on the circumstances. In case 3, the threat is responsible. Unfortunately, few owners are in a position to do anything about threats. If we take a vulnerability-centric approach, we end up driving vehicles like the MRAP and building layers of security around PCs (anti-virus, network firewalls, etc.). In both cases the mitigation is costly and ultimately ineffective, because the threat remains free to devise new and ingenious ways to inflict his will against the target. Thinking we can build "invulnerable" vehicles like the MRAP is like Bruce Schneier thinking we can build invulnerable software. Sure, you can make more attack-resistant vehicles and software, but for what cost? Ultimately the threat must be directly addressed. No one thinks the way to peace in Iraq is by giving every Iraqi a bunker in which to live and an MRAP to drive. Why do people think we can do that with software?

https://taosecurity.blogspot.com/2007/05/vulnerability-centric-security.html

Commentary

I remember this post attracted a few keyboard warriors who wanted to claim I was anti-soldier because I didn’t praise the MRAP. They completely missed the point of this post, which was about software. It wasn’t the first time, and it wouldn’t be the last.

Threat Model vs Attack Model Tuesday, June 12, 2007 This is just a brief post on terminology. Recently I've heard people discussing "threat models" and "attack models." When I reviewed Gary McGraw's excellent Software Security I said the following: Gary is not afraid to point out the problems with other interpretations of the software security problem. I almost fell out of my chair when I read his critique on pp 140-7 and p 213 of Microsoft's improper use of terms like "threat" in their so-called "threat model." Gary is absolutely right to say Microsoft is performing "risk analysis," not "threat analysis." (I laughed when I read him describe Microsoft's "Threat Modeling" as "[t]he unfortunately titled book" on p 310.) I examine this issue deeper in my reviews of Microsoft's books. In other words, what Microsoft calls "threat modeling" is actually a form of risk analysis. So what is a threat model? Four years ago I wrote Threat Matrix Chart Clarifies Definition of "Threat", which showed the sorts of components one should analyze when doing threat modeling. I wrote: It shows the five components used to judge a threat: existence, capability, history, intentions, and targeting. That is how one models threats. It has nothing to do with the specifics of the attack. That is attack modeling. Attack modeling concentrates on the nature of an attack, not the threats conducting them. I mentioned this in my review of Microsoft's Writing Secure Code, 2nd Ed: [W]henever you read "threat trees," [in this misguided Microsoft book] think "attack trees" -- and remember Bruce Schneier worked hard on these but is apparently ignored by Microsoft.

That is still true -- Bruce Schneier's work on attack trees and attack modeling is correct in its terminology and its applications. Attack trees are a way to perform attack modeling. Attack modeling can be done separately from threat modeling, meaning one can develop an attack tree that any sufficient threat could execute. This understanding also means most organizations will have more useful results performing attack modeling and not threat modeling, because most organizations (outside law enforcement and the intel community) lack any real threat knowledge. With the help of a pen testing team an organization can develop realistic attack models and therefore effective countermeasures. This is Ira Winkler's point when he says most organizations aren't equipped to deal with threats and instead they should mitigate vulnerabilities that any threat might attack. This does not mean I am embracing vulnerability-centric security. I still believe threats are the primary security problem, but only those chartered and equipped to deter, apprehend, prosecute, and incarcerate threats should do so. The rest of us should focus our resources on what we can, but take every step to get law enforcement and the military to do the real work of threat removal.

https://taosecurity.blogspot.com/2007/06/threat-model-vs-attack-model.html

Commentary

Nothing has changed since this post. Threat models and attack models are still confused. Language can be depressing. The only time we seem to make progress is when a job becomes a profession, and the profession requires testing to a standard and granting a license, as happens in law, medicine, and professional engineering. Until we reach that point, security is just another job open to interpretation.
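Schneier-style attack trees, mentioned above, lend themselves to a simple sketch: an OR node takes its cheapest child (the attacker picks one path), while an AND node sums its children (the attacker must complete every step). The tree shape and costs below are invented purely for illustration.

```python
# Hedged sketch of a Schneier-style attack tree. Nodes are plain dicts;
# the goal, structure, and dollar costs are invented for illustration.
def cost(node):
    kind = node["kind"]
    if kind == "leaf":
        return node["cost"]
    children = [cost(c) for c in node["children"]]
    # OR: attacker picks the cheapest path; AND: attacker must do all steps.
    return min(children) if kind == "or" else sum(children)

read_mail = {"kind": "or", "children": [
    {"kind": "leaf", "cost": 100_000},   # break the cryptography
    {"kind": "and", "children": [        # or compromise the endpoint:
        {"kind": "leaf", "cost": 500},   #   phish the user
        {"kind": "leaf", "cost": 200},   #   run a keylogger
    ]},
]}
print(cost(read_mail))  # 700 -- the cheapest attack path
```

Walking the tree this way surfaces the cheapest attack path, which is where countermeasures buy the most, and it requires no knowledge of any particular threat actor.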

Kung Fu Wisdom on Threats Tuesday, August 07, 2007 Given the seriousness of my last post, I thought some words of wisdom from the great Kwai Chang Caine would improve everyone's mood. Consider a scene from Kung Fu. Caine is talking to an Amish man who says "When someone hits me with a stick, I have three choices: I can hit him back, I can let him hit me again, or I can run away." Caine replies with a fourth option: "You can take the stick away from him." The unspoken element of Caine's reply is that you can peacefully disarm an opponent, which may require Shaolin-like skill. Most people do not have such skills and are stuck with one of the three previous options. None of these approaches works for digital security. If you hit the intruder back, unless he's incapacitated he remains ready for another attack. If you do knock out one of his drones, he activates number two of ten thousand. If you let him attack again, you lose a second time. The threat is also free to hit again. If you run away by disconnecting from the network, you lose all the network's benefits. Taking away the stick (perhaps by criminalizing "hacker tools") only punishes law-abiding citizens. If you do peacefully shut down a drone, again he activates number two of ten thousand. The answer to this problem is you apprehend the criminal for assault, prosecute, and incarcerate. "Rehabilitation" is nice, but at least for the duration of his prison time he can't hurt those outside prison. You may enjoy a deterrence effect, although this is debatable. Regardless, this is the only
way to deal with a threat once it has obtained evil capabilities and intentions. (You can argue for shaping the threat's life such that those evil capabilities and intentions are not reached, but that's an issue for social scientists.) It's all about the risk equation:

Risk = Asset value * Vulnerability * Threat

No one is deploying worthless assets. 30+ years of trying to develop resources that are vulnerability-free has failed. Only the threat component has a chance to be reduced, thereby reducing overall risk (assuming it outpaces the asset and vulnerability categories, which is still problematic).

https://taosecurity.blogspot.com/2007/08/kung-fu-wisdom-on-threats.html

Commentary

Anytime I can quote one of my favorite TV shows of all time, Kung Fu, it’s a good day. Some people say that to be a good security person you need to “think like a hacker.” I disagree and I believe the so-called “hacker mindset” has a place, but not the only place. I think it’s also important to think like a defender. That is what Caine did in this story.
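The risk discussion above, with asset value, vulnerability, and the threat component as the factors, can be read numerically. The 0-10 scores below are invented purely to illustrate why reducing the threat component reduces overall risk while the other factors stay fixed.

```python
# Toy reading of the post's risk discussion: risk as the product of its
# three components. All scores are invented 0-10 ratings.
def risk(asset_value, vulnerability, threat):
    return asset_value * vulnerability * threat

before = risk(asset_value=9, vulnerability=6, threat=8)  # 432
after = risk(asset_value=9, vulnerability=6, threat=2)   # threat reduced: 108
print(before, after)
```

Assets stay valuable and vulnerabilities persist, so only a drop in the threat factor, here from 8 to 2 via apprehension and prosecution, moves the product.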

Change the Plane Thursday, August 16, 2007 Call me militaristic, but I love the History Channel series Dogfights. I hope the Air Force Academy builds an entire class around the series. I just finished watching an episode titled "Gun Kills of Vietnam." The show featured two main engagements. Both demonstrated a concept I described in Fight to Your Strengths. In the first battle two A-1H Skyraiders (prop planes) shot down a MiG-17 (a jet) using their cannons. The Skyraiders survived their initial encounter with the MiG by out-turning it at low speeds. They made the MiG fight their fight, and the MiG lost. In the second battle, an F-4 flown by pilot Darrell "Dee" Simmonds and backseater George McKinney Jr. downed another MiG-17 using their gun. In that fight, the slower but more maneuverable MiG-17 was out-turning the F-4. In the show McKinney said a less experienced pilot would have fought the MiG's fight by trying to turn with the MiG, probably giving the MiG an opportunity to down the F-4 when the F-4 overshot the MiG. Instead, a highly skilled pilot would act differently. In Simmonds' words: You can not turn with him... you have to get into another plane. The "plane" in this case is geographic, not the actual fighter plane. The F-4 leaves the X-Y plane and enters Z, the vertical plane. Simmonds put the F-4 into a "high yo-yo." The image [originally pictured in the post] shows the technique, which can also be seen at the Dogfights clips page. Coming out of the yo-yo put the F-4 right behind the MiG, allowing Simmonds to shoot it down. Of course this made me think about digital security. We are constantly trying to fight the black hat's fight. We should instead "change the plane." What does this mean in actionable terms? I'm not sure yet. Obviously in air combat it's not about surviving the enemy onslaught and never shooting back. Maybe it's time security researchers concentrate on vulnerabilities in the tools used by intruders, like what the Shmoo Group
presented at Def Con 13, e.g., multihtml exploit vulnerability advisory? Ideally law enforcement would be striking back for us, but we're still in Wild West mode until LEAs catch up. What do you think -- how could you change the plane?

https://taosecurity.blogspot.com/2007/08/change-plane.html

Commentary

Wow, I loved that show Dogfights. I have the DVDs somewhere and should watch them. I liked this comment: “Maybe it's time security researchers concentrate on vulnerabilities in the tools used by intruders, like what the Shmoo Group presented at Def Con 13, e.g., multihtml exploit vulnerability advisory?” How about it, “security researchers?” Instead of spending so much time finding vulnerabilities in software that people use to accomplish legitimate work, why don’t you turn your attention to the software criminals use to take advantage of everyone?

Does Failure Sell? Tuesday, December 18, 2007 I often find myself in situations trying to explain the value of Network Security Monitoring (NSM). This very short fictional conversation explains what I mean. This exchange did not happen but I like to contemplate these sorts of dialogues. NSM Advocate: I recommend deploying network-based sensors to collect data using NSM principles. I will work with our internal business units to select network gateways most likely to yield significant traffic. I will build the sensors using open source software on commodity hardware, recycled from other projects if need be. Manager: Why do we need this? NSM Advocate: Do you believe all of your defensive measures are 100% effective? Manager: No. (This indicates a smart manager. Answering Yes would result in a line of reasoning on why Prevention Eventually Fails.) NSM Advocate: Do you want to know when your defensive measures fail? Manager: Yes. (This also indicates a smart manager. Answering No would result in a line of reasoning on why ignorance is not bliss.) NSM Advocate: NSM will tell us when we fail. NSM sensors are the highest-impact, lowest-cost way to obtain network situational awareness. NSM methodologies can guide and validate preventative measures, transform detection into an actionable process, and enable rapid, low-cost response. Manager: Why can't I buy this?
NSM Advocate: Some mainstream vendors are realizing a market exists for this sort of data, and they are making some impact with new products. If we had the budget I might propose acquiring a commercial solution. For the moment I recommend pursuing the do-it-yourself approach, with transition to a commercial solution if funding and product capabilities materialize. Manager: Go forth and let your sensors multiply. Now you know that it's fiction. Notice the crux of the argument is here: Do you believe all of your defensive measures are 100% effective? As a statement, one would say Because prevention eventually fails, you should have a means to identify intrusions and expedite remediation. A manager hearing that statement is likely to respond like this. Manager: Do you mean to tell me that all of the money I've spent on firewalls, intrusion prevention systems, anti-virus, network access control, etc., is wasted? NSM Advocate: That money is not wasted. It's narrowed the problem space, but it hasn't eliminated the problem. This is a tough argument to accept. When I worked at Foundstone the company sold a vulnerability management product. Foundstone would say "buy our product and you will be secure!" I worked for the incident response team. We would say "...and when you still get owned, call us." Which aspect of the business do you think made more money, got more attention, and received more company support? That's an easy question. How is a salesperson supposed to look a prospect in the eye and say "You're going to lose. What are you going to do about it?" Many businesses are waking up to the fact that they've spent millions of dollars on preventative measures and they still lose. No one likes to be a loser. The fact of the matter is that winning cannot be defined as zero intrusions. Risk mitigation does not mean risk elimination. Winning has to be defined using the words I used to explain risk in my first book:
Security is the process of maintaining an acceptable level of perceived risk. This definition does not eliminate intrusions from the enterprise. It does leave an uncomfortable amount of interpretation for the "acceptable level" aspect. You may have noticed that most of the managers one might consider successful are usually self-described or outwardly praised as being risk-takers. On the other side of the equation we have security professionals, most of whom I would label as risk-avoiders. The source escapes me now, but a recent security magazine article observed that those closest to the hands-on aspects of security rated their companies as being the least secure. Assessments of company security improved the farther one was removed from day-to-day operations, such that the CIO and above was much more positive about the company's security outlook. The major factor in this equation is probably the separation between the corner office and the cubicle, but another could be the acceptable level of risk for the parties involved. When a CIO or CEO is juggling market risk, credit risk, geo-political risk, legal risk, and other worries, digital risk is just another item in the portfolio. The difference between digital risk and many of the other risk types is the consequences can be tough to identify. In fact, the more serious the impact, the less likely you are to discover the intrusion. How is that possible? What causes more damage: a DDoS attack that everyone notices because "the network is slow," or a stealthy economic competitor whose entire reason in life is to avoid detection while stealing data? Without evidence to answer the question “are you secure?”, managers practice management and defense by belief instead of management and defense by fact.

https://taosecurity.blogspot.com/2007/12/does-failure-sell.html

Commentary

We could have the same exchanges today. The main difference is that the NSM toolsets have matured to the point where it is less efficient for many shops to design, build, maintain, and operate their own tools.

Security: Whose Responsibility? Wednesday, May 21, 2008 I assume readers of this blog are familiar with the "CIA" triad of information security: confidentiality, integrity, and availability. Having spent time with many companies in consulting and corporate roles, it occurred to me recently that two or even all three of these functions are no longer, or may never have been, the responsibility of the "security" team. Let's examine each item in turn. Availability is probably the defining aspect of IT. If the resource isn't available, no one cares about much else. Availability problems are almost exclusively the responsibility of IT, with "uptime" being their primary metric. One would expect confidentiality to be fairly central to any "security" team's role. Exfiltration of data is partly a confidentiality problem. However, the biggest headache in the confidentiality world has been disclosure of customer personally identifiable information (PII) via loss or theft of physical assets (laptops, backup tapes) or electronic exposure. Companies now employ dedicated Privacy teams, usually staffed predominantly by lawyers, to specifically address the handling of customer PII. One might have thought the "Security" team should have had responsibility for this subject. Instead, a legal problem ("Do we have to disclose the breach to customers and/or the public?") is being addressed by lawyers. Integrity is the last of the three, and originally I thought it would be the core "security" task for the "Security" team. Then I remembered Sarbanes-Oxley and Section 404. I wondered if the Audit Staff's requirement to assess the "integrity" of records meant they had a more institutionalized role in this area than the "Security" team. So what does this mean for "Security" teams? Looking at the problem in one way, you might think there is no need for a Security team. CIA is
covered by three groups, so Security is redundant. This is a mistake for the following reasons. The IT staff is not equipped to resist attacks, especially advanced ones. IT usually does a good job keeping resources functioning when equipment failure, provisioning woes, or misconfiguration causes downtime. IT is usually ill-equipped to stop intelligent adversaries who are three steps ahead of overworked administrators. If an IT staff can't handle attackers, there's no way lawyers can. Lawyers tasked with security responsibilities usually outsource everything to high-priced consultants. Legal teams have the budgets for this, but it's not a sustainable situation. Privacy teams focus on salvaging the company brand and market value after a breach; they are not positioned to resist or detect incidents. Auditors look for problems and effect change, but they do not implement change. They look for weaknesses in processes and configurations, not intruders who have exploited those vulnerabilities. I believe this state of affairs leaves the Security team as the one group that has the proper mindset, subject matter expertise, and ability to implement defensive operations to preserve CIA. This mission is not one the Security team accomplishes by itself, if that ever were possible. Rather, Security will (if not already) need to pair itself with IT, Audit, and Privacy in order to be effective. One could say the same for Compliance groups, Governance officers, and/or Physical Security teams, although I'm less worried about those ties right now. It should be clear at this point that it doesn't make sense for the Security team to work for IT, given the role it must play. A Security team working for IT is likely to be stuck supporting the Availability aspect of "security" at the expense of the other CIA elements.
Furthermore, it could be difficult for Security to build the necessary bonds with Audit and Privacy if those groups see the Security team as "just part of IT," or "technologists." In this light, it makes sense for Security (CISO) to be next to IT (CTO) in the corporate hierarchy, both working for the CIO. Ultimately the CIO is
responsible for the company's information, so I don't see a way for [information] Security to be beyond the CIO's reach. How does this review compare to your own experience?

https://taosecurity.blogspot.com/2008/05/security-whose-responsibility.html

Commentary

Here’s the TL;DR: “[T]his state of affairs leaves the Security team as the one group that has the proper mindset, subject matter expertise, and ability to implement defensive operations to preserve CIA. This mission is not one the Security team accomplishes by itself, if that ever were possible. Rather, Security will (if not already) need to pair itself with IT, Audit, and Privacy in order to be effective.”

Response: Is Vulnerability Research Ethical? Friday, May 23, 2008 One of my favorite sections in Information Security Magazine is the "face-off" between Bruce Schneier and Marcus Ranum. Often they agree, but offer different looks at the same issue. In the latest story, Face-Off: Is vulnerability research ethical?, they are clearly on different sides of the equation. Bruce sees value in vulnerability research, because he believes that the ability to break a system is a precondition for designing a more secure system: [W]hen someone shows me a security design by someone I don't know, my first question is, "What has the designer broken?" Anyone can design a security system that he cannot break. So when someone announces, "Here's my security system, and I can't break it," your first reaction should be, "Who are you?" If he's someone who has broken dozens of similar systems, his system is worth looking at. If he's never broken anything, the chance is zero that it will be any good. This is a classic cryptographic mindset. To a certain degree I could agree with it. From my own NSM perspective, a problem I might encounter is the discovery of covert channels. If I don't understand how to evade my own monitoring mechanisms, how am I going to discover when an intruder is taking that action? However, I don't think being a ninja "breaker" makes one a ninja "builder." My "fourth Wise Man," Dr Gene Spafford, agrees in his post What Did You Really Expect?: [S]omeone with a history of breaking into systems, who had “reformed” and acted as a security consultant, was arrested for new criminal behavior... Firms that hire “reformed” hackers to audit or guard their systems
are not acting prudently any more than if they hired a “reformed” pedophile to babysit their kids. First of all, the ability to hack into a system involves a skill set that is not identical to that required to design a secure system or to perform an audit. Considering how weak many systems are, and how many attack tools are available, “hackers” have not necessarily been particularly skilled. (The same is true of “experts” who discover attacks and weaknesses in existing systems and then publish exploits, by the way — that behavior does not establish the bona fides for real expertise. If anything, it establishes a disregard for the community it endangers.) More importantly, people who demonstrate a questionable level of trustworthiness and judgement at any point by committing criminal acts present a risk later on. So, in some ways I agree with Bruce, but I think Gene's argument carries more weight. Read his whole post for more. Marcus' take is different, and I find one of his arguments particularly compelling: Bruce argues that searching out vulnerabilities and exposing them is going to help improve the quality of software, but it obviously has not -- the last 20 years of software development (don't call it "engineering," please!) absolutely refutes this position... The biggest mistake people make about the vulnerability game is falling for the ideology that "exposing the problem will help." I can prove to you how wrong that is, simply by pointing to Web 2.0 as an example. Has what we've learned about writing software the last 20 years been expressed in the design of Web 2.0? Of course not! It can't even be said to have a "design." If showing people what vulnerabilities can do were going to somehow encourage software developers to be more careful about programming, Web 2.0 would not be happening. If Bruce's argument is that vulnerability "research" helps teach

us how to make better software, it would carry some weight if software were getting better rather than more expensive and complex. In fact, the latter is happening--and it scares me. (emphasis added) I agree with 95% of this argument. The 5% I would change is that identifying vulnerabilities addresses problems in already shipped code. I think history has demonstrated that products ship with vulnerabilities and always will, and that the vast majority of developers lack the will, skill, resources, business environment, and/or incentives to learn from the past. Marcus unintentionally demonstrates that analog security is threat-centric (i.e., the real world focuses on threats), not vulnerability-centric, because vulnerability-centric security perpetually fails. https://taosecurity.blogspot.com/2008/05/response-to-is-vulnerabilityresearch.html Commentary This is so refreshing. These days we seldom have these discussions. Instead, “security researchers” drop offensive code of all types on Github and elsewhere, and joke that they are "Just making sure to keep up job security for everyone haha". Thanks a lot.

On Breakership
Tuesday, September 16, 2008

Last week Mark Curphey asked Are You a Builder or a Breaker. Even today at RAID 2008, the issue of learning or teaching offensive techniques ("breakership") was mentioned. I addressed the same issue a few months ago in Response to Is Vulnerability Research Ethical. Mark channels the building architecture theme by mentioning Frank Lloyd Wright. I recommend reading my previous post for comprehensive thoughts, but I'd like to add one other component.

Two years ago I wrote Digital Security Lessons from Ice Hockey, where I made a case for defenders to develop offensive skills in order to be "well-rounded." Why is that? Turning to the building architecture idea Mark mentioned, why don't classical architects learn "offense," i.e., why aren't they "well-rounded"? It turns out that classical architects do learn some "offense," except they limit themselves to the natural physics of their space and less on what an intelligent adversary might do. In other words, architects learn about various forces and the limits of their building materials, but usually not how to design a building that could withstand a Tomahawk Land Attack Missile (TLAM). Of course there are a very small number of people who do learn how to design structures that can withstand TLAMs, but most architects do not.

Digital architects are waking up to the fact that they face the equivalent of digital TLAMs constantly. Any system that is connected to the Internet, or could be connected to the Internet one day, is vulnerable to digital TLAMs. Therefore, digital architects need to know how these weapons work so they can better build their systems.

It turns out that classical architects must also learn something about intelligent adversaries, especially as the terrorism threat occupies greater mindshare and drives building codes. Mindshare can be transitory but building codes are persistent. Even if we build mindshare or attention to security issues in the digital space, we still lack a "building code." That means we will probably remain vulnerable.

https://taosecurity.blogspot.com/2008/09/on-breakership.html

Commentary

This must have been a high point in ethical debates of security research -- two posts in one year!

Humans, Not Computers, Are Intrusion Tolerant
Sunday, February 01, 2009

Several years ago I mentioned the human firewall project as an example of a security awareness-centric defensive measure. I thought it ironic that the project was dead by the time I looked into it. On a similar note, I was considering the idea of intrusion tolerance recently, loosely defined as having a system continue to function properly despite being compromised. A pioneer in the field describes the concept thus:

Classical security-related work has on the other hand privileged, with few exceptions, intrusion prevention... [With intrusion tolerance, i]nstead of trying to prevent every single intrusion, these are allowed, but tolerated: the system triggers mechanisms that prevent the intrusion from generating a system security failure.

It occurred to me recently that, in one sense, we have already fielded intrusion tolerant systems. Any computer operated, owned, or managed by a person who doesn't care about its integrity is an intrusion tolerant system. People tolerate the intrusion for various reasons, such as:

"I don't think any threats are attacking me."
"I don't see my system or information being disclosed / degraded / denied."
"I don't have anything valuable on my system."

All of those are false, but intrusion tolerant systems (meaning the human plus the hardware and software) tolerate intrusions. What's worse is that modern threats understand these parameters and seek to work within them, rather than do something stupid like open and close a CD-ROM tray or waste bandwidth, tipping off the human by interfering with the operation of the system.

https://taosecurity.blogspot.com/2009/02/humans-not-computers-are-intrusion.html

Commentary

Like my friend Aaron Higbee at Cofense, I detest hearing that “humans are the weakest link” in security. Humans are an asset. They are sometimes the best means of identifying suspicious or malicious activity. People are intrusion tolerant, indeed.

Speaking of Incident Response
Saturday, April 18, 2009

In my last post I mentioned I will be speaking at another SANS IR event this summer. I just noticed a post on the ISC site titled Incident Response vs. Incident Handling. It states:

Incident Response is all of the technical components required in order to analyze and contain an incident. Incident Handling is the logistics, communications, coordination, and planning functions needed in order to resolve an incident in a calm and efficient manner.

That's not right, and never was. I tried pointing that out via a comment on the ISC post, but apparently the moderators aren't willing to accept contradictory comments. Incident response and incident handling are synonyms. If you need to differentiate between the role that does technical work and one which does leadership work, you can use incident response/handling for the former and incident management for the latter.

Ten years ago I took a course at CERT called Advanced Computer Security Incident Handling for Technical Staff. The class covered technical methodologies for responding to and handling incidents. The successor to that class is Advanced Incident Handling. Notice that CERT also offers the CERT®-Certified Computer Security Incident Handler certification. To CERT, incident response and incident handling are synonyms. If anyone should understand incidents, it's CERT.

I think SANS is the organization that needs to examine how it uses the term incident handler or incident handling. The GIAC Certified Incident Handler (GCIH) designation is 83% inappropriate. How do I arrive at that figure? If you review the day-by-day course overview you'll see that only one day, the first, involves Incident Handling Step-by-Step and Computer Crime Investigation. The next four days are Computer and Network Hacker Exploits, with the sixth day being an open lab. So, 5/6 of the class has little to nothing to do with incident response/handling.

This is a problem for three reasons. First, I have met people and heard of others who think they know how to "handle incidents" because they have the GCIH certification. "I'm certified," they say. This is dangerous. Second, respondents to the latest SANS 2008 Salary Survey considered their GCIH certification to be their most important certification. If you hold the GCIH and think it's important because you know how to "handle incidents," that is also dangerous. Third, SANS offers courses with far more IR relevance than that associated with GCIH, namely courses designed by Rob Lee. It's an historical oddity that keeps the name GCIH in play; it really should be retired, but there's too much "brand recognition" associated with it at this point. If you want to learn IR from SANS, see Rob.

To be fair, the title for the course which prepares students for the GCIH is Hacker Techniques, Exploits & Incident Handling. Putting IH at the end does list the subject in the proper context. I will also not deny that one should understand hacker techniques and exploits in order to do incident response/handling, but that knowledge should be its own material -- something to know in addition to the skills required for IR. Also, track 504 is really good; I remember it fondly, before it had that label. The material is kept fresh and the instructors are excellent.

The bottom line is that incident handling and response are synonyms, and those who think they are certified to do incident handling and response via GCIH are kidding themselves.

https://taosecurity.blogspot.com/2009/04/speaking-of-incident-response.html

Commentary

Look, a post where I disagree with SANS! It’s probably time to share a story that happened 20 years ago and explains my tenuous history with this organization.

Although I briefed AFCERT teams on technical topics in 1999, I delivered my first public technical talk to a SANS audience, on 25 March 2000. It was based on a paper titled “Interpreting Network Traffic: A Network Intrusion Detector's Look at Suspicious Events.” I spoke at the SANS 2000 Technical Conference. What I’ve kept quiet these past twenty years is that Stephen Northcutt was not happy with that paper. I contradicted what he had written in his first book on intrusion detection. I had a phone call with someone from SANS prior to the event. It may have been Stephen, but I can’t remember at this point. The tone of the conversation implied that SANS wanted to make sure I wouldn’t embarrass anyone. That is not my style. I was more interested in analysts not reporting so-called “RST scans” because they had read about them in a book and on the SANS web site.

I would later help SANS by ultimately teaching the entire SANS intrusion detection course, albeit not straight through. I first taught day four in San Antonio, Texas on 14 March 2002, after Marty Roesch was unable to do so. I taught day four again in Toronto, Ontario on 16 May 2002. Next I taught days one, two, and three in San Antonio from 15-17 July 2002, then days four, five, and six again in San Antonio from 28-30 January 2003.

I abandoned SANS after that because I could not come to grips with teaching their material. The last time I taught “SANS material,” I asked the class if they wanted to learn the slides, or what was happening in the real world. When they responded “real world,” I invented a new class on the spot using hands-on exercises. Only later did I come to understand the incentives associated with having a SANS instructor create a class, and then having others teach those slides. I decided it was not for me, so I had limited interactions with SANS for most of the rest of my career. I did keynote at a few SANS events and even organized and led the SANS WhatWorks in Incident Detection Summit 2009, held 9-10 Dec 2009.

SANS has done a lot of great work educating security professionals over the years, but I never really fit with some of their key players and methods.

Defender's Dilemma vs Intruder's Dilemma
Saturday, May 23, 2009

This is a follow-up to my post Response for Daily Dave. I realized I had a similar exchange three years ago, summarized in my post Response to Daily Dave Thread. Since I don't seem to be making much progress in this debate, I decided to render it in two slides.

First, I think everyone is familiar with the Defender's Dilemma. The intruder only needs to exploit one of the victims in order to compromise the enterprise. You might argue that this isn't true for some networks, but in most places if you gain a foothold it's quickly game over elsewhere.

What Dave and company don't seem to appreciate is that there is a similar problem for attackers. I call it the Intruder's Dilemma. The defender only needs to detect one of the indicators of the intruder’s presence in order to initiate incident response within the enterprise.

What's interesting about this reality is that it applies to a single system or to a collection of systems. Even if the intruder only compromises a single system, the variety of indicators available makes it possible to detect the attacker. Knowing where and when to look, and what to look for, becomes the challenge. However, as the scope of the incident expands to other systems, the probability of discovery increases. So, perversely, the bigger the incident, the more likely someone is going to notice.

Whether or not you can actually detect the intruder's presence depends on the amount of visibility you can achieve, and that is often outside the control of the security team because the security team doesn't own computing assets. However, this point of view can help you argue why you need the visibility to detect and respond to intrusions, even though you can't prevent them.

https://taosecurity.blogspot.com/2009/05/defenders-dilemma-and-intruders-dilemma.html

Commentary

This is one of my favorite posts, and I’ve seen and heard that many people have found it useful over the years. They’ve used it to justify NSM and other instrumentation and security operations projects, which makes me glad. It’s still true today, whatever the medium involved.
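The post's intuition that a larger incident footprint raises the odds of discovery can be sketched with a toy probability model. This is an editorial illustration, not Bejtlich's own math: assume each compromised system independently yields a detected indicator with probability p. Then the chance the defender catches at least one indicator across n systems is 1 - (1 - p)^n.

```python
# Toy model of the Intruder's Dilemma (illustrative assumption:
# each compromised system independently produces a detected
# indicator with probability p; the defender needs just one hit).

def p_detect_at_least_one(p: float, n: int) -> float:
    """Probability of detecting at least one indicator across n systems."""
    return 1 - (1 - p) ** n

# Even with weak per-system detection (p = 0.1), a spreading
# intrusion becomes progressively harder to hide:
for n in (1, 5, 10, 30):
    print(n, round(p_detect_at_least_one(0.1, n), 3))
```

Real indicators are correlated rather than independent, so treat this only as intuition for why "the bigger the incident, the more likely someone is going to notice": even a modest per-system detection rate compounds quickly as the intruder touches more systems.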

Offense and Defense Inform Each Other
Sunday, June 21, 2009

If you've listened to anyone talking about the Top 20 list called the Consensus Audit Guidelines recently, you've probably heard the phrase "offense informing defense." In other words, talk to your Red Team / penetration testers to learn how they can compromise your enterprise in order to better defend yourself from real adversaries.

I think this is a great idea, but there isn't anything revolutionary about it. It's really just one step above the previous pervasive mindset for digital security, namely identifying vulnerabilities. In fact, this neatly maps into my Digital Situational Awareness ranking. However, if you spend most of your time writing policy and legal documents, and not really having to deal with intrusions, this idea probably looks like a bolt of lightning!

And speaking of the Consensus Audit Guidelines: hey CAG! It's the year 2000 and the SANS Top 20 List wants to talk to you!

The SANS/FBI Top Twenty list is valuable because the majority of successful attacks on computer systems via the Internet can be traced to exploitation of security flaws on this list... In the past, system administrators reported that they had not corrected many of these flaws because they simply did not know which vulnerabilities were most dangerous, and they were too busy to correct them all... The Top Twenty list is designed to help alleviate that problem by combining the knowledge of dozens of leading security experts from the most security-conscious federal agencies, the leading security software vendors and consulting firms, the top university-based security programs, and CERT/CC and the SANS Institute.

Expect at some point to hear Beltway Bandits talking about how we need to move beyond talking to the Red Team and how we need to see who is actively exploiting us. Guess what -- that's where the detection and response team lives. Perhaps at some point these "thought leaders" will figure out the best way to defend the enterprise is through counterintelligence operations, like the police use against organized crime?

For now, I wanted to depict that while it is indeed important for offense to inform defense, the opposite is just as critical. After all, how is the Red Team supposed to simulate the adversary if it doesn't know how the adversary operates? A good Red Team can exploit a target using methods known to the Red Team. A great Red Team can exploit a target using methods known to the adversary. Therefore, I created an image describing how offense and defense inform each other. This assumes a sufficiently mature, resourced, and capable set of security teams.

This post may sound sarcastic but I'm not really bitter about the situation. If we keep making progress like this, in 3-5 years the mindset of the information security community will have evolved to where it needed to be ten years ago. I'll keep my eye on the Beltway Bandits to let you know how things proceed.

https://taosecurity.blogspot.com/2009/06/offense-and-defense-inform-each-other.html

Commentary

I wrote “This post may sound sarcastic but I'm not really bitter about the situation.” No, I was bitter. I’m still not happy about this “offense informing defense” mantra. The title reflects what should really happen: “Offense and Defense Inform Each Other.” Anything else is pandering to the offense.

I’d like to point out that I wrote something incorrect in this post, and I hope that if I say something similar elsewhere in this volume I catch it. I wrote “the best way to defend the enterprise is through counterintelligence operations, like the police use against organized crime.” That is not correct. “Counterintelligence” means what the word says: counter intelligence, or stopping/hindering/countering intelligence teams. Counterintelligence is not an operation informed by intelligence to counter an adversary. Those are just operations. If your operation is directed against the intelligence units of an adversary, that is counterintelligence.

The Centrality of Red Teaming
Sunday, June 21, 2009

In my last post I described how a Red Team can improve defense. I wanted to expand on the idea briefly.

First, I believe the modern enterprise is too complex for any individual or group to thoroughly understand how it can be compromised. There are so many links in the chain that even knowing they exist, let alone how they connect, can be impossible.

To flip that on its end, in a complementary way, the modern enterprise is too complex for any individual or group to thoroughly understand how its defenses can fail. The fact that vendors exist to reduce firewall rule sets down to something intelligible by mere mortals is a testament to the apocalyptic fail exhibited by digital defenses.

Furthermore, it is highly likely that hardly anyone cares about attack models until they have been demonstrated. We see this repeatedly with respect to software vulnerabilities. It can be difficult for someone to take a flaw seriously until a proof of concept is shown to exploit a victim. L0pht's motto "Making the theoretical practical since 1992" is a perfect summarization of this phenomenon.

So why mention Red Teams? They are central to digital defense because Red Teams transform theoretical intrusion scenarios into reality in a controlled and responsible manner. It is much more realistic to use your incident detection and response teams to know what adversaries are actually doing. However, if you want to be more proactive, you should deploy your Red Team to find and connect those links in the chain that result in a digital disaster.

https://taosecurity.blogspot.com/2009/06/centrality-of-red-teaming.html

Commentary

Just when you think I don’t like offensive security teams, I write a post like this! Of course I like red teams, when they operate in the manner described in this post. Red teaming is a high-end activity that has a proper place in security strategy, operations, and tactics, but it is not where I would start when defending an enterprise. One should begin with determining if the organization and its assets are already compromised, via a “compromise assessment.” After all, what is the point of the rest of security?

The Problem with Automated Defenses
Tuesday, June 23, 2009

Automation is often cited as a way to "do more with less." The theory is that if you can automate aspects of security, then you can free resources. This is true up to a point. The problem with automation is this:

Automated defenses are the easiest for an intruder to penetrate, because the intruder can repeatedly and reliably test attacks until he determines they will be successful and potentially undetectable.

I hope no one is shocked by this. In a previous life I worked in a lab that tested intrusion detection products. Our tests were successful when an attack passed by the detection system with as little fuss as possible. That's not just an indictment of "IDS"; that approach works for any defensive technology you can buy or deploy off-the-shelf, from anti-malware to host IPS to anything that impedes an intruder's progress. Customization and localization helps make automation more effective, but that tends to cost resources.

So, automation by itself isn't bad, but mass-produced automation can provide a false sense of security past a certain point. In tight economic conditions there is a strong managerial preference for the so-called self-defending network, which ends up being a self-defeating network for the reason in bold.

A truly mature incident detection and response operation exists because the enterprise is operating a defensible network architecture, and someone has to detect and respond to the failures that happen because prevention eventually fails. CIRTs are ultimately exception handlers that deal with everything that falls through the cracks. The problem happens when the cracks are the size of the Grand Canyon, so the CIRT deals with intrusions that should have been stopped by good IT and security practices.

https://taosecurity.blogspot.com/2009/06/problem-with-automated-defenses.html

Commentary

So-called “artificial intelligence” or AI is the latest in the drive towards automating security. We will never fully automate defense but we can certainly benefit from automation. The best answer is a hybrid of humans and tools, as has been the case since the first cave person picked up a pointy stick or rock.

Incident Detection Mindset
Thursday, August 13, 2009

Often you will read or hear about a "security mindset," but this is frequently an "offensive security mindset." This attitude is also called a "breaker" mindset, described in my old post On Breakership. The offensive security mindset means looking at features of the physical or digital worlds and reflexively figuring out ways to circumvent their security or lack of security. Johnny Long is one example of a person with this mindset -- pretty much every place he looks he is figuring out a way to profile or subvert what he sees! To a certain extent this mindset can be taught, although one could argue that truly exceptional offensive security pros have this mindset embedded in their DNA.

It occurred to me today, after writing Build Visibility In, that I have a different mindset. I have an incident detection mindset. Often when I interact with the physical or digital worlds, I reflexively wonder how can I tell if this feature is trustworthy? For example, when I first received my Corporate laptop, I wondered "how can I tell if this box is owned?" When I received my Blackberry, I wondered "how can I tell when this device is owned?" In other words, if the device is compromised, it is not trustworthy. How can I tell?

The prevailing security mindset is a "defensive security mindset," where security people are taught to plan for and resist incidents. This attitude is necessary but not sufficient. We need people who plan for and resist incidents, people who can detect and respond to incidents, and people who can think offensively to assist those who work defensively. I believe all three of these mindsets can be taught, but of the three I think the incident detection mindset is the rarest. Working to develop an incident detection mindset is one of the goals of this blog, and of posts like this one and the last.

https://taosecurity.blogspot.com/2009/08/incident-detection-mindset.html

Commentary

What is your mindset? Did you think about it before today? Has it changed over time? Where did it come from?

Protect the Data Idiot!
Saturday, October 10, 2009

The 28 September 2009 issue of InformationWeek cited a comment posted to one of their forums. I'd like to cite an excerpt from that comment.

“[W]e tend to forget the data is the most critical asset. yet we spend inordinate time and resources trying to protect the infrastructure, the perimeter... the servers etc. I believe and [sic] information-centric security approach of protecting the data itself is the only logical approach to keep it secure at rest, in motion and in use.”

I hear this "protect the data" argument all the time. I think it is one of the most misinformed comments that one can make. I think of Chris Farley smacking his head saying "IDIOT!" when I hear "protect the data." "Oh right, that's what we should have been doing for the last 10, 20, 30 years -- protect the data! I feel so stupid to have not done that! IDIOT!"

"Protect the data" represents a nearly fatal misunderstanding of information security. I'm tired of hearing it, so I'm going to dismantle the idea in this post. Now that I've surely offended someone, here are my thoughts.

Someone show me "data." What is "data" anyway? Let's assume it takes electronic form, which is the focus of digital security measures. This is the first critical point: Digital data does not exist independently of a container. Think of the many containers which hold data. Imagine looking at a simple text file retrieved from a network share via NFS and viewed with a text editor.

Data exists as an image rendered on a screen attached to the NFS client.
Data exists as a temporary file on the hard drive of the NFS client, and as a file on the hard drive of the NFS server.
Data exists in memory on the NFS client, and in memory on the NFS server.
The NFS client and server are computers sitting in facilities.
Network infrastructure carries data between the NFS client and server.
Data exists as network traffic exchanged between the NFS client and server.
If the user prints the file, it is now contained on paper (in addition to involving a printer with its own memory, hard drive, etc.)
The electromagnetic spectrum is a container for data as it is transmitted by the screen, carried by network cables and/or wireless media, and so on.

That's eight unique categories of data containers. Some smart blog reader can probably contribute two others to round out the list at ten! So where exactly do we "protect the data"? "In motion/transit, and at rest" are the popular answers. Good luck with that. Seriously.

This leads to my second critical point: If an authorized user can access data, so can an unauthorized user. Think about it. Any possible countermeasure you can imagine can be defeated by a sufficiently motivated and resourced adversary. One example:

"Solution:" Encrypt everything! Attack: Great, wait until an authorized user views a sensitive document, and then screen-scrape every page using the malware installed last week.

If you doubt me, consider the "final solution" that defeats any security mechanism: Become an authorized user, e.g., plant a mole/spy/agent. If you think you can limit what he or she can remove from a "secure" site, plant an agent with a photographic memory. This is an extreme example but the point is that there is no "IDIOT" solution out there.

I can make rational arguments for a variety of security approaches, from defending the network, to defending the platform, to defending the operating system, to defending the application, and so on. At the end of the day, don't think that wrapping a document in some kind of rights management system or crypto is where "security" should be heading. I don't disagree that adding another level of protection can be helpful, but it's not like intruders are going to react by saying "Shucks, we're beat! Time to find another job." Intruders who encounter so-called "protect the data" approaches are going to break them like every other countermeasure deployed so far. It's just a question of how expensive it is for the intruder to do so.

Attackers balance effort against "return" like any other rational actor, and they will likely find cheap ways to evade "protect the data" approaches. Only when relying on human agents is the cheapest way to steal data, or when it's cheaper to research and develop one's own data, will digital security be able to declare "victory." I don't see that happening soon; no one in history has ever found a way to defeat crime, espionage, or any of the true names for the so-called "information security" challenges we face.

https://taosecurity.blogspot.com/2009/10/protect-data-idiot.html

Commentary

I love this: “Now that I've surely offended someone, here are my thoughts.” That’s the story of quite a bit of this blog. I was certainly not very tolerant of illogical or silly ideas, and even today I encounter the same “protect the data” mindset, as if it’s some deep insight.

Protect the Data from Whom?
Sunday, October 11, 2009

This is a follow-on from my "Protect the Data" Idiot! post. Another question to consider when someone says "protect the data" is this: "from whom?" The answer makes all the difference.

I remember a conversation I overheard or read involving Marcus Ranum and a private citizen discussing threats from nation-state actors.

Questioner: How do you protect yourself from nation-state actors?
MJR: You don't.
Q: What do you do then?
MJR: You lose.

In other words, private citizens (and most organizations who are not nation-state actors) do not have a chance to win against a sufficiently motivated and resourced high-end threat. The only actors who have a chance of defending themselves against high-end threats are other nation-state actors.

Furthermore, the defenders don't necessarily have a defensive advantage over average joes because the nation-state possesses superior people, products, or processes. Many nation-state actors are deficient in all three. Rather, nation-state actors can draw on other instruments of power that are unavailable to average joes. I outlined this approach in my posts The Best Cyber-Defense, Digital Situational Awareness Methods, and Counterintelligence Options for Digital Security:

[T]he best way to protect a nation's intelligence from enemies is to attack the adversary's intelligence services. In other words, conduct aggressive counterintelligence to find out what the enemy knows about you.

In the "protect the data" scenario, this means knowing how the adversary can access the containers holding your data. Nation-states are generally the only organizations with the discipline, experience, and funding to conduct these sorts of CI actions. They are not outside the realm of organized crime or certain private groups with CI backgrounds.

To summarize, it makes no sense to ponder how to "protect the data" without determining which adversaries want it. If we unify against threats we can direct our resources against the adversaries we can possibly counter independently, and then petition others (like our governments and law enforcement) to collaborate against threats that outstrip our authority and defenses.

https://taosecurity.blogspot.com/2009/10/protect-data-from-whom.html

Commentary

I was relieved to read that I discussed counterintelligence in the proper manner, e.g. “[T]he best way to protect a nation's intelligence from enemies is to attack the adversary's intelligence services. In other words, conduct aggressive counterintelligence to find out what the enemy knows about you.” As you saw earlier, counterintelligence means countering adversary intelligence teams.

Protect the Data -- Where?
Sunday, October 11, 2009

I forgot to mention another thought in my last post, "Protect the Data" from Whom? Intruders are not mindlessly attacking systems to access data. Intruders direct their efforts toward the sources that are easiest and cheapest to exploit. This produces an interesting corollary.

Once other options have been eliminated, the ultimate point at which data will be attacked will be the point at which it is useful to an authorized user. For example, if a file is only readable once it has been decrypted in front of a user, that is where the intruder will attack once his other options have been exhausted.

This means that the only way to completely "protect data" is to make it unusable. If data is not usable then it doesn't need to exist, so that means intruders will always be able to access data if they are sufficiently resourced and motivated, as explained in my first post on this subject.

https://taosecurity.blogspot.com/2009/10/protect-data-where.html

Commentary

I’ll save my comments for the last post in this quaternary.

Protect the Data -- What Data? Tuesday, October 13, 2009

This is another follow-on from my "Protect the Data" Idiot! post. If you think about the "protect the data" mindset, it's clearly a response to the sorts of data loss events that involve "records" -- credit card records, Personally Identifiable Information (PII), and the like. In fact, there's an entire "product line" built around this problem: data loss prevention. I wrote about DLP earlier this year in response to the rebranding effort taken by vendors to make whatever they sold part of the DLP "solution."

What's interesting to me about "protect the data" in this scenario is this: "what data?" Is your purpose in life to keep PII or other records in a database? That's clearly a big problem, but it doesn't encompass the whole security problem. What about the following?

- Credentials used to access systems. For example, intruders often compromise service accounts that have wide-ranging access to enterprise systems. Those credentials can be retrieved from many locations. How do you protect those?
- Systems that don't house PII or other records, but do serve critical functions. Your PBX, HVAC control system, routers, other network middleboxes, etc., are all important. Try accessing "data" without those devices working.
- Data provided by others. The enterprise isn't just a data sink. Users make decisions and work by relying on data provided by others. Who or what protects that data?

Those are three examples. If you spend time thinking about the problem you can probably identify many other forms of data that are outside the "DLP" umbrella, and outside the "protect the data" umbrella.

https://taosecurity.blogspot.com/2009/10/protect-data-what-data.html

Commentary

I hope this four-post series shows how empty and useless the phrase and concept “protect the data” really is. It adds nothing to the discussion.

Cyberwar Is Real Sunday, July 04, 2010

A number of people, inside and outside the security world, think that any discussion of real threats is a manufactured justification for intrusive government action. Their argument is simple. The government wants to control the people, or obtain a resource, or pursue some objective that could not be reasonably achieved if transparently presented to the citizenry. The government "propaganda machine," sometimes in coordination with "the media" and "big business," "manufactures" a "crisis" whose only solution is increased government power. The people acquiesce in order to preserve their safety, and the government achieves its objective.

As a result, those who see the world in this manner treat any discussion of real threats as step 2 in this process towards decreased liberty via increased government power. Those who seek to inform the citizenry of real threats are dismissed as sowing "FUD." This is a tragedy, because it means that we continue to suffer at the hands of real threats who laugh while pillaging their targets.

Yes, there are surely those in government who see any crisis as an opportunity to advance their agenda. Yes, governments have manufactured threats in the past to justify action. I am a history major so I am well schooled in these events, and as a libertarian I am suspicious of the government. However, I am not blinded to reality, unlike those who choose to dismiss threats as "simple espionage" and the like.

In the past I've been somewhat ambiguous about cyberwar. Starting now, I've decided to say it: cyberwar is real. The reason some others aren't willing to say this is because they are keeping their minds narrowed to historical definitions of war, or they are not aware of the "facts on the ground," or they choose to ignore facts because they see them as elements of "step 2" and thereby inherently false.

I mentioned in a recent post that Attrition.org has decided to ridicule those who quote Sun Tzu, and I largely agree. At the micro level of civilian defense of corporate systems, where defenders cannot strike back, "war" does not seem to be the correct paradigm, so Sun Tzu fails as a way to interpret enterprise defense. However, at the level of nation states, the entities which wage war, Sun Tzu is as applicable as ever. And this is the problem with those who dismiss cyberwar; they think that without bullets being fired, there is no war. Sun Tzu would laugh at that:

For to win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill.

Bruce Lee, and before him Tsukahara Bokuden, understood that "fighting without fighting" is the highest form of war. Cyberwar, therefore, may be seen as a means to subdue the enemy without traditional "fighting." It's likely that if those who dismiss cyberwar as "simple espionage" gain the political and philosophical high ground, and threats continue to ravage their victims, no bullets would ever need to be fired. The victim would not need to be "conquered" by traditional means; physical "war" would be redundant.

Does all this mean I agree with government plans to "defend" the Internet? Of course not. However, it is foolish to dismiss the threat because one does not agree with a government-proposed "solution."

https://taosecurity.blogspot.com/2010/07/cyberwar-is-real.html

Commentary Ultimately cyberwar (as it is now called, I wager) is a semantic argument. If you define war as an event that requires physically killing another person, then only when digital attacks render kinetic effects is cyberwar in play. In this post I was arguing for a larger definition, more in line with what Russian and Chinese theorists follow. There will be more about that in the rest of these volumes.

Over Time, Intruders Improvise, Adapt, Overcome Tuesday, September 18, 2012

Today I read a well-meaning question on a mailing list asking for help with the following statement: "Unpatched systems represent the number one method of system compromise." This is a common statement and I'm sure many of you can find various reports that claim to corroborate this sentiment. I'm not going to argue that point. Why am I still aggravated by this statement then?

This sentiment reflects static thinking. It ignores activity over time. For both opportunistic and targeted threats, when exploiting unpatched vulnerabilities no longer works, intruders will escalate over time to attacks that do work.

I recognize that if you have to start your security program somewhere, addressing vulnerabilities is a good idea. I get that as a Chief Security Officer. However, the tendency for far too many involved with security, from the CTO or CIO perspective, is to then conclude that "patched = secure." At best, patching reduces a certain amount of noise because it deflects opportunistic attacks that work against weaker peers. Should patching become more widespread, opportunistic attackers adopt 0-days. We've been seeing that in spades over the last few months, even without widespread adoption of patches.

In the case of targeted attacks, patching drives intruders to try other means of exploitation. I've seen this first hand, with intruders adopting 0-days as a matter of course or trying other attack vectors. Targeted intruders learn not to trip traditional defenses while failing to exploit well-known vulnerabilities.

If someone asks you if "unpatched systems represent the number one method of system compromise," please keep this post in mind. Remember we face an intelligent adversary who, over time, acts to improvise, adapt and overcome. We must do the same, over time.

https://taosecurity.blogspot.com/2012/09/over-time-intruders-improvise-adapt.html

Commentary

Static thinking is prevalent in many areas of life. It’s no surprise that we find it in the digital realm as well. Whenever evaluating a course of action, it’s prudent to consider “now what?” when weighing various options. Time continues regardless of what we may think or want.

Redefining Breach Recovery Saturday, June 13, 2015

For too long, the definition of "breach recovery" has focused on returning information systems to a trustworthy state. The purpose of an incident response operation was to scope the extent of a compromise, remove the intruder if still present, and return the business information systems to pre-breach status. This is completely acceptable from the point of view of the computing architecture.

During the last ten years we have witnessed an evolution in thinking about the likelihood of breaches. When I published my first book in 2004, critics complained that my "assumption of breach" paradigm was defeatist and unrealistic. "Of course you could keep intruders out of the network, if you combined the right controls and technology," they claimed. A decade of massive breaches has demonstrated that preventing all intrusions is impossible, given the right combination of adversary skill and persistence, and lack of proper defensive strategy and operations.

We now need to move beyond the arena of breach recovery as a technical and computing problem. Every organization needs to think about how to recover the interests of its constituents, should the organization lose their data to an adversary. Data custodians need to change their business practices such that breaches are survivable from the perspective of the constituent. (By constituent I mean customers, employees, partners, vendors -- anyone dependent upon the practices of the data custodian.)

Compare the following scenarios. If an intruder compromises your credit card, it is fairly painless for a consumer to recover. There is a $50 or less financial penalty. The bank or credit card company handles replacing the card. Credit monitoring and related services are generally adequate for limiting damage. Your new credit card is as functional as the old credit card.

If an intruder compromises your Social Security number, recovery may not be possible. The financial penalties are unbounded. There is no way to replace a stolen SSN. Credit monitoring and related services can only alert citizens to derivative misuse, and the victim must do most of the work to recover -- if recovery is possible at all. The citizen is at risk wherever other data custodians rely on SSNs for authentication purposes.

This SSN situation, and others, must change. All organizations that act as data custodians must evaluate the data in their control, and work to improve the breach recovery status for their constituents. For SSNs, this means eliminating their secrecy as a means of authentication. This will be a massive undertaking, but it is necessary.

It's time to redefine what it means to recover from a breach, and put constituent benefit at the heart of the matter, where it belongs.

https://taosecurity.blogspot.com/2015/06/redefining-breach-recovery.html

Commentary

Some people think I’m trying to let organizations off the hook when I blame criminals for intrusions. The fact is that, on the civil side, I believe we can also hold data custodians negligent, which means they share responsibility for intrusions in a non-criminal context. This post talks about defining breach recovery in terms of those most affected, such as the innocent parties whose data and information were compromised. They tend to lack any voice, aside from empty promises of “credit monitoring.”

Forcing the Adversary to Pursue Insider Theft Saturday, February 09, 2019

Jack Crook pointed me toward a story by Christopher Burgess about intellectual property theft by "Hongjin Tan, a 35-year-old Chinese national and U.S. legal permanent resident... [who] was arrested on December 20 and charged with theft of trade secrets. Tan is alleged to have stolen the trade secrets from his employer, a U.S. petroleum company," according to the criminal complaint filed by the US DoJ.

Tan's former employer and the FBI allege that Tan "downloaded restricted files to a personal thumb drive." I could not tell from the complaint if Tan downloaded the files at work or at home, but the thumb drive ended up at Tan's home. His employer asked Tan to bring it to their office, which Tan did. However, he had deleted all the files from the drive. Tan's employer recovered the files using commercially available forensic software.

This incident, by definition, involves an "insider threat." Tan was an employee who appears to have copied information that was outside the scope of his work responsibilities, resigned from his employer, and was planning to return to China to work for a competitor, having delivered his former employer's intellectual property.

When I started GE-CIRT in 2008 (officially "initial operating capability" on 1 January 2009), one of the strategies we pursued involved insider threats. I've written about insiders on this blog before but I couldn't find a description of the strategy we implemented via GE-CIRT. We sought to make digital intrusions more expensive than physical intrusions. In other words, we wanted to make it easier for the adversary to accomplish his mission using insiders. We wanted to make it more difficult for the adversary to accomplish his mission using our network.

In a cynical sense, this makes security someone else's problem. Suddenly the physical security team is dealing with the worst of the worst! This is a win for everyone, however. Consider the many advantages the physical security team has over the digital security team.

- The physical security team can work with human resources during the hiring process. HR can run background checks and identify suspicious job applicants prior to granting employment and access.
- Employees are far more exposed than remote intruders. Employees, even under cover, expose their appearance, likely residence, and personalities to the company and its workers.
- Employees can be subject to far more intensive monitoring than remote intruders. Employee endpoints can be instrumented. Employee workspaces are instrumented via access cards, cameras at entry and exit points, and other measures.
- Employers can cooperate with law enforcement to investigate and prosecute employees. They can control and deter theft and other activities.

In brief, insider theft, like all "close access" activities, is incredibly risky for the adversary. It is a win for everyone when the adversary must resort to using insiders to accomplish their mission. Digital and physical security must cooperate to leverage these advantages, while collaborating with human resources, legal, information technology, and business lines to wring the maximum results from this advantage.

https://taosecurity.blogspot.com/2019/02/forcing-adversary-to-pursue-insider.html

Commentary

Forcing the adversary to perform close access operations as a true insider threat is one of the outcomes that one could hope to see, depending on the defensive strategies prosecuted by a high-performing security organization. It is not a common mechanism but it is an option for those equipped to think and act in this manner.

Know Your Limitations Wednesday, May 29, 2019

At the end of the 1973 Clint Eastwood movie Magnum Force, after Dirty Harry watches his corrupt police captain explode in a car, he says "a man's got to know his limitations." I thought of this quote today as the debate rages about compromising municipalities and other information technology-constrained yet personal information-rich organizations.

Several years ago I wrote If You Can't Protect It, Don't Collect It. I argued that if you are unable to defend personal information, then you should not gather and store it. In a similar spirit, here I argue that if you are unable to securely operate information technology that matters, then you should not be supporting that IT. You should outsource it to a trustworthy cloud provider, and concentrate on managing secure access to those services. If you cannot outsource it, and you remain incapable of defending it natively, then you should integrate a capable managed security provider.

It's clear to me that a large portion of those running PI-processing IT are simply not capable of doing so in a secure manner, and they do not bear the full cost of PI breaches. They have too many assets, with too many vulnerabilities, and are targeted by too many threat actors. These organizations lack sufficient people, processes, and technologies to mitigate the risk.

They have successes, but they are generally due to the heroics of individual IT and security professionals, who often feel out-gunned by their adversaries. If you can't patch a two-year-old vulnerability prior to exploitation, or detect an intrusion and respond to the adversary before he completes his mission, then you are demonstrating that you need to change your entire approach to information technology.

The security industry seems to think that throwing more people at the problem is the answer, yet year after year we read about several million job openings that remain unfilled. This is a sign that we need to change the way we are doing business. The fact is that those organizations that cannot defend themselves need to recognize their limitations and change their game.

I recognize that outsourcing is not a panacea. Note that I emphasized "IT" in my recommendation. I do not see how one could outsource the critical technology running on-premise in the industrial control system (ICS) world, for example. Those operations may need to rely more on outsourced security providers, if they cannot sufficiently detect and respond to intrusions using in-house capabilities.

Remember that the vast majority of organizations do not exist to run IT. They run IT to support their lines of business. Many older organizations have indeed been migrating legacy applications to the cloud, and most new organizations are cloud-native. These are hopeful signs, as the older organizations could potentially "age out" over time.

This puts a burden on the cloud providers, who fall into the "managed service provider" category that I wrote about in my recent Corelight blog. However, the more trustworthy providers have the people, processes, and technology in place to handle their responsibilities in a more secure way than many organizations who are struggling with on-premise legacy IT.

Everyone's got to know their limitations.

https://taosecurity.blogspot.com/2019/05/know-your-limitations.html

Commentary The rise of cloud computing has been a benefit to organizations of all shapes and sizes, but particularly those too small to staff competent information technology and security teams. Most organizations want to provide whatever good or service they were originally started to provide, and not worry about hardware and software. I estimate we will see at least one more generation of organizations doing far too much in-house computing before we see the weaker organizations relinquish their IT to specialized providers, just as happened with the electricity production industry.

Seven Security Strategies, Summarized Wednesday, November 06, 2019

This is the sort of story that starts as a comment on Twitter, then becomes a blog post when I realize I can't fit all the ideas into one or two Tweets. (You know how much I hate Tweet threads, and how I encourage everyone to capture deep thoughts in blog posts!) In the interest of capturing the thought, and not in the interest of thinking too deeply or comprehensively (at least right now), I offer seven security strategies, summarized.

When I mention the risk equation, I'm talking about the idea that one can conceptually imagine the risk of some negative event using this "formula": Risk (of something) is the product of some measurements of Vulnerability X Threat X Asset Value, or R = V x T x A.

1. Denial and/or ignorance. This strategy assumes the risk due to loss is low, because those managing the risk assume that one or more of the elements of the risk equation are zero or almost zero, or they are apathetic to the cost.

2. Loss acceptance. This strategy may assume the risk due to loss is low, or more likely those managing the risk assume that the cost of risk realization is low. In other words, incidents will occur, but the cost of the incident is acceptable to the organization.

3. Loss transferal. This strategy may also assume the risk due to loss is low, but in contrast with risk acceptance, the organization believes it can buy an insurance policy which will cover the cost of an incident, and the cost of the policy is cheaper than alternative strategies.

4. Vulnerability elimination. This strategy focuses on driving the vulnerability element of the risk equation to zero or almost zero, through secure coding, proper configuration, patching, and similar methods.

5. Threat elimination. This strategy focuses on driving the threat element of the risk equation to zero or almost zero, through deterrence, dissuasion, cooption, bribery, conversion, incarceration, incapacitation, or other methods that change the intent and/or capabilities of threat actors.

6. Asset value elimination. This strategy focuses on driving the asset value element of the risk equation to zero or almost zero, through minimizing data or resources that might be valued by adversaries.

7. Interdiction. This is a hybrid strategy which welcomes contributions from vulnerability elimination, primarily, but is open to assistance from loss transferal, threat elimination, and asset value elimination. Interdiction assumes that prevention eventually fails, but that security teams can detect and respond to incidents post-compromise and pre-breach. In other words, some classes of intruders will indeed compromise an organization, but it is possible to detect and respond to the attack before the adversary completes his mission.

As you might expect, I am most closely associated with the interdiction strategy.

I believe the denial and/or ignorance and loss acceptance strategies are irresponsible. I believe the loss transferal strategy continues to gain momentum with the growth of cybersecurity breach insurance policies. I believe the vulnerability elimination strategy is important but ultimately, on its own, ineffective and historically shown to be impossible. When used in concert with other strategies, it is absolutely helpful.

I believe the threat elimination strategy is generally beyond the scope of private organizations. As the state retains the monopoly on the use of force, usually only law enforcement, military, and sometimes intelligence agencies can truly eliminate or mitigate threats. (Threats are not vulnerabilities.)

I believe asset value elimination is powerful but has not gained the ground I would like to see. This is my "If you can’t protect it, don’t collect it" message. The limitation here is obviously one's raw computing elements. If one were to magically strip down every computing asset into basic operating systems on hardware or cloud infrastructure, the fact that those assets exist and are networked means that any adversary can abuse them for mining cryptocurrencies, or as infrastructure for intrusions, or for any other uses of raw computing power.

Please notice that none of the strategies listed tools, techniques, tactics, or operations. Those are important but below the level of strategy in the conflict hierarchy. I may have more to say on this in the future.

https://taosecurity.blogspot.com/2019/11/seven-security-strategies-summarized.html

Commentary

If you’d like to know more about this topic, please feel free to listen to my webinars on YouTube.
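The strategies in the post each target one factor of the risk equation R = V x T x A. Because the factors multiply, driving any single factor toward zero drives the whole product toward zero. The toy Python sketch below illustrates that arithmetic; the ratings and scenario names are illustrative assumptions on an arbitrary 1-to-5 scale, not taken from the post.

```python
# Toy model of the risk equation R = V x T x A, with each factor
# rated on an arbitrary 1-to-5 scale. All numbers are illustrative.

def risk(vulnerability: int, threat: int, asset_value: int) -> int:
    """Risk as the product of vulnerability, threat, and asset value."""
    return vulnerability * threat * asset_value

# A hypothetical baseline: significant vulnerability, active threat,
# valuable asset.
baseline = risk(vulnerability=4, threat=4, asset_value=5)  # 80

# Vulnerability elimination: patching and secure configuration drive V down.
after_patching = risk(vulnerability=1, threat=4, asset_value=5)  # 20

# Asset value elimination: collect and retain less, so A shrinks.
after_minimizing = risk(vulnerability=4, threat=4, asset_value=1)  # 16

# Multiplication is why each strategy can focus on a single element:
# zeroing any one factor zeroes the product.
assert after_patching < baseline and after_minimizing < baseline
```

The sketch also shows the limitation noted above: as long as an asset exists and is networked, its value never truly reaches zero, so the product never does either.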

Conclusion There were no shortages of opinions in this chapter, although I clearly took some time off from espousing philosophy and strategy from 2015 until 2018. This was my security burnout period, a time which I discuss in a post republished later in this volume. I will also discuss specific areas of my philosophy in later chapters, such as those devoted specifically to network security monitoring.

Chapter 3. Risk

Introduction I’ve had a “use it - hate it” relationship with the concept of “risk,” especially because, as I’ve come to appreciate, the word “risk” should never be used in isolation. To speak of “risk” alone is to say nothing. To be useful, we must think and speak in terms of the risk “of some negative consequence.” These posts will explain why and how.

The Dynamic Duo Discuss Digital Risk Monday, October 27, 2003

I've been reading books and looking at product literature which discuss "security," "risk," "threat," and "vulnerability," each with a different definition. I don't think these terms are difficult to understand. I wrote the hopefully amusing vignette below to communicate my understanding of these terms. At least it won't bore you!

Meanwhile, at the Hall of Justice...

BATMAN: Robin, why the puzzled look?
ROBIN: Sorry, Batman.
B: Are my Bat Ears crooked again?
R: No Batman. I've been reading some books and vendor marketing literature on security, and I'm confused by their definitions of risk, vulnerability, and threat.
B: Oh, you've been researching to protect the Hall of Justice computer? Good for you. Tell me why you're confused.
R: I see so many people calling "vulnerabilities" and "threats" the same thing.
B: That's certainly not right. A vulnerability is a weakness in an asset which could lead to exploitation. A threat is a party with the capabilities and intentions to exploit a vulnerability in an asset.
R: Huh?
B: Let's try a few examples. Consider Superman.
R: I do, often.

B: I don't want to hear about that. Superman is an asset to the Hall of Justice, true?
R: He's definitely an asset.
B: I bet you think so. Think of Superman as an asset of the Hall of Justice's crime fighting arsenal. What is his weakness?
R: Kryptonite?
B: Close. Superman's weakness -- his vulnerability -- is the fact that Kryptonite nullifies the effect of the Earth's yellow sun, removing his super powers.
R: So what is Kryptonite?
B: Kryptonite is a weapon, or tool. But on its own it's nothing -- unless used by an evil party.
R: Like Lex Luthor?
B: Exactly. Lex Luthor is a threat, but only if he's carrying Kryptonite.
R: Lex Luthor is the threat, because his intentions are to harm Superman and his capability is instantiated by possession of Kryptonite. How does risk fit into this?
B: Let's define risk. Risk is the possibility of suffering harm or loss. It's a measure of danger. The loss of Superman would deal a crushing blow to the Hall of Justice's ability to fight crime.
R: That means we're talking about the risk of loss of Superman's crime fighting abilities, or more generally the loss of Superman. I don't know how to express that formally.
B: Let me help. Risk is the product of multiplying measurements of threat by vulnerability by cost of replacing an asset, also called that asset's value. So, R = T x V x C.

R: You did say risk was a measurement of the probability of loss. I don't know what the numbers should be for any of those factors.
B: It's ok to assign arbitrary values, say 1 to 5 for each factor, as long as you use the same scale when measuring different risks. How would you assess the risk to the Hall of Justice now?
R: I would assign a Kryptonite-equipped Luthor as threat 4, with Superman's vulnerability as 4, and cost as 5, for a total of 80.
B: Why didn't you assign the threat and vulnerability to each be 5? A Kryptonite-equipped Luthor has capabilities and intentions, and Superman's weakness can kill him.
R: I assessed the threat as 4 because I know Luthor has Kryptonite, but I don't know if he has enough to kill Superman.
B: That is prudent. His capability to exploit Superman could be diminished. You're factoring in uncertainty. How about the vulnerability rating?
R: Superman isn't completely vulnerable, since we fellow Super Friends would protect him if Lex appeared.
B: So you mean we Super Friends could be considered countermeasures to Superman's vulnerability?
R: Yes! Is that why the risk equation doesn't explicitly mention countermeasures?
B: You catch on quickly Robin. Although countermeasures could be included in the risk equation, they complicate the issue mathematically. Better to decrease the vulnerability rating if the countermeasure effectively mitigates the asset's weakness.
R: Batman, I'm starting to understand. What is security then?
B: Security is the process of maintaining an acceptable level of perceived risk.

R: That seems awfully specific.
B: Let me explain with another example. You know Fort Knox? And the gold it protects?
R: Of course. Gold is the asset protected by Fort Knox.
B: Let's assess the risk of theft of Fort Knox's gold. Risk is the probability of loss, remember? Assume that Fort Knox is so well protected, it has no vulnerabilities capable of exploitation by any human, Super Friend, or Legion of Doom member. Only a force of nature could damage Fort Knox, like a meteorite from space wiping out Kansas.
R: Holy invincibility, Batman! Let me see... I'd say the threat is low, maybe a 1, since there are evil parties with intentions to steal Fort Knox's gold. Since Fort Knox is invulnerable to anything but a force of nature, no party has the capability to harm it. I'd assess the vulnerability as 1, since Fort Knox could still be wiped out by that meteorite from space. The cost of replacement is immense -- definitely 5. That gives us 1 x 1 x 5 = 5. That means...
B: That's right Robin. The risk of the loss of Fort Knox's gold is 5, a very small number.
R: So Fort Knox's gold is secure?
B: It's almost perfectly secure, especially compared to Superman as a Hall of Justice asset. Let's change the equation. Do you know of the Marvel universe?
R: The what?
B: It's the source of better movies than our own DC universe. Anyway, in the Marvel universe, a creature called the Hulk exists.
R: Tell me about this beast.
B: For the purposes of this argument, believe that the Hulk could smash his way into Fort Knox if he so chose.
R: Is the Hulk evil? Does he covet gold?
B: No, he's a powerful but misunderstood creature. Do you know what you just did?
R: Let me guess -- I performed a threat analysis?
B: Excellent Robin. Your shorts aren't too tight after all. Now, on to the next step -- risk analysis.
R: Given the presence of the Hulk, I would assess the threat as a 4, the vulnerability as a 2, and the cost as a 5.
B: Why did you raise the threat level? I told you the Hulk wouldn't harm Fort Knox.
R: Maybe the Legion of Doom could trick the Hulk into breaching Fort Knox? Then the Hulk would have the capabilities and intentions to exploit the Fort.
B: Very good.
R: And I rated the vulnerability as a 2 and not higher, as even a creature like the Hulk would have a tough time powering his way through all that concrete and steel, surely?
B: True enough. You're getting the hang of this, Robin.
R: Thanks Batman. You're swell. Can I try this sort of analysis using the Hall of Justice computer?
B: You bet. We run OpenBSD on the Hall of Justice machine. Do you know if it has any vulnerabilities?
R: Well, I haven't updated OpenSSH yet, so there is a vulnerability. That's a 5. Let me do a threat analysis next. I would identify the threat as the Legion of Doom. Specifically, I bet Brainiac could code a tool that would exploit the vulnerable OpenSSH daemon.
B: That means the Legion of Doom has the capabilities and intentions to harm the Hall of Justice computer. We call that a "current credible threat."
R: I'd rate the threat a 4, since we aren't 100% sure the Legion of Doom has an exploit. They are definitely capable of writing it though. That leaves the cost of replacement, which I would assess as a 5. The Hall of Justice computer is a piece of critical infrastructure. The risk of loss of the Hall of Justice computer is 4 x 5 x 5 = 100. That's immense!
B: Get to patching, Robin.
R: How can we reduce risk, Batman?
B: We can't reduce risk directly. We can only affect each of the factors. For the threat component, we could eliminate the party completely. Alternatively, we could try to change their intentions by addressing why they hate us. We could also remove their capability to harm us, such as removing their financing or destroying their weapons.
R: That sounds like a way to deal with terrorists.
B: Perhaps. On the vulnerability side, you could patch the weakness directly. You could implement access control or other countermeasures to limit the ability of intruders to exploit the vulnerability. All of these factors decrease the vulnerability rating.
R: You're so smart Batman.
B: Thank you. On the cost side, we could completely replicate the Hall of Justice computer and host it off-site. While exploitation of the Hall of Justice computer would still be devastating, by implementing redundancy we could lessen the cost of replacing a damaged Hall of Justice computer.
R: Thanks Batman. You've really helped me understand risk!
B: You're welcome Robin. I hear the Bat Phone ringing -- to the Bat Poles!

Note: Multiplying numbers together, without any measurement or rank, isn't exactly the "science" one would like to see in risk assessment. The purpose of this exercise is to discuss definitions and show how breaking out individual components of risk (i.e., threat, vulnerability, and asset cost) helps us think about the problem. This is obviously a naive exercise, so I prefer to focus attention on the definitions and their translation into a fictional case study. https://taosecurity.blogspot.com/2003/10/dynamic-duo-discuss-digital-risk.html Commentary This remains one of my favorite posts, despite the use of numbers assigned to different parts of the risk equation. It's important because it properly differentiates among terms that are still conflated, confused, and misused -- risk, threat, vulnerability, and the like.
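Robin's arithmetic can be sketched in a few lines of Python. This is purely a toy, matching the note's caveat that multiplying unitless 1-5 rankings isn't science; the function name and range check are mine, not part of the original post.

```python
# Toy version of the post's naive risk equation:
# risk = threat x vulnerability x asset cost, each ranked 1 to 5.
def naive_risk(threat: int, vulnerability: int, cost: int) -> int:
    """Multiply 1-5 rankings; illustrative only, not a real methodology."""
    for score in (threat, vulnerability, cost):
        if not 1 <= score <= 5:
            raise ValueError("each ranking must fall between 1 and 5")
    return threat * vulnerability * cost

# Robin's Hall of Justice computer: threat 4 (Brainiac could write an
# exploit), vulnerability 5 (unpatched OpenSSH), cost 5 (critical asset).
print(naive_risk(4, 5, 5))  # 100 -- "That's immense!"
```

As the note says, the value of the exercise is the decomposition, not the product: lowering any one factor (deterring the threat, patching the vulnerability, replicating the asset) lowers the result.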

Calculating Security ROI Is a Waste of Time Sunday, April 18, 2004 I was pleased to read Infosec Economics by Lawrence Gordon and Robert Richardson in the 1 Apr 04 issue of Network Computing magazine. This duo says: "ROI (or bang for the buck) can't be applied perfectly to information security because often the return on information security purchases and deployments is intangible. Sure, companies invest in some solutions that offer benefits beyond security--faster network throughput in a new router that supports VPNs, for example--and they can calculate the ROI of these indirect benefits. But security requires factoring in the expectation of loss." I've been lucky to have never been tasked with calculating security's "return on investment," because I would have told my supervisor the answer is zero. There is no return to be made on security, because security is a loss avoidance and loss mitigation measure. Security is a way to deal with risk, which is the probability of loss. (I dealt with these definitions in Oct 03.) "Investing" in security is not like investing in a more efficient metal-bending machine or sending an employee to a training class. Donald Trump does not receive any return on the investment he makes in bodyguards. All he does is provide a means to lessen the probability of bodily harm. He is not a more efficient businessman as a result of having bodyguards. Obviously people value security, but it must be balanced by the threats one faces and the consequences of loss. Presidential candidates only receive Secret Service protection once they appear to be their party's nominee. Private citizens do not usually employ bodyguards. We make these decisions all the time, but because digital security is an art with opaque threats, we have trouble choosing the appropriate level of security for our networks. Those who perform network security monitoring are more aware of these threats
than the average CISO. NSM operators possess network awareness, thanks to the sorts of information they collect. Economists have appreciated this fact for years. It looks like the 2004 CSI/FBI study will avoid ROI in favor of discussing net present value (NPV) and security as an externality. Stay tuned. https://taosecurity.blogspot.com/2004/04/calculating-security-roi-is-waste-of.html Commentary This appears to be my first post talking about the fact that there is no ROI for security, unless selling security is your business. I was amused to see a mention of Donald Trump, and then Presidential candidates, in a post from 2004!

Ripping Into ROI Friday, December 17, 2004 In April I wrote Calculating Security ROI Is a Waste of Time. The latest print issue of Information Security magazine features a story by Anne Saita that confirms my judgement: "If you find executives resisting your security suggestions, try simply removing the term 'ROI' from the conversation. 'ROI is no longer effective terminology to use in most security justifications,' says Paul Proctor, VP of security and risk strategies for META Group. [Paul is also author of the excellent book Practical Intrusion Detection, where he correctly said 'there is no such thing as a false positive.'] Executives, he says, interpret ROI as 'quantifiable financial return following investment.' Security professionals view it more like an insurance premium. The C-suite is also wary of the numbers security ROI calculators crunch. 'Bottom line is that most executives are frustrated and no longer interested in hearing this type of justification,' Proctor says. Instead, express a technology's or program's business value, cost/benefit analysis and risk assessment." Amen. https://taosecurity.blogspot.com/2004/12/ripping-into-roi-in-april-i-wrote.html Commentary Still Amen in 2020.

SANS Confuses Threats with Vulnerabilities Wednesday, January 26, 2005 In late 2003 I published Dynamic Duo Discuss Digital Risk. This was my light-hearted attempt to reinforce the distinction between a threat and a vulnerability. Specifically, a threat is a party with the capabilities and intentions to exploit a vulnerability in an asset. A vulnerability is a weakness in an asset that could lead to exploitation. An intruder (the threat) exploits a hole (the vulnerability) in Microsoft IIS to gain remote control of a Web server. In other words, threats exploit vulnerabilities. This is a simple concept, yet it is frequently confused by security prophets like Bruce Schneier in Beyond Fear. Now SANS is making the same mistake in the latest Incident Handler's Diary. In a posting to announce work on the upcoming SANS Top 20 List, the Diary calls the new report the "SANS CRITICAL INTERNET THREATS 2005" and says: "SANS Critical Internet Threats research is undertaken annually and provides the basis for the SANS 'Top 20' report. The 'Top 20' report describes the most serious internet security threats in detail, and provides the steps to identify and mitigate these threats." So, are we going to read a ranking of identified Romanian intruders, followed by Russian organized crime, Filipino virus writers, and then Zimbabwean foreign intelligence services? Will mitigation include prosecution, incarceration, and the like? Probably not, as the announcement continues: "The current 'Top 20' is broken into two complimentary [sic] yet distinct sections: - The 10 most critical vulnerabilities for Windows systems. - The 10 most critical vulnerabilities for UNIX and Linux systems." So now we're talking about vulnerabilities. That's what last year's
"Twenty Most Critical Internet Security Vulnerabilities" addressed. The announcement concludes: "The 2005 Top 20 will once again create the experts' consensus on threats - the result of a process that brings together security experts, leaders, researchers and visionaries... In addition to the Windows and UNIX vulnerabilities, this year's research will also focus on the 10 most severe vulnerabilities in the Cisco platforms." I sincerely hope at least one expert will clue in the announcement-writer concerning the difference between a threat and a vulnerability. Words matter! Update: While doing some research I found a 1999 report by the Navy's Center on Terrorism and Irregular Warfare called Cyberterror: Prospects and Implications. It says in footnote 11: "Vulnerability is not synonymous with threat. A vulnerability is a weakness in a system that may be exploited. A threat requires an actor with the motivation, resources, and intent to exploit a vulnerability." https://taosecurity.blogspot.com/2005/01/sans-confuses-threats-with.html Commentary This isn't complicated. I assume writers who make these simple mistakes think they are doing the reader a service by using a variety of words, as recommended by high school English teachers waving a copy of the Thesaurus. I believe the concept was best communicated in this slightly altered exchange in the original Kung Fu TV series, in the episode "An Eye for an Eye." An elderly former Confederate soldier asks Caine: "If I don't have a right to mix the terms vulnerability and threat, who does?" Caine replies: "No one." Ok, that's not really what happened, but it makes my point. (The original
was talking about revenge.)

Risk, Threat, and Vulnerability 101 Thursday, May 05, 2005 In my last entry I took some heat from an anonymous poster who seems to think I invent definitions of security terms. I thought it might be helpful to reference discussions of terms like risk, threat, and vulnerability in various documents readers would recognize. Let's start with NIST publication SP 800-30: Risk Management Guide for Information Technology Systems. In the text we read: "Risk is a function of the likelihood of a given threat-source's exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization. To determine the likelihood of a future adverse event, threats to an IT system must be analyzed in conjunction with the potential vulnerabilities and the controls in place for the IT system." The document outlines common threats: Natural Threats: Floods, earthquakes, tornadoes, landslides, avalanches, electrical storms, and other such events.

Human Threats: Events that are either enabled by or caused by human beings, such as unintentional acts (inadvertent data entry) or deliberate actions (network-based attacks, malicious software upload, unauthorized access to confidential information). Environmental Threats: Long-term power failure, pollution, chemicals, liquid leakage. I see no mention of software weaknesses or coding problems there. So how does NIST define a vulnerability? "Vulnerability: A flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised
(accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system's security policy." The NIST pub's threat-vulnerability pairings table makes the difference between the two terms very clear. SP 800-30 talks about how to perform a risk assessment. Part of the process is threat identification and vulnerability identification. Sources of threat data include "history of system attack, data from intelligence agencies, NIPC, OIG, FedCIRC, and mass media," while sources of vulnerability data are "reports from prior risk assessments, any audit comments, security requirements, and security test results." The end of SP 800-30 provides a glossary: Threat: The potential for a threat-source to exercise (accidentally trigger or intentionally exploit) a specific vulnerability. Threat-source: Either (1) intent and method targeted at the intentional exploitation of a vulnerability or (2) a situation and method that may accidentally trigger a vulnerability. Threat Analysis: The examination of threat-sources against system vulnerabilities to determine the threats for a particular system in a particular operational environment. Vulnerability: A flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system's security policy. For those of you in Microsoft-only shops, consider their take on the problem in The Security Risk Management Guide. Chapter 1 offers these definitions: Risk: The combination of the probability of an event and its consequence. (ISO Guide 73)

Risk management: The process of determining an acceptable level of risk, assessing the current level of risk, taking steps to reduce risk to the acceptable level, and maintaining that level of risk. Threat: A potential cause of an unwanted impact to a system or organization. (ISO 13335-1) Vulnerability: Any weakness, administrative process, or act or physical exposure that makes an information asset susceptible to exploit by a threat. Microsoft then offers separate appendices with common threats and vulnerabilities. Their threats include catastrophic incidents, mechanical failures, malicious persons, and non-malicious persons, all with examples. Microsoft's vulnerabilities include physical, natural, hardware, software, media, communications, and human. Microsoft clearly delineates between threats and vulnerabilities by breaking out these two concepts. I'd like to add that the comment on my earlier posting said I should look up "threat" at dictionary.com. I'd rather not think that "security professionals" use a dictionary as the source of their "professional" understanding of their terms. Still, I'll debate on those grounds. The poster wrote that dictionary.com delivers "something that is a source of danger" as its definition. Here is what that site actually says: An expression of an intention to inflict pain, injury, evil, or punishment. An indication of impending danger or harm. One that is regarded as a possible danger; a menace. Remember what we are debating here. I am concerned that so-called "security professionals" are mixing and matching the terms "threat" and "vulnerability" and "risk" to suit their fancy. Here's vulnerability, or actually "vulnerable":

Susceptible to physical or emotional injury. Susceptible to attack: "We are vulnerable both by water and land, without either fleet or army" (Alexander Hamilton). Open to censure or criticism; assailable. Liable to succumb, as to persuasion or temptation. You'll see both words are nouns. But -- a threat is a party, an actor, and a vulnerability is a condition, a weakness. Threats exploit vulnerabilities. Finally, risk: The possibility of suffering harm or loss; danger. Risk is also a noun, but it is a measure of possibility. These are three distinct terms. It is not my problem that I define them properly, in accordance with others who think clearly! I am not inventing any new terms. I'm using them correctly. I'd like to thank Gunnar Peterson for reminding me of the NIST and Microsoft docs. https://taosecurity.blogspot.com/2005/05/risk-threat-and-vulnerability-101-in.html Commentary TL;DR: I don't make up the definitions I use. I don't think cyber security is that special, so it's logical to apply terms that humans have used for hundreds of years to similar concepts.

Cool Site Unfortunately Miscategorizes Threats Friday, July 08, 2005 While chatting with Aaron Higbee of the SecureMe Blog yesterday, he mentioned a cool new site: Threats and Countermeasures. A majority of the contributors are Foundstone consultants and parent company McAfee is paying the bills. Anyone who's been reading my blog for a while knows of my linguistic crusade involving words in the standard risk equation, with risk being a product of threat, vulnerability, and asset value. (See Risk, Threat, and Vulnerability 101, OCTAVE Properly Distinguishes Between Threats and Vulnerabilities, SANS Confuses Threats with Vulnerabilities, and The Dynamic Duo Discuss Digital Risk.) How does the Threats and Countermeasures site match proper definitions? I don't see the word threat being used correctly here. "Default network appliance passwords" aren't threats; those are vulnerabilities. "Running unnecessary services" is a vulnerability, as is "weak security around scripting extensions." Perusing T&C, I don't see the term threat used properly. Most of the content described as "threats" are really attacks. The Cross Site Scripting page is a good example. All of the content listed under "Threats" are attacks or exploits. The content under "Attacks" appear to be specific examples of the material listed under "Threats". So what is going on here? Obviously the guys who put together Threats and Countermeasures are security experts. Besides their knowledge base, the site offers an impressive collection of blogs that I recommend reading. I think part of the problem is the warped view of threats promulgated by T&C owner Foundstone. It all began with the announcement of their so-
called Threat Correlation Module for the Foundstone "Enterprise Risk Solution" suite. Back in late 2003 when this announcement was made (and I was working for Foundstone), marketing folks realized the terms "vulnerability" and "vulnerability management" were no longer a way to differentiate a company in the market. Vulnerability management was becoming commoditized, so companies began pushing the terms "risk" (e.g., "Enterprise Risk Solution") and "threat." I was initially interested in being part of Foundstone's new Threat Intelligence team, supporting the Threat Correlation Module. I thought this would be a cool opportunity to deploy honeynets, interact with the "underground," and collect intelligence on the parties that conduct attacks. Instead I was told I would monitor disclosure sites -- BugTraq and the like -- and populate Foundstone's database with that information. At one point I was told that a "hole in OpenSSH" is a "threat," when clearly that is a vulnerability. Shortly after I realized Foundstone's view of "threat" was a new way to market vulnerability data, I left the company. This is not to say that Foundstone's product is bad. On the contrary, I think it is very powerful. The idea of correlating new vulnerability information against a database of enterprise assets, and measuring the risk to an organization, is excellent. It's just too bad the product and concept are misnamed. While it is difficult to misuse the term risk (risk being defined as the probability of suffering harm or loss), it is too easy to misuse "threat." As a reminder, a vulnerability is a weakness in an asset which could lead to exploitation. A threat is a party with the capabilities and intentions to exploit a vulnerability in an asset. With few exceptions, no security vendors deal with threats. There are only two ways to gather information on threats: passive interaction or active interaction.
Passive interaction means watching threats as they conduct reconnaissance, exploit targets, and pillage assets. Active interaction means communicating with the threats themselves, through email, voice, and other means. Two organizations I know that deal with threats in an unclassified
environment include The Honeynet Project and iDEFENSE. The former mainly learns about threats by watching them compromise honeynets, while the latter pursues and communicates with threats. Managed security monitoring providers who look for more than worms can also be considered threat-aware; examples include NetSec and LURHQ. I guess the "threat" concept is just too sexy for most security vendors to avoid. Even people who should know better, like Bruce Schneier, misuse the terms threat and vulnerability. (See my review of Beyond Fear; it's the second on that page.) Although I will probably be seen as stepping on the toes of smart security people, I will not stop pointing out when those important terms are misused. https://taosecurity.blogspot.com/2005/07/cool-site-unfortunatelymiscategorizes.html Commentary I believe this explains one reason why we have such sloppy terminology in security: the need for marketing teams to differentiate their companies. Back in late 2003 when this announcement was made (and I was working for Foundstone), marketing folks realized the terms "vulnerability" and "vulnerability management" were no longer a way to differentiate a company in the market. Vulnerability management was becoming commoditized, so companies began pushing the terms "risk" (e.g., "Enterprise Risk Solution") and "threat." I found the relevant text from my review of Bruce Schneier’s book Beyond Fear as well, and will conclude the commentary for this post with an excerpt from it: Beyond Fear is a good book, but don't turn to it for proper definitions of security terms. Steer clear of this book's misuse of the words "threat" and "risk..." Schneier introduces the term "threat" on p. 20 with this example: "Most people don't give any thought to securing their lunch in the
company refrigerator. Even though there's a threat of theft, it’s not a significant risk because attacks are rare and the potential loss just isn't a big deal. A rampant lunch thief in the company changes the equation; the threat remains the same, but the risk of theft increases." That's wrong; let's start with definitions (mine, based on intel experience - not the author's). A threat is a party with the capabilities and intentions to exploit a vulnerability in an asset. A vulnerability is a weakness in an asset which could lead to exploitation. Risk is the possibility of suffering harm or loss. It's a measure of danger. All of these terms were defined years ago by military intel and law enforcement types, especially those doing counter-terrorism. In the lunchroom example, nobody initially "secures" their lunch, even though their "assets" are held in a "vulnerable" (unlocked, unguarded) refrigerator. Why? There's no "threat" -- people have the capability to steal lunches but nobody has evil intentions. "Risk" of losing one's lunch is close to zero. Now, add the "rampant lunch thief." The threat is NOT "the same"; a threat now exists for the first time. The risk equation changes -- risk of loss is much higher. (Countermeasures like a guard can reduce the vulnerability and bring risk of loss closer to the original low level.) Another example of fuzzy thinking appears on p. 50. "Just because your home hasn't been broken into in decades doesn't mean that it's secure." Says who? If the threat the entire time was zero, the house was always perfectly secure. Vulnerabilities are but one part of the risk equation, which is Risk = Threat X Vulnerability X Cost of Asset. If any factor is zero, risk is zero. One quick final example appears on p. 238:

"The problem lies in the fact that the threat -- the potential damage -is enormous." Wrong! A threat is an agent, or party, who wants to and can inflict damage. "Threat" in this sentence should be "cost," meaning the replacement value of the assets at risk. A hint to the source of these errors appears on p. 82: "examining an asset and trying to imagine all the possible threats against that asset is sometimes called 'threat analysis' or 'risk analysis.' (The terms are not well defined in the security business, and they tend to be used interchangeably.)" Which security business? Counter-terrorism and intel folks know threat analysis is performed against groups with capabilities and intentions to harm American assets. Risk analysis calculates the potential for loss given a certain threat, an asset's vulnerabilities, and the value of that asset. It's the digital security community that's obscuring the definitions... Yes, this is semantics, but shouldn't a book by an expert set the record straight?

BBC News Understands Risk Thursday, August 25, 2005 This evening I watched a story on BBC News about the problem of bird flu. Here is the story broken down in proper risk assessment language. Two assets are at risk: human health and bird health. We'll concentrate on birds in this analysis. Healthy birds are the asset we wish to protect. The threat is wild migratory birds infected by bird flu. The threat uses an exploit, namely bird flu itself. The vulnerability possessed by the asset and exploited by the threat is lack of immunity to bird flu. A countermeasure to reduce the asset's exposure to the threat is keeping protected birds indoors, away from their wild counterparts. The risk is infection of domesticated birds by wild birds. All infected birds must be killed. The TV story I watched contained this quote by reporter Tom Heap: "The lesson learned from foot-and-mouth [disease, which ravaged Europe several years ago] is to do your best to keep the disease out, but assume that will fail. Be ready to tackle any outbreak to prevent an epidemic." Let's replace certain terms with the security counterparts: "The lesson learned from the last time we were compromised is to do your best to keep intruders out, but assume that will fail. Be ready to respond to any intrusion to prevent complete compromise of the organization."

This is the power of using proper terminology. Lessons from other scientific fields can be applied to our own problems, and we avoid reinventing the wheel. https://taosecurity.blogspot.com/2005/08/bbc-news-understands-risk-this-evening.html Commentary I'm working on this book during the Covid-19 pandemic. I've studiously avoided making comparisons with security, but this post shows how it could be done for bird flu.
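The term-substitution exercise in this post can be expressed literally as string replacement. The sketch below is my own illustration: the two quotes are from the post, and the mapping pairs are inferred by comparing them.

```python
# The post's analog-to-digital translation exercise, done mechanically.
# Mapping pairs are derived from the two versions of the quote in the post.
analog_to_security = {
    "foot-and-mouth": "the last time we were compromised",
    "the disease": "intruders",
    "tackle": "respond to",
    "outbreak": "intrusion",
    "an epidemic": "complete compromise of the organization",
}

lesson = ("The lesson learned from foot-and-mouth is to do your best to keep "
          "the disease out, but assume that will fail. Be ready to tackle "
          "any outbreak to prevent an epidemic.")

# Apply each substitution in order to produce the security version.
for analog, security in analog_to_security.items():
    lesson = lesson.replace(analog, security)
print(lesson)
```

The point of the post survives the mechanization: because the terminology maps cleanly, lessons from epidemiology transfer to incident response without changing the structure of the argument.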

Organizations Don't Remediate Threats Thursday, December 01, 2005 I noticed the following in a Qualys press release cited by SC Magazine: "'The Laws of Vulnerabilities research gives security managers and executives clear, statistical information that helps them make better informed decisions,' said Howard A. Schmidt, former cyber security advisor to the President. 'With automated attacks creating 85 percent of their damage within the first fifteen days, it is even more critical that organizations act quickly to identify and remediate threats.'" (emphasis added) Mr. Schmidt is not using the term threat properly here. "Organizations" cannot remediate "threats". The definition of remediate is "set straight or right; 'remedy these deficiencies'". The word "deficiencies" in the sample usage is a direct reference to vulnerabilities. The only way to remediate a threat would be to capture and/or incapacitate the party exploiting an asset. Assuming we can accept this stretch of the term, only law enforcement or the military could act against threats in this manner. Hence, (civilian) organizations don't "remediate threats." https://taosecurity.blogspot.com/2005/12/organizations-dont-remediate-threats-i.html Commentary I should have checked my blood pressure before editing this chapter. I can feel it rising as I encounter these posts. It's time to try deep breathing exercises.

Return on Security Investment Wednesday, April 26, 2006 Just today I mentioned that there is no such thing as return on security investment (ROSI). I was saying this two years ago. As I was reviewing my notes, I remembered one true case of ROSI: the film Road House. If you've never seen it, you're in for a treat. It's amazing that this masterpiece is only separated by four years from Swayze's other classic, Red Dawn. (Best quote from Red Dawn: A member of an elite paramilitary organization: "Eagle Scouts.") In Road House, Swayze plays a "cooler" -- a bouncer who cleans up unruly bars. He's hired to remove the riff raff from the "Double Deuce," a bar so rough the band is protected by a chicken wire fence! I personally would have hired Jackie Chan, but that's a story for another day. Swayze's character indeed fights his way through a variety of local toughs, in the process allowing classier and richer patrons to frequent the Double Deuce. The owner clearly sees a ROSI; the money he pays Swayze is certainly less than the amount he now receives from a more upscale establishment. Is there a lesson to be drawn for the digital security world? Notice the focus on threats. The Double Deuce owner didn't hire Swayze to build higher walls or cover windows with iron bars. Instead of addressing vulnerabilities, he sought threat removal. This is not a process the average company can implement; usually law enforcement and intelligence agencies have this power. I have heard the term "friendly force presence" being used within certain military circles. This seems to refer to keeping assessment teams on the lookout for indications of the adversary on our networks. This certainly works in the physical world, but it may be difficult to translate into the virtual one. One example: when I visited Ottawa recently, I stopped at a McDonald's to get a quick meal. The place was teeming with teenagers, most of whom
were just lounging around. I considered leaving because the place was so full. I saw a manager appear a few minutes after I arrived, and with him came a uniformed police officer. The officer had a word with one or two of the larger teens and suddenly the restaurant started to empty. Within five minutes hardly anyone was left, and no one under the age of 18. It was amazing. https://taosecurity.blogspot.com/2006/04/return-on-security-investment-just.html Commentary Just when I was losing faith in my ability to stay calm, I encounter a post about one of my favorite cult movies. Also, note these sentences: "I have heard the term 'friendly force presence' being used within certain military circles. This seems to refer to keeping assessment teams on the lookout for indications of the adversary on our networks." This was an early reference to the "hunter-killer" missions that I was hearing about when visiting areas north of me.

Risk Mitigation Thursday, April 27, 2006 If you've been following the last few days of posts, I've been thinking about security from a more general level. I've been wondering how we can mitigate risks in a digital world where the following features are appearing in nearly every digital device. Think about digital devices in your possession and see if you agree with this characterization of their development. Digital devices are increasingly: Autonomous: This means they act on their own, often without user confirmation. They are self-updating (downloading patches, firmware) and self-configuring (think zeroconf in IPv6). Users could potentially alter this behavior, but probably not without breaking functionality. Powerful: A cell phone is becoming as robust as a laptop. Almost any platform will be able to offer a shell to those who can solicit it. There is no way to prevent this development -- and would we really want to? Ubiquitous: Embedded devices are everywhere. You cannot buy a car without one. I expect my next big home appliance to have network connectivity. Users can't do much about some of these developments. Connected: Everything will be assigned an IPv4 (or soon an IPv6) address. Distance is seldom a problem. Every digital maniac is a few hops away. Complex: I am scared by the thought of running Windows Mobile on my next phone. Can I avoid it? Probably not. How many lines of code are running on that mini-PC -- I mean "phone" -- I'll be using? In my opinion, this digital world is increasingly resembling the analog one. In fact, those five attributes could describe people as easily as complex machines!

The key factor in this new world will not be static vulnerabilities, but dynamic threats. The number of opportunities for threats to play havoc will vastly dwarf the chances for defenders to address vulnerabilities. Think about how we deal with security in a typical city. I call it the "local police model." Police can never prevent all crimes, although they can try. Police more often respond to crimes. They proceed to track and jail criminals. By prosecuting criminals, the justice system removes threats. No one spends time or money putting bars on windows or replacing door locks in the average suburban neighborhood. Crime still happens, but society survives as long as the level of crime is acceptable. Why did a police model rise? Back in the caveman days, we lived in tribes. If you didn't belong to my tribe, I could beat you back with my club. As societies evolved, communication and ties between tribes prevented this simple model from working. More sophisticated threats with ingenious attacks (e.g., white collar crime) took advantage of these social ties. Guess what -- this is where we are now in the digital world. Once upon a time you might have been able to restrict access based on trusted IPs. Then you had to shut down ports that couldn't be shared. Now we do business with everyone, and I can't be sure that the Microsoft SMB/CIFS that I'm exchanging with a business partner is normal or malicious when I use a standard access control device. A threat-centric approach to security has served the analog world well enough. I think that is the only way to move forward as the digital world becomes as complex as the analog. One more thought: The number of assets continues to rise. The number of
vulnerabilities in those assets continues to rise. The number of threats continues to rise. The ability of security experts to apply countermeasures cannot keep pace with this world. Is it time for autonomous agents to work on behalf of "the good guys?" I am beginning to agree with Dave Aitel's idea of nematodes that act on behalf of human agents. It is becoming increasingly difficult for humans to even understand the digital environment. The only real way to know exploitation is not possible is for exploitation to be tried and then found to fail. Nematode agents may roam the network constantly testing intrusion scenarios and reporting their progress. Perhaps next-generation detection devices will monitor nematode activity. When they see another agent that is not a registered nematode exploit a target, that will be the sign that an intrusion has occurred. https://taosecurity.blogspot.com/2006/04/risk-mitigation-if-youve-been.html Commentary Here is my favorite part: "A threat-centric approach to security has served the analog world well enough. I think that is the only way to move forward as the digital world becomes as complex as the analog." I agree with this statement, and it explains why we need to be able to properly differentiate between threats and vulnerabilities. If you think you're taking a "threat-centric approach," and all you're doing is mitigating vulnerabilities, you're not actually doing anything about threats.

Three Threats Monday, May 29, 2006 I thought three examples of threats, with corresponding vulnerabilities, etc., might help convince those who doubt the proper use of these terms. Let's start with a mythical example: Achilles. I'll use Achilles' point of view. Risk: Death of Achilles. Asset: Achilles' life. Vulnerability: Achilles' heel. (Achilles was invulnerable, save the portion of his heel where his mother held him while dipping him in the River Styx. This is the most popular version of the myth.) Threat: Paris, who shot Achilles in the heel with an arrow. Exploit: The arrow shot by Paris. Let's now look at an example from one of the best movies of all time: The Karate Kid. I'll use Daniel's point of view. Risk: Loss of tournament, thereby letting Johnny Lawrence win. Asset: Daniel LaRusso's fighting ability. Vulnerability: Leg injured in previous fight. Threat: Johnny Lawrence. Exploit: Strike to the injured leg. Man, that was funny. Here is the third example, from Star Wars. (Don't make me quote the episode -- this is geeky enough already.) I'll use the Empire's point of view.

Risk: Loss of the Death Star and Imperial prestige. Asset: The Death Star. Vulnerability: "An analysis of the plans provided by Princess Leia has demonstrated a weakness in the battle station... It's a small thermal exhaust port, right below the main port. The shaft leads directly to the reactor system. A precise hit will start a chain reaction which should destroy the station." Threat: X-Wings, e.g.: "[T]he Empire doesn't consider a small one-man fighter to be any threat, or they'd have a tighter defense." (Bravo Lucas!) Exploit: "The shaft is ray-shielded, so you'll have to use proton torpedoes." Getting the hang of it? Try representing the Star Wars example from the Rebellion's point of view. It's fun, really. https://taosecurity.blogspot.com/2006/05/three-threats-i-thought-threeexamples.html Commentary I really tried to communicate the elements of the risk equation in the 2000s.
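These five elements form a simple record. Here is a hypothetical Python sketch (the class, field names, and the Rebellion-side values are my own illustration, not from the post) that takes up the suggested exercise of recasting the Star Wars example from the Rebellion's point of view:

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    """One scenario in the risk equation: a threat uses an exploit
    against a vulnerability in an asset, creating risk."""
    risk: str
    asset: str
    vulnerability: str
    threat: str
    exploit: str

# The Star Wars example, recast from the Rebellion's point of view
# (illustrative values only).
rebellion_view = RiskScenario(
    risk="Loss of the Rebel fleet and the hidden base on Yavin 4",
    asset="The Rebel pilots, fighters, and base",
    vulnerability="The base's location is exposed once the Death Star arrives",
    threat="The Death Star and its crew",
    exploit="The Death Star's superlaser",
)

for field, value in vars(rebellion_view).items():
    print(f"{field.capitalize()}: {value}")
```

The point of the structure is that every element must be filled in; if the "threat" slot holds a vulnerability, the analysis is confused from the start.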

Security Is Still Loss Avoidance Wednesday, August 16, 2006 One of you (who wishes to remain anonymous) sent me a link to the story Value Made Visible in response to my Real Technology ROI post. Here is the CSO magazine core argument. [The] Value Protection [Metric] is [Bruce] Larson's attempt to overcome security's classic problem of seeming like nothing but a drain on the business... The basic Value Protection metric is a ratio that looks like this: Value Protection = (Normal Operations Cost ($) – Event Impact ($)) / Normal Operations Cost ($)... Larson's metric just subtracts the cost of security events from the normal cost of doing business, then divides by that same operations cost to get a ratio. I'm sure that's been published somewhere before, or at least something very similar. I'm too lazy to check those CISSP books I never open. Here are some examples from the same article: Whether it's based on actual events or potential futures, the Value Protection ratio gives security officers a real metric to present and it gives executives a simple, clean picture of security investments' relative value. Here are three examples of how it could be used by an organization with a normal operations cost (N) of $1 million: Example 1. A medium-level virus outbreak costs $70,000 across all operations. VP = (1,000,000 – 70,000) / 1,000,000 = 0.93 Larson calls a 0.9 ratio "exceptional." A Value Protection ratio of

0.93 probably doesn't require more investment or lowering of event impact, especially if trying to increase the ratio would take away from investment in other areas where Value Protection isn't as strong. Example 2. An insider fraud attack causes $500,000 in response and recovery costs, lawyers' fees, insurance costs and unrecouped stolen goods. VP = (1,000,000 – 500,000) / 1,000,000 = 0.5 In rare instances where high risk is tolerable, such as a high-level R&D project, protecting half the value of an investment might be acceptable. But in most cases, value protection of 0.5 is "usually pretty bad," Larson says. And that makes sense: It means your security is a 50/50 proposition. Example 3. A network vulnerability leads to customers' personal data being stolen, resulting in $1.2 million in damages from response and recovery, lawyers' fees, government fines and other ancillary costs, as well as a significant drop in stock value after negative publicity. VP = (1,000,000 – 1,200,000) / 1,000,000 = -0.2 Negative ratios are a clear sign that an organization doesn't have the proper information security defenses in place, as it means that security events have or potentially will cost more than operations is spending to stop them. Immediate steps should be taken to fortify the information security controls. Ok, this is all very interesting. However, it doesn't change the fact that security is still loss avoidance. Mr. Larson is not calculating any return on security investment. His American Water company is not any more productive, in the absence of threats, when he spends money on security. When threats are present, security helps American Water serve its customers. American Water can't serve any more customers because of security.
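The three Value Protection examples above reduce to one line of arithmetic; a minimal sketch (the function name is mine):

```python
def value_protection(normal_ops_cost: float, event_impact: float) -> float:
    """Larson's Value Protection ratio: (N - event impact) / N."""
    return (normal_ops_cost - event_impact) / normal_ops_cost

# The article's three examples, with N = $1 million.
print(value_protection(1_000_000, 70_000))     # virus outbreak: 0.93
print(value_protection(1_000_000, 500_000))    # insider fraud: 0.5
print(value_protection(1_000_000, 1_200_000))  # data breach: -0.2
```

Note that the ratio goes negative exactly when event impact exceeds normal operations cost, which is why a negative value signals inadequate defenses.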

One last excerpt: This "VP" is either being nice or he doesn't understand business very well: "It adds value; we're very supportive of it," says Steve Schmitt, American Water's vice president of operations, of Larson's Value Protection metric. Sorry Mr Schmitt, but your American Water operations create value. Security spending helps avoid loss of that value. This is not to say that I oppose security spending. How could I -- I am a security professional! However, I also recognize that security is like insurance. You cannot buy insurance and as a result have your business be more productive or profitable. https://taosecurity.blogspot.com/2006/08/security-is-still-lossavoidance.html Commentary Salty, but true.

No ROI for Security or Legal Saturday, August 19, 2006 Last night I watched a Dateline NBC story about the fast food industry's defense against lawsuits alleging their products cause obesity. This reminded me that these corporate legal teams are similar to corporate security teams. No one is going to increase funding for their legal department and see improved productivity or higher profits. Yet, legal is still a necessary requirement for doing business -- especially for staying in business. You may remember this earlier comment: Marcus [Ranum] said "security ROI is dead" and "legislation has made security a cost." He predicted "we will be competing with legal for money (or working for them) in the next five to ten years." To hammer the point Marcus then said "there never was a security ROI." I'd enjoy hearing how corporate lawyers justify their budgets. https://taosecurity.blogspot.com/2006/08/no-roi-for-security-or-legal.html Commentary I remember stories of legal teams winning cases that resulted in money going to their organizations, and in that sense they became profit centers.

Are the Questions Sound? Wednesday, July 11, 2007 Dan Geer, second of the three wise men, was kind enough to share slides from his Measuring Security USENIX class. If I were not teaching at USENIX I would be in Dan's class. One of the slides bothered me -- not for what Dan said, but for what was said to him. The slide is reproduced above, and the notes below: These are precisely the questions that any CFO would want to know and we are not in a good position to answer. The present author was confronted with this list, exactly as it is, by the CISO of a major Wall Street bank with the preface “Are you security people so stupid that you cannot tell me....” This particular CISO came from management audit and therefore was also saying that were he in any other part of the bank, bond portfolios, derivative pricing, equity trading strategies, etc., he would be able to answer such questions to five digit accuracy. The questions are sound. I think Dan is giving the CISO too much credit. I think the questions are "semi-sound," and I think the CISO is the stupid one for using such a negative word to describe one of my Three Wise Men. I'd like to mention several factors which make comparing the world of finance different from the world of digital security. I am recording these because they are more likely the kernel for future developed ideas, but I think they are legitimate points. Business: Digital security is not a line of business. No one practices security to make money. Security is not a productive endeavor; security risk is essentially a tax instantiated by the evil capabilities and intentions of threats. Because security is not a line of business, the performance incentives are not the same as a line of business. Security has no ROI; proper business

initiatives do. Only security vendors make money from security. Accumulation: Digital security, as defined by preserving the confidentiality, integrity, and availability of information, cannot be accumulated. One cannot tap a reserve of security and later replenish it. Data that is exposed to the public Internet can seldom be quashed; data that has been corrupted at time of critical use cannot be changed later, thereby changing the past; and data that was not available at a critical time cannot be made available later, thereby changing the past. This is not the same with capital (i.e., money). Financial institutions are regulated and operated according to capitalization standards that dictate certain amounts of money to cover potential adverse events. Therefore, money can be stored as a counter to riskier behavior or decreased when pursuing less risky activities. Money at a single point in time is also homogeneous; the first dollar of $100 is equally valuable as the hundredth dollar of $100. Information resources are not homogeneous. Assumptions: Assumptions make financial "five digit accuracy" possible. Consider the assumptions made by the Black-Scholes model, courtesy of Wikipedia, used to price options: The price of the underlying instrument S_t follows a geometric Brownian motion with constant drift μ and volatility σ: dS_t = μ S_t dt + σ S_t dW_t. It is possible to short sell the underlying stock. There are no arbitrage opportunities. Trading in the stock is continuous. There are no transaction costs or taxes. All securities are perfectly divisible (e.g. it is possible to buy 1/100th of a share). It is possible to borrow and lend cash at a constant risk-free interest

rate. The stock does not pay a dividend (see below for extensions to handle dividend payments). The specifics of this equation are not important for this discussion, although those of you who also studied some economics may find plenty of ways to criticize it. (Remember the authors won the Nobel Prize for this equation and paper!) Consider what you could define if digital security practitioners were able to make such assumptions. Accuracy: I just said "assumptions make five digit accuracy possible." This isn't really true. If financial five digit accuracy were possible, no markets could be sustained. Simply put, markets exist because two sides agree to a trade. One side sees the world in one way, and the other sees it differently. (This is why market-makers exist on trading floors. When too many traders see the world the same, market-makers provide liquidity to permit trading.) If trading houses all figure out how to make money with five digit accuracy, their advantage is not going to be sustained because no one will want to trade with anyone else -- they will all want to take the same positions. These are a few thoughts. It would be nice to hear from people with digital security and financial trading experience to provide commentary. Thank you. https://taosecurity.blogspot.com/2007/07/are-questions-sound.html Commentary I always laugh at this post, because at the very time the “CISO of a major Wall Street bank” was calling Dr. Geer and other “security people” “stupid,” his entire industry was on the verge of melting down in the great recession of 2007-2009. It would probably be too much to wish that this CISO worked for Bear Stearns. Of course, he or she could just as easily have worked for a financial institution that was “too big to fail,” and is still smiling about their bailout. The bottom line is that if you are implying that Dr. Geer is “stupid,” you’re the idiot in the room.

Bank Robber Demonstrates Threat Models Saturday, July 14, 2007 This evening I watched part of a show called American Greed that discussed the Wheaton Bandit, an armed bank robber who last struck in December 2006 and was never apprehended. Several aspects of the story struck me. First, this criminal struck 16 times in less than five years, only once being repelled when he was detected en route to a bank and locked out by vigilant tellers. Does a criminal who continues to strike without being identified and apprehended bear resemblance to cyber criminals? Second, the banks did not respond by posting guards on site. Guards tend to aggravate the problem and people get hurt, according to the experts cited on the show. Instead, the banks posted greeters right at the front door to say hello to everyone entering the bank. I've noticed this at my own local branch within the last year, but thought it was an attempt to duplicate Wal-Mart; apparently not. Because the robber also disguises himself with a balaclava, the bank banned customers from wearing hoods, sunglasses, and other clothing that obscures the face in the bank. Third, improved monitoring is helping police profile the criminal. Old bank cameras used tape that was continuously overwritten, resulting in very grainy imagery. Newer monitoring systems are digital and pick up many details of the crime. For example, looking at recent footage the cops noticed the robber "indexing" the gun by keeping his index finger away from the trigger, like we learned in the military or in law enforcement. They also perceived indications he wears light body armor while robbing banks. Finally, one of the more interesting aspects of the show was the reference to a DoJ Bank Robbery document. It contains a chart titled Distinguishing Professional and Amateur Bank Robbers, reproduced as a linked thumbnail at left.

I understand the purpose of the document; it's a way to determine if the robber is an amateur or a professional. This made me consider some recent posts like Threat Model vs Attack Model. A threat model describes the capabilities and intentions of either a professional bank robber or an amateur bank robber. An attack model describes how a robber specifically steals money from a particular bank. Threat models are more generic than attack models, because attack models depend on the nature of the victim. Watching this show reminded me that security is not a new problem. Who has been doing security the longest? The answer is: physical security operators. If we digital security newbies don't want to keep reinventing the wheel, it might make sense to learn more from the physical side of the house. I think convergence of some kind is coming, at least at some level of the management hierarchy. If you argue that the two disciplines are too different to be jointly managed, consider the US military. The key warfighting elements are the Unified Combatant Commands, which can be headed by just about any service member. Some commands were usually led by a general from a certain service, like the Air Force for TRANSCOM, but those arrangements are being unravelled. Despite the huge Army occupation in the Middle East, for example, the next CENTCOM leader is a Naval officer, and so is the next Chairman of the Joint Chiefs. Even the new head of SOCOM is a Navy officer. This amazes me. When I first learned about Joint warfare, the joke was "How do you spell Joint? A-R-M-Y." Now it's N-A-V-Y. For more on this phenomenon, please read Army Brass Losing Influence, which I just found after writing this post. Perhaps we should look to a joint security structure to combine the physical and digital worlds? That would require joint conferences and similar training opportunities. Some history books with lessons for each side would be helpful too. 
https://taosecurity.blogspot.com/2007/07/bank-robber-demonstratesthreat-models.html Commentary

Banks may not fully understand risk all the time -- and who does -- but their approach to physical security is always illuminating and worth studying.

No ROI? No Problem Saturday, July 14, 2007 I continue to be surprised by the confusion surrounding the term Return on Investment (ROI). The Wikipedia entry for Rate of Return treats ROI as a synonym, so it's a good place to go if you want to understand ROI as anyone who's taken introductory corporate finance understands it. In its simplest form, ROI is a mechanism used to choose projects. For example, assume you have $1000 in assets to allocate to one of three projects, all of which have the same time period and risk. Invest $1000. Project yields $900 (-10% ROI) Invest $1000. Project yields $1000 (0% ROI) Invest $1000. Project yields $1100 (10% ROI) Clearly, the business should pursue project 3. Businesspeople make decisions using this sort of mindset. I am no stranger to this world. Consider this example from my consulting past, where I have to choose which engagement to accept for the next week. Spend $1000 on travel, meals, and other expenses. Project pays $900 (-10% ROI) Spend $1000 on travel, meals, and other expenses. Project pays $1000 (0% ROI) Spend $1000 on travel, meals, and other expenses. Project pays $1100 (10% ROI) Obviously this is the same example as before, but using a real-world scenario.
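The project-selection logic in the examples above can be sketched in a few lines (the helper function is my own illustration):

```python
def roi(investment: float, yield_amount: float) -> float:
    """Simple-form ROI: net gain divided by the amount invested."""
    return (yield_amount - investment) / investment

# The three consulting engagements from the example above,
# each requiring $1000 in expenses.
projects = {"A": 900, "B": 1000, "C": 1100}
returns = {name: roi(1000, pays) for name, pays in projects.items()}
print(returns)  # {'A': -0.1, 'B': 0.0, 'C': 0.1}

# Choose the project with the highest ROI.
best = max(returns, key=returns.get)
print(best)  # C
```

The mechanism only makes sense when the project actually yields new money; that is the point the rest of this post develops.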

The problem the "return on security investment" (ROSI) crowd has is they equate savings with return. The key principle to understand is that wealth preservation (saving) is not the same as wealth creation (return). Assume I am required to obtain a license to perform consulting. If I buy the license before 1 January it costs $500. If I don't meet that deadline the license costs $1000. Therefore, if I buy the license before 1 January, I have avoided a $500 loss. I have not earned $500 as a result of this "project." I am not $500 richer. I essentially bought the license "on sale" compared to the post-1 January price. Does this mean buying the license before 1 January is a dumb idea because I am not any richer? Of course not! It's a smart idea to avoid losses when the cost of avoiding that loss is equal to or less than the value of the asset being protected. For example, what if I had to pay $600 to get a plane ticket from a faraway location to appear in person in my county to buy the license before 1 January? In that case, I should just pay the $1000 license fee later. For a $500 plane ticket, the outcome doesn't matter either way. For a $400 plane ticket, I should fly and appear in person. Again, in none of these situations am I actually richer. No wealth is being created, only preserved. There is no ROI, only potential savings. What if I chose to avoid paying for a license altogether, hoping no one catches me? I've saved even more money -- $500 compared to the pre-1 January price, and $1000 compared to the post-1 January price. This is where the situation becomes more interesting, and this is where subjectivity usually enters the picture concerning expected outcomes. Let's get back to ROI. The major problem the ROSI crowd has is they are trying to speak the language of their managers who select projects based on ROI. There is no problem with selecting projects based on ROI, if the project is a wealth creation project and not a wealth preservation project. 
Security managers should be unafraid to avoid using the term ROI, and instead say "My project will cost $1,000 but save the company $10,000." Saving money / wealth preservation / loss avoidance is good.

Another problem most security managers will encounter is their inability to definitively say that their project will indeed save a certain amount of money. This is not the case for licensing deals, e.g., "Switching from Vendor X's SSL VPN to Vendor Y's SSL VPN will save $10,000" because the outcome is certain, breach of contract notwithstanding. Certainty or even approximate probability is a huge hurdle for many security projects because of several factors: asset value is often undetermined (in some cases, assets themselves are not even inventoried); vulnerabilities in assets are unknown, because new flaws are discovered every day; and threats cannot be properly assessed, because they are unpredictable and creative. As a result, risk assessment is largely guesswork. Guesswork means the savings can be just about anything the security manager chooses to report. If you look at my older posts on return on security investment you'll see some more advice on how to make your case for security spending without using the term "ROI". It should be clear by now that ROSI or security ROI is nothing more than warping a defined business term to get attention during budget meetings. I saw the exact same problem in the Air Force. At one point those who flew combat missions were called "operators." Once Information Operations came into vogue, that community wanted to be called "operators" too. At one point a directive came down that intel folks like me were now "operators," just like combat pilots. That lasted about 10 minutes, because suddenly the combat pilots started using the term "trigger-pullers." "Fine," they thought. "Call yourselves operators. We pull triggers." Back to square one. The bottom line is that security saves money; it does not create money. https://taosecurity.blogspot.com/2007/07/no-roi-no-problem.html

Commentary TL;DR: “The key principle to understand is that wealth preservation (saving) is not the same as wealth creation (return). Assume I am required to obtain a license to perform consulting. If I buy the license before 1 January it costs $500. If I don't meet that deadline the license costs $1000. Therefore, if I buy the license before 1 January, I have avoided a $500 loss. I have not earned $500 as a result of this "project." I am not $500 richer. I essentially bought the license "on sale" compared to the post-1 January price.” Amen.

Security ROI Revisited Sunday, July 15, 2007 One of you responded to my No ROI? No Problem post with this question: Just read your ROI blog, which I found very interesting. ROI is something I've always tried to put my finger on, and you present an interesting approach. Question: Is it not possible to 'make' money with security, or does it still come down to savings? Example: - A hospital implements a security system that allows doctors to access patient data from anywhere. Now, instead of doing 10 patients a day they can do (and charge) 13 patients a day. I'm not trying to sharp shoot you in anyway, I'm just trying to better understand the economics. This is an excellent question. This is exactly the same concept as I stated in my August 2006 post Real Technology ROI. In this case, doctors are more productive at accessing patient data by virtue of a remote access technology. This is like installing radios for faster dispatch in taxis. In both cases security is not causing a productivity gain but security can be reasonably expected as a property of a properly designed technology. In other words, it's the remote access technology that provides a productivity gain, and doctors should expect that remote access to be "secure." In a taxi, the radio technology provides a productivity gain, and drivers should expect that system to be "secure." I'm sure that's not enough to convince some of you out there. My point is you must identify the activity that increases productivity -- and security will not be it. Don't believe me? Imagine the remote access technology is a marvel of security. It has strong encryption, authorization, authentication, accountability, endpoint control, whatever you could possibly imagine to preserve the CIA triad. Now consider what happens if, for some reason,

doctors are less productive using this system. How could that happen? The system is secure! Maybe the doctors all decide to spend tons more time looking at patient records so their "throughput" declines. Who knows -- the point is that security had nothing to do with this result; it's the business activity that increases (or in this example, decreases) that determines ROI. What does this mean for security projects? They still don't have ROI. However, and this is a source of trouble and opportunities, security projects can be components of productivity enhancing projects that do increase ROI. This is why the Chief Technology Officer (CTO) can actually devise ROI for his/her projects. As a security person, you would probably have more success in budget meetings if you can tie your initiatives to ROI-producing CTO projects. Wait a minute, some of you are saying. How about this example: if a consumer can choose between two products (one that is "secure" and one that is not), won't choosing the "secure" model mean that security has a ROI, because the company selling the secure version might beat the competition? In this case, remember that the consumer is not buying security; the consumer is buying a product that performs some desired function, and security is an "enabler" (to use a popular term). If the two products are functionally equivalent and the same price, buying the "secure" version is a no-brainer because, even if the risk is exceptionally small, "protecting" against that risk is cost free. If the "secure" version is more expensive, now the consumer has to remember his/her CISSP stuff, like Annualized Rate of Occurrence (ARO) and Single Loss Expectancy (SLE) to devise an Annual Loss Expectancy (ALE), where ARO * SLE = ALE You then compare your ALE to the cost differential and decide if it's worth paying the extra amount for the "secure" product. 
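The ARO * SLE = ALE arithmetic can be sketched as follows (the numbers are illustrative assumptions of mine, not from the post):

```python
def annual_loss_expectancy(aro: float, sle: float) -> float:
    """ALE = Annualized Rate of Occurrence * Single Loss Expectancy."""
    return aro * sle

# Illustrative figures: a loss event expected once every four years
# (ARO = 0.25) costing $50,000 per occurrence (SLE).
ale = annual_loss_expectancy(0.25, 50_000)
print(ale)  # 12500.0

# Pay the premium for the "secure" product only if the extra cost is
# no more than the expected annual loss it prevents.
price_difference = 8_000
print(price_difference <= ale)  # True
```

As the post notes, in practice the ARO and SLE inputs are usually guesses, which is why the comparison is cleaner in a textbook than in a budget meeting.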
For those of you who still resist me, it's as simple as this: security is almost always concerned with stopping bad events. When you stop a bad event, you avoid a loss. Loss avoidance means savings, but no business can stay in business purely by saving money. If you don't understand that you will never be able to understand anything else about this subject. You should

also not run a business. The reason why you should pursue projects that save money is that those projects free resources to be diverted to projects with real ROI. Those of you who have studied some economics may see I am getting close to Frédéric Bastiat's Broken Window fallacy, briefly described by Russell Roberts thus: Bastiat used the example of a broken window. Repairing the window stimulates the glazier’s pocketbook. But unseen is the loss of whatever would have been done with the money instead of replacing the window. Perhaps the one who lost the window would have bought a pair of shoes. Or invested it in a new business. Or merely enjoyed the peace of mind that comes from having cash on hand. Spending money on security breaches is repairing a broken window. Spending money to prevent security breaches is like hiring a guard to try to prevent a broken window. In either case, it would have been more productive to be able to invest either amount of money, and a wise investment would have had a positive ROI. This is why we do not spend time breaking and repairing windows for a living in rich economies. However, like all my posts on this subject, I am not trying to argue against security. I am a security person, obviously. Rather, I am arguing against those who warp security to fit their own agenda or the distorted worldview of their management. For an alternative way to talk to management about security, I recommend returning to my post Risk-Based Security is the Emperor's New Clothes where I cite Donn Parker. https://taosecurity.blogspot.com/2007/07/security-roi-revisited.html Commentary TL;DR: “[S]ecurity is almost always concerned with stopping bad events. When you stop a bad event, you avoid a loss. Loss avoidance means savings, but no business can stay in business purely by saving money... Spending money on security breaches is repairing a broken window.

Spending money to prevent security breaches is like hiring a guard to try to prevent a broken window. In either case, it would have been more productive to be able to invest either amount of money, and a wise investment would have had a positive ROI. This is why we do not spend time breaking and repairing windows for a living in rich economies.” I was not the first person to apply the broken window fallacy to information security, and the concept remains relevant today.

Glutton for ROI Punishment Friday, July 20, 2007 My previous posts No ROI? No Problem and Security ROI Revisited have been smash hits. The emphasis here is on "smash." At the risk of being branded a glutton for ROI punishment, I present one final scenario to convey my thoughts on this topic. I believe there may be some room for common ground. I am only concerned with the Truth as well as we humans can perceive it. With that, once more unto the breach. It's 1992. Happy Corp. is a collaborative advertisement writing company. A team of writers develop advertisement scripts for TV. Writers exchange ideas and such via hard copy before finalizing their product. Using these methods the company creates an average of 100 advertisement scripts per month, selling them for $1,000 each or a total of $100,000 per month. Happy's IT group proposes Project A. Project A will cost $10,000 to deploy and $1,000 per month to sustain. Project A will provide Happy with email accounts for all writers. As a result of implementing Project A, Happy now creates an average of 120 scripts per month. The extra income from these scripts results in recouping the deployment cost of Project A rapidly, and the additional 20 scripts per month is almost all profit (minus the new $1,000 per month charge for email). Now it's 1993, and Happy Corp. faces a menace -- spam. Reviewing and deleting spam emails lowers Happy's productivity by wasting writer time. Instead of creating 120 scripts per month, Happy's writers can only produce 110 scripts per month. Happy's security group proposes Project B. Project B will cost $10,000 to deploy and $1,000 per month to sustain. (Project B does not replace Project A.) Project B will filter Happy's email to eliminate spam. As a result of implementing Project B, Happy returns to creating an average of 120 scripts per month. Profits have increased but they do not return to the level enjoyed by the pre-spam days, due to the sustainment cost of Project B.
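The Happy Corp. scenario can be checked with a simple monthly cash-flow comparison (a sketch; the variable names are mine, and one-time deployment costs are ignored):

```python
SCRIPT_PRICE = 1_000  # revenue per advertisement script

def monthly_net(scripts: int, it_costs: int) -> int:
    """Monthly revenue from scripts sold, minus recurring IT costs."""
    return scripts * SCRIPT_PRICE - it_costs

baseline    = monthly_net(100, 0)      # 1992: hard copy only
with_email  = monthly_net(120, 1_000)  # Project A: email deployed
with_spam   = monthly_net(110, 1_000)  # 1993: spam erodes productivity
with_filter = monthly_net(120, 2_000)  # Project B added on top of A

print(baseline, with_email, with_spam, with_filter)
# 100000 119000 109000 118000
```

The numbers match the narrative: Project A lifts net income well above the baseline (wealth creation), while Project B only claws back the spam loss and never returns income to the pre-spam $119,000 (loss avoidance).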

I would say Project A provides a true return on investment. I would say Project B avoids loss, specifically the productivity lost by wasting time deleting spam. I could see how others could make an argument that Project B is a productivity booster, since it does return productivity to the levels seen in the pre-spam days. That is the common ground I hope to achieve with this explanation. I do not consider that a true productivity gain because the productivity is created by the email system Project A, but I can accept others see this differently. I think this example addresses the single biggest problem I have seen in so-called "security ROI" proposals: the failure to tie the proposed security project to a revenue-generating business venture. In short, security for "security's sake" cannot be justified. In my scenario I am specifically stating that the company is losing revenue of 10 scripts per month because of security concerns, i.e., spam. By spending money on spam filtering, that loss can be avoided. Assuming the overall cost of Project B is less than or equivalent to the revenue of those lost 10 scripts per month, implementing Project B makes financial sense. What do you think? https://taosecurity.blogspot.com/2007/07/glutton-for-roi-punishment.html Commentary I tried to reach out to critics here. It was probably a waste of time, but one never knows!

Is Digital Security "Risk" a Knightian Uncertainty? Saturday, September 01, 2007 I've subscribed to the Economist for over ten years, and it's been worth every penny. Today I noticed the following in an article called The Long and Short of It: The second paper suggests that traders face “Knightian uncertainty”, or risks that cannot be measured. Hmm, what is this "Knightian uncertainty"? I found the following excerpt from Risk, Uncertainty and Expected Utility: Much has been made of Frank H. Knight's (1921: p.20, Ch.7) famous distinction between "risk" and "uncertainty". In Knight's interpretation, "risk" refers to situations where the decision-maker can assign mathematical probabilities to the randomness which he is faced with. In contrast, Knight's "uncertainty" refers to situations when this randomness "cannot" be expressed in terms of specific mathematical probabilities. As John Maynard Keynes was later to express it: "By `uncertain' knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty...The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence... About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know." (J.M. Keynes, 1937) Nonetheless, many economists dispute this distinction, arguing that Knightian risk and uncertainty are one and the same thing. For instance, they argue that in Knightian uncertainty, the problem is that the agent

does not assign probabilities, and not that she actually cannot, i.e. that uncertainty is really an epistemological and not an ontological problem, a problem of "knowledge" of the relevant probabilities, not of their "existence". Going in the other direction, some economists argue that there are actually no probabilities out there to be "known" because probabilities are really only "beliefs". In other words, probabilities are merely subjectively-assigned expressions of beliefs and have no necessary connection to the true randomness of the world (if it is random at all!). Nonetheless, some economists, particularly Post Keynesians such as G.L.S. Shackle (1949, 1961, 1979) and Paul Davidson (1982, 1991) have argued that Knight's distinction is crucial. In particular, they argue that Knightian "uncertainty" may be the only relevant form of randomness for economics - especially when that is tied up with the issue of time and information. In contrast, situations of Knightian "risk" are only possible in some very contrived and controlled scenarios when the alternatives are clear and experiments can conceivably be repeated -- such as in established gambling halls. Knightian risk, they argue, has no connection to the murkier randomness of the "real world" that economic decision-makers usually face: where the situation is usually a unique and unprecedented one and the alternatives are not really all known or understood. In these situations, mathematical probability assignments usually cannot be made. Thus, decision rules in the face of uncertainty ought to be considered different from conventional expected utility. The Wikipedia entry on Uncertainty is also interesting. That is really fascinating. It sounds like a school of thought believes that the real world may be too complex to model. It also sounds like stepping foot into the world of appreciating uncertainty is a huge undertaking, given the amount of prior research. 
https://taosecurity.blogspot.com/2007/09/is-digital-security-risk-knightian.html Commentary It sounds like Knightian "risk" is still applicable to the security world today.

Vulnerabilities in Perspective Friday, July 18, 2008 It's been nine days since Dan Kaminsky publicized his DNS discovery. Since then, we've seen a Blackberry vulnerability which can be exploited by a malicious .pdf, a Linux kernel flaw which can be remotely exploited to gain root access, Kris Kaspersky promising to present Remote Code Execution Through Intel CPU Bugs this fall, and David Litchfield reporting "a flaw that, when exploited, allows an unauthenticated attacker on the Internet to gain full control of a backend Oracle database server via the front end web server." That sounds like a pretty bad week! It's bad if you think of R only in terms of V and forget about T and A. What do I mean? Remember the simplistic risk equation, which says Risk = Vulnerability X Threat X Asset value. Those vulnerabilities are all fairly big V's, some bigger than others depending on the intruder's goal. However, R depends on the values of T and A. If there's no T, then R is zero. Verizon Business understood this in their post DNS Vulnerability Is Important, but There’s No Reason to Panic: Cache poisoning attacks are almost as old as the DNS system itself. Enterprises already protect and monitor their DNS systems to prevent and detect cache-poisoning attacks. There has been no increase in reports of cache poisoning attacks and no reports of attacks on this specific vulnerability... The Internet is not at risk. Even if we started seeing attacks immediately, the reader, Verizon Business, and security and network professionals the world over exist to make systems work and beat the outlaws. We’re problem-solvers. If, or when, this becomes a practical versus theoretical problem, we’ll put our heads together and solve it. We shouldn’t lose our heads now. However, this doesn’t mean we discount the potential severity of this vulnerability. We just believe it deserves a place on our To-Do lists. We do

not, at this point, need to work nights and weekends, skip meals or break dates any more than we already do. And while important, this isn’t enough of an excuse to escape next Monday’s budget meeting. It also doesn’t mean we believe someone would be silly to have already patched and to be very concerned about this issue. Every enterprise must make their own risk management decisions. This is our recommendation to our customers. In February of 2002, we advised customers to fix their SNMP instances due to the BER issue discovered by Oulu University, but there have been no widespread attacks on those vulnerabilities for nearly six years now. We were overly cautious. We also said the Debian RNG issue was unlikely to be the target of near-term attacks and recommended routine maintenance or 90 days to update. So far, it appears we are right on target. There has been no increase in reports of cache poisoning attempts, and none that try to exploit this vulnerability. As such, the threat and the risk are unchanged. I think the mention of the 2002 SNMP fiasco is spot on. A lot of us had to deal with people running around thinking the end of the world had arrived because everything runs SNMP, and everything is vulnerable. It turns out hardly anything happened at all, and we were watching for it. Halvar Flake was also right when he said: I personally think we've seen much worse problems than this in living memory. I'd argue that the Debian Debacle was an order of magnitude (or two) worse, and I'd argue that OpenSSH bugs a few years back were worse. Looking ahead, I thought this comment on the Kaspersky CPU attacks was interesting: CPU Bug Attacks: Are they really necessary?: But every year, at every security conference, there are really interesting presentations and lots of experienced people talking about theoretically serious threats. But this doesn't necessarily mean that an exposed PoC will become a serious threat in the wild. Many of these PoCs require high levels of skill (which most malware authors do not have) to actually make them work in other contexts. And, I feel sorry to say this, but being in the security industry my thoughts are: do malware writers really need to develop highly complex stuff to get millions of PCs infected? The answer is most likely not. I think that insight applies to the current DNS problems. Are those seeking to exploit vulnerable machines so desperate that they need to leverage this new DNS technique (whatever it is)? Probably not. At the end of the day, those of us working in production networks have to make choices about how we prioritize our actions. Evidence-based decision-making is superior to reacting to the latest sensationalist news story. If our monitoring efforts demonstrate the prevalence of one attack vector over another, and our systems are vulnerable, and those systems are very valuable, then we can make decisions about what gets patched or mitigated first. https://taosecurity.blogspot.com/2008/07/vulnerabilities-in-perspective.html Commentary 2008 was a crazy time in security, but so was every other year.
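The simplistic risk equation invoked in the post above can be written out directly. The weights below are invented for illustration only; they are not drawn from any real assessment:

```python
def risk(threat: float, vulnerability: float, asset_value: float) -> float:
    """Simplistic model from the post: Risk = Threat x Vulnerability x Asset value."""
    return threat * vulnerability * asset_value

# A serious vulnerability (large V) on a valuable asset still yields zero
# risk when no threat actor is exploiting it (T = 0) -- the argument made
# about the DNS flaw before any attacks were observed in the wild.
print(risk(threat=0.0, vulnerability=0.5, asset_value=100_000))  # 0.0

# Once monitoring shows active exploitation, T rises and so does R,
# which is what should drive patching and mitigation priority.
print(risk(threat=0.5, vulnerability=0.5, asset_value=100_000))  # 25000.0
```

The point is not the toy numbers but the structure: any factor at zero drives the product to zero, so evidence about T matters as much as the severity of V.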

More Threat Reduction, Not Just Vulnerability Reduction Thursday, August 14, 2008 Recently I attended a briefing where a computer crimes agent from the FBI made the following point: Your job is vulnerability reduction. Our job is threat reduction. In other words, it is beyond the legal or practical capability of most computer crime victims to investigate, prosecute, and incarcerate threats. Therefore, we cannot independently influence the threat portion of the risk equation. We can play with the asset and vulnerability aspects, but that leaves the adversary free to continue attacking until they succeed. Given that, it is disappointing to read State AGs Fail to Adequately Protect Online Consumers. I recommend reading that press release from the Center for American Progress and Center for Democracy and Technology for details. I found this recommendation on p. 25 interesting: Consumers are paying a steep price for online fraud and abuse. They need aggressive law enforcement to punish perpetrators and deter others from committing Internet crime. A number of leading attorneys general have shown they can make a powerful difference. But others must step up as well. To protect consumers and secure the future of the Internet, we recommend that state attorneys general take the following steps... Develop computer forensic capabilities. Purveyors of online fraud and abuse — and the methods they use — are often extremely difficult to detect. Computer forensics are thus needed to trace and catch Internet fraudsters. Attorneys general in Washington and New York invested in computer forensics and, as a result, were able to prosecute successful cases against spyware. Most states, however, have little in the way of

computer forensic capability. Developing this capability may not require substantial new funds. Rather, most important are human and intellectual resources. Even New York’s more intensive adware investigations, for instance, were done with free or low-cost software, which, among other things, captured screenshots, wiped hard drives, and tracked IP addresses and installation information through “packet sniffing” tools. Attorneys general must make investments in human capital so that such software can be harnessed and put to use. When I teach, there are a lot of military people in my classes. The rest come from private companies. I do not see many law enforcement or other legal types. I'm guessing they do not have the funds or the interest? https://taosecurity.blogspot.com/2008/08/more-threat-reduction-not-just.html Commentary “Your [industry] job is vulnerability reduction. Our [law enforcement] job is threat reduction.” Indeed!

Unify Against Threats Tuesday, October 28, 2008 At my keynote at the 2008 SANS Forensics and IR Summit I emphasized the need for a change in thinking among security practitioners. Too often security and IT groups have trouble relating to other stakeholders in an organization because we focus on vulnerabilities. Vulnerabilities are inherently technical, and they mean nothing to others who might also care about security risks, like human resources, physical security, audit staff, legal staff, management, business intelligence, and others. My point is that security people should stop framing our problems in terms of vulnerabilities or exploits when speaking with anyone outside our sphere of influence. Rather, we should talk in terms of threats. This focuses on the who and not the what or how. This requires a different mindset and a different data set. The business should create a strategy for dealing with threats, not with vulnerabilities or exploits. Notice I said "business" and not "security team." Creation of a business-wide strategy should be done as a collaborative effort involving all stakeholders. By keeping the focus on the threats, each stakeholder can develop detective controls and countermeasures as they see fit -- but with a common adversary in mind. HR can focus on better background checks; physical security on guns and guards; audit staff on compliance; legal staff on policies; BI on suspicious competitor activities, and so on. You know you are making progress when management asks "how are we dealing with state-sponsored competitors" instead of "how are we dealing with the latest Microsoft vulnerability?" This doesn't mean you should ignore vulnerabilities. Rather, the common strategy across the organization should focus on threats. When it comes to countermeasures in each team, then you can deal with vulnerabilities and the effect of exploits. Note that focusing on threats requires real all-source security intelligence.

You don't necessarily need to contract with a company like iDefense, one of the few that do the sort of research I suggest you need. This isn't a commercial for iDefense and I don't contract with them, but their topical research reporting is an example of helpful (commercial) information. I would not be surprised, however, to find that a lot of the background you need is already held by the stakeholders in the organization. Unifying against the threats is one way to bring these groups together. https://taosecurity.blogspot.com/2008/10/unify-against-threats.html Commentary Within a few years “threat hunting” and “threat intelligence” became all the rage. That was a welcome development.

Risk Assessment, Physics Envy, and False Precision Wednesday, May 06, 2009 Longtime blog readers might remember a thread from 2007 which ended with Final Question on FAIR, where I was debating the value of numerical outputs from so-called "risk assessments." Last weekend I attended the 2009 Berkshire Hathaway Shareholder meeting courtesy of Gunnar Peterson. He mentioned two terms used by Berkshire's Charlie Munger that now explain the whole numerical risk assessment approach perfectly: Physics Envy, resulting in false precision: In October of 2003 Charlie Munger gave a lecture to the economics students at the University of California at Santa Barbara in which he discussed problems with the way that economics is taught in universities. One of the problems he described was based on what he called "Physics Envy." This, Charlie says, is "the craving for a false precision. The wanting of formula..." The problem, Charley goes on, is, "that it's not going to happen by and large in economics. It's too complex a system. And the craving for that physics-style precision does nothing but get you in terrible trouble..." When you combine Physics Envy with Charley's "man with a hammer syndrome," the result is the tendency for people to overweight things that can be counted. "This is terrible not only in economics, but practically everywhere else, including business; it's really terrible in business -- and that is you've got a complex system and it spews out a lot of wonderful numbers [that] enable you to measure some factors. But there are other factors that are terribly important. There's no precise numbering where you can put to these factors. You know they're important, you don't have

the numbers. Well practically everybody just overweighs the stuff that can be numbered, because it yields to the statistical techniques they're taught in places like this, and doesn't mix in the hard-to-measure stuff that may be more important... As Charley says, this problem not only applies to the field of economics, but is a huge consideration in security analysis. Here it can give rise to the "man with a spread sheet syndrome" which is loosely defined as, "Since I have this really neat spread sheet it must mean something..." To the man with a spread sheet this looks like a mathematical (hard science) problem, but the calculation of future cash flows is more art than it is hard science. It involves a lot of analysis that has nothing to do with numbers. In a great many cases (for me, probably most cases) involves a lot of guessing. It is my opinion that most cash flow spread sheets are a waste of time because most companies do not really have a predictable future cash flow.” You could literally remove any references to financial issues and replace them with risk assessments to have the same exact meaning. What's worse, people who do so-called "risk assessments" are usually not even using real numbers, as would be the case with cash flow analysis! Physics envy and the false precision it produces are two powerful ideas I intend to carry forward. https://taosecurity.blogspot.com/2009/05/risk-assessment-physics-envy-and-false.html Commentary Physics envy is a powerful concept. Whenever you’re seeking numbers to describe a situation, keep it in mind. Numbers are important but do not force quantification, or actions based on numbers, where they do not apply.

Attack Models in the Physical World Thursday, August 13, 2009 A few weeks ago I parked my Ford Explorer (It's not a clunker!!) in a parking garage. On the way out I walked by a pipe. It looked like a pipe for carrying a fluid (water maybe?), "protected" by a metal frame. I think the purpose of the cage is pretty clear. It's deployed to prevent drivers from inadvertently ramming the pipe with their front or rear car bumpers. However, think of all the "attacks" for which it is completely unsuited. Here are the first five I could imagine.
Defacement, like painting obscenities on the pipe
Cutting the pipe with a saw
Melting the pipe with a flame
Cracking the pipe with a hammer
Stealing water by creating a hole and tube to fill a container
So what if any of these attacks were to happen? Detection and response are my first answers. There's likely a camera somewhere that could see me, my car, and the pipe. Cameras or bystanders are likely to record some detail that would cause the intruder to be identified and later apprehended. Other people in the parking garage are likely to tell someone in authority, or better still, take video or a photo of the intruder in action and then provide that to someone in authority. So, we can all laugh at the metal cage around this pipe, but it's probably doing just what it needs to do, given the amount of resources available for "defense" and the detection and response "controls" available. If the defensive posture changed, it would probably not be the result of a security person imagining different attack models against plastic pipes. In other words, it wouldn't be only "decide -> act". Rather, changes would be prompted by observed attacks against real infrastructure. We'd have the full "observe -> orient -> decide -> act" OODA loop. For example, some joker would be seen cutting the pipe using a saw, so patrols and cameras would be enhanced, and possibly wire mesh or plating would be added to the cage to slow down the attacker in time for responders to arrive. https://taosecurity.blogspot.com/2009/08/attack-models-in-physical-world.html Commentary The use of the term “clunker” refers to the so-called “cash for clunkers” program, an Obama-era response to the Great Recession of 2007-2009 that incentivized owners of old cars to sell them to government agents. The new owners would then pour sand into the engines, effectively taking them off the market. At the time I drove a 1996 Ford Explorer. I could not possibly imagine my beloved V8 choking on sand, so I kept driving that beauty for another several years, until late 2010.

Conclusion I hope the posts in this chapter helped make my point that I believe “risk” is useful in some situations, but quantifying it in an environment filled with adaptive, sentient adversaries is not as useful as modeling losses due to automobile collisions. I would also be pleased if you decided that it was important to use correct and precise terms when discussing elements of the risk equation, such as “threat” and “vulnerability.”

Chapter 4. Advice

Introduction In this section I grouped questions and advice for blog readers, students in my classes, and clients with whom I consulted.

CISSP: Any Value? Tuesday, June 21, 2005 A few of you wrote about a story by Thomas Ptacek in response to my recent CISSP exam post. Tom has one of the best minds in the security business, and I value his opinions. Here are my thoughts on the CISSP and an answer to Tom's blog. (I did not realize Tom has despised the CISSP for so long!) On page 406 of my first book I wrote: "I believe the most valuable certification is the Certified Information Systems Security Professional (CISSP). I don't endorse the CISSP certification as a way to measure managerial skills, and in no way does it pretend to reflect technical competence. Rather, the essential but overlooked feature of the CISSP certification is its Code of Ethics... This Code of Ethics distinguishes the CISSP from most other certifications. It moves security professionals who hold CISSP certification closer to attaining the true status of 'professionals.'" In my book I compared the CISSP Code of Ethics to the National Society of Professional Engineers (NSPE) Code of Ethics for Engineers, which I first wrote about two years ago. The second point of the NSPE code is "Perform services only in areas of their competence." This is similar to the following CISSP code excerpt: "Provide diligent and competent service to principals." My book made this comment: "I find the second point especially relevant to security professionals. How often are we called upon to implement technologies or policies with which we are only marginally proficient? While practicing computer security does not yet bear the same burden as building bridges or skyscrapers, network

engineers will soon face responsibilities similar to physical engineers." Given this background, from where does the CISSP's value, if any, derive? I believe the answer lies in the values one wants to measure. First, the CISSP and other "professional" certifications are not designed to convey information about the holder to other practitioners. Rather, certifications are supposed to convey information to less informed parties who wish to hire or trust the holder. The hiring party believes that the certifying party (like ISC2) has taken steps to ensure the certification holder meets the institution's standards. Second, I would argue the CISSP is not, or at least should not, be designed or used to test technical competence. Certifications like the CCNA are purely technical, and I believe they do a good job testing technical competence. The CCNA has no code of ethics. I severely doubt the ability of anyone without hands-on Cisco experience to cram for the CCNA and pass. Even many of those who attend a boot camp with little or no previous hands-on experience usually fail. Third, there is nothing wrong with stating what would seem obvious. Tom reduces his argument against the CISSP Code of Ethics to the title of his blog entry: "Don't Be Evil." I agree, and I do not see the problem with expanding on that idea as the CISSP's Code of Ethics does. So, what is wrong with the CISSP? I previously posted thoughts on credible certifications as described by Peter Stephenson and Peter Denning. Here are Stephenson's criteria, with my assessment of the CISSP. Keep in mind I think the CISSP should be a certification reflecting security principles, not technical details.
It is based upon an accepted common body of knowledge that is well understood, published and consistent with the objectives of the community applying it. No. The CISSP CBK looks barely acceptable on the surface, but in practice it fails miserably to reflect issues security professionals actually handle.
It requires ongoing training and updating on new developments in the field. Partially. The CISSP CPE requirements ensure holders need to receive training prior to renewal, but I am not sure this equals exposure to new developments. If you attend Tom's Black Hat talk, you get 16 Continuing Professional Education (CPE) credits! :)
There is an examination (the exception is grandfathering, where extensive experience may be substituted). Yes.
Experience is required. Yes. Experience is required for the CISSP, mainly in response to this 2002 story of a 17-year-old receiving his CISSP.
Grandfathering is limited to a brief period at the time of the founding of the certification. I am not sure why this matters, other than Stephenson needed to justify his involvement in the CIFI forensics certification.
It is recognised in the applicable field. Well, the CISSP is certainly recognized. Unfortunately it is often mis-recognized as a technical cert, when it should be strictly a symbol of adherence to professional conduct.
It is provided by an organization or association operating in the interests of the community, usually non-profit, not a training company open to independent peer review. Partially. I began to worry when I saw ISC2 offer $2500 review seminars, and now they have the Official (ISC)2 Guide to the CISSP Exam, pictured above. I am not convinced this element matters that much anyway, as I think Cisco's certification program is excellent.
I think the root of the problem is the concept that the CISSP somehow measures technical competence. The CISSP in no way measures technical skills. Rather, it should measure knowledge of security principles. It does not meet that goal, either. At this point we are left with a certification that only provides a code of ethics. That brings us back to my original point. From a practical point of view, I obtained my CISSP four years ago to help pass corporate human resource departments who screen resumes. Back then I had two choices when looking for employment. I could either work through a friend who knew my skills, or I could submit a resume to a company with an HR department.
Rather than rely completely on the former, I decided to keep the latter as an option. Getting through HR departments usually required a CISSP certification.

Does this mean I will renew my CISSP when it expires? I am not sure. If I see improvements in the certification, such that it reflects security principles, I may. If it continues to fail in that respect, I probably will not. What are your plans? Do you pursue the CISSP? Why or why not? https://taosecurity.blogspot.com/2005/06/cissp-any-value-few-of-you-wrote-me.html Commentary I wrote this in another post, but it merits repeating here as modern commentary: Reviewing the [CISSP ethics] "code," as it appears now, shows the following: "There are only four mandatory canons in the Code. By necessity, such high-level guidance is not intended to be a substitute for the ethical judgment of the professional.
Code of Ethics Preamble: The safety and welfare of society and the common good, duty to our principals, and to each other, requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior. Therefore, strict adherence to this Code is a condition of certification.
Code of Ethics Canons:
Protect society, the common good, necessary public trust and confidence, and the infrastructure.
Act honorably, honestly, justly, responsibly, and legally.
Provide diligent and competent service to principals.
Advance and protect the profession."

This is almost worthless. The only actionable item in the "code" is the word "legally," implying that if a CISSP holder was convicted of a crime, he or she could lose their certification. Everything else is subject to interpretation. Contrast that with the USAFA Code of Conduct: "We will not lie, steal, or cheat, nor tolerate among us anyone who does." While it still requires an Honor Board to determine if a cadet has lied, stolen, cheated, or tolerated, there's much less gray in this statement of the Academy's ethics. Is it perfect? No. Is it more actionable than the CISSP's version? Absolutely.

My Criteria for Good Technical Books Thursday, July 07, 2005 I was recently asked if I would review an upcoming book. In my reply, I listed four criteria I use when making my review evaluations.
Accuracy. If a book contains several large or numerous small technical errors, I will lower my rating. I may stop reading entirely if I lose confidence in the author's capacity to deliver reliable information. This is a problem if I am reading a book outside my core expertise.
Originality. I really dislike reading books that cover material already published elsewhere. I do not mind some repetition if the result makes sense, but in most cases authors should just start covering new material. For example, I would prefer a new book on network attack and defense to avoid explaining TCP/IP. Authors: if a book explaining your introductory material already exists, cite that title and present your new material in your book. Brian Carrier's book is a great example of how to make me happy. He doesn't bother explaining security; he sets up the reader with citations and then starts explaining file systems. Awesome.
Candor. I cannot stand books that claim to cover one topic and then completely fail to do so. I must name names here to make my point: Scene of the Cybercrime: Computer Forensics Handbook spends over 540 pages on generic security issues before finishing with two chapters on what can only loosely be called forensics. Check the Table of Contents to see what I mean. That book pales in comparison with Incident Response, 2nd Ed.
Implementation details. I like to hear good security theory and techniques. However, if the author doesn't tell me how to implement this advice, I question why he or she bothered to mention it. I do not demand examples of every scenario. For example, I become suspicious when I read a chapter titled "securing servers," but never see a single invocation of command line syntax.
Some reviewers of my latest book want me to address networking configuration outside of Cisco-land. I don't have the time, expertise, or equipment to cover Juniper, Foundry, and so on, but my Cisco examples should make the point clear. What makes you like a technical book? My favorite ten books of the past ten years are listed at Bookpool, and those ten meet my criteria. https://taosecurity.blogspot.com/2005/07/my-criteria-for-good-technical-books-i.html Commentary This is a good reminder to me to set guidelines for any books I review, regardless of the genre.

What the CISSP Should Be Saturday, August 27, 2005 Today I saw a new comment on my criticism of the ISC2's attempt to survey members on "key input into the content of the CISSP® examination." Several of you have asked what I would recommend the Certified Information Systems Security Professional (CISSP) exam should cover. I have a very simple answer: NIST SP 800-27, Rev. A. This document, titled Engineering Principles for Information Technology Security (A Baseline for Achieving Security), is almost exactly what a so-called "security professional" should know. The document presents 33 "IT Security Principles," divided into 6 categories. These principles represent sound security theories. For future reference and to facilitate discussion, here are those 33 principles.
Security Foundation
Principle 1. Establish a sound security policy as the “foundation” for design
Principle 2. Treat security as an integral part of the overall system design.
Principle 3. Clearly delineate the physical and logical security boundaries governed by associated security policies.
Principle 4. Ensure that developers are trained in how to develop secure software.
Risk Based
Principle 5. Reduce risk to an acceptable level. [Note: It does not say "eliminate risk;" smart.]
Principle 6. Assume that external systems are insecure. ["External" here means systems not under your control.]
Principle 7. Identify potential trade-offs between reducing risk and increased costs and decrease in other aspects of operational effectiveness. [The wording is poor. The idea is to identify situations where information owners decide to accept risks in order to satisfy other operational requirements.]
Principle 8. Implement tailored system security measures to meet organizational security goals.
Principle 9. Protect information while being processed, in transit, and in storage.
Principle 10. Consider custom products to achieve adequate security.
Principle 11. Protect against all likely classes of "attacks."
Ease of Use
Principle 12. Where possible, base security on open standards for portability and interoperability.
Principle 13. Use common language in developing security requirements. [In other words, definitions matter.]
Principle 14. Design security to allow for regular adoption of new technology, including a secure and logical technology upgrade process.
Principle 15. Strive for operational ease of use.
Increase Resilience
Principle 16. Implement layered security (Ensure no single point of vulnerability).
Principle 17. Design and operate an IT system to limit damage and to be resilient in response.

Principle 18. Provide assurance that the system is, and continues to be, resilient in the face of expected threats. Principle 19. Limit or contain vulnerabilities. Principle 20. Isolate public access systems from mission critical resources (e.g., data, processes, etc.). Principle 21. Use boundary mechanisms to separate computing systems and network infrastructures. Principle 22. Design and implement audit mechanisms to detect unauthorized use and to support incident investigations. [In other words, from the network side, this means network security monitoring.] Principle 23. Develop and exercise contingency or disaster recovery procedures to ensure appropriate availability. Reduce Vulnerabilities Principle 24. Strive for simplicity. Principle 25. Minimize the system elements to be trusted. Principle 26. Implement least privilege. [Note: The text also recommends "separation of duties." Principle 27. Do not implement unnecessary security mechanisms. Principle 28. Ensure proper security in the shutdown or disposal of a system. Principle 29. vulnerabilities.

Identify

and

prevent

common

errors

and

Design with Network in Mind Principle 30. Implement security through a combination of measures

distributed physically and logically. Principle 31. Formulate security measures to address multiple overlapping information domains. Principle 32. Authenticate users and processes to ensure appropriate access control decisions both within and across domains. Principle 33. Use unique identities to ensure accountability. Given these principles, the next step is to devise practices or techniques for each. For example, Principle 26 states "Implement least privilege." Practices or techniques include (but are not limited to) the following, which represent my own thoughts; NIST does not reach to this level: Create groups which provide functions needed to meet an operational requirement. Operate mechanisms which allow temporary privilege escalation to accomplish specific tasks. Assign systems administrators the primary task of administering systems. Assign security operators the primary task of auditing system use. I recommend the exam not delve deeper into specific implementations or tools. One could imagine what those would be, however. Here are examples from FreeBSD; again, these are my thoughts: Use the group functionality and assign privileges as required. (Windows might provide a better example, given the number of groups installed by default and their variety of privileges.) Use sudo to execute commands as another (presumably more powerful) user. Configure system logging through syslog and export logs to one or

more remote, secure logging hosts under the control and review of the security team. Consider enabling process accounting via acct. Also consider implementing Mandatory Access Controls. I do not think an exam like the CISSP should delve as deep as implementations or tools. Staying at the levels of theory/principle and techniques/practices is vendor-neutral, more manageable, and less likely to become obsolete as technologies change. While I may not be happy with all of NIST's principles, they are much more representative of what the CISSP should address. As a bonus, this NIST publication already exists, and the sorts of people who haggle over principles like these tend to gravitate toward documentation from .gov institutions. Furthermore, one of the better CISSP exam prep guides references the older version of SP 800-27: The CISSP Prep Guide: Mastering the CISSP and ISSEP Exams, 2nd Edition, by Ronald L. Krutz and Russell Dean Vines. In fact, the exact chapter mentioning 800-27 principles (albeit the 2001 versions) is online. A Google search of cissp 800-27 only yields 48 hits, meaning not too many people are making the link. Krutz and Vines have, which is a great start. What do you think? https://taosecurity.blogspot.com/2005/08/what-cissp-should-be-today-isaw-new.html Commentary Everyone wants to invent their own guidelines. Only rarely (e.g., MITRE ATT&CK) does it work out for the better!

Answering Penetration Testing Questions Thursday, June 08, 2006

Some of you have written regarding my post on penetration testing. One of you sent the following questions, which I thought I should answer here. Please note that penetration testing is not currently a TaoSecurity service offering, so I'm not trying to be controversial in order to attract business.

What do you feel is the most efficient way to determine the scope of a pen test that is appropriate for a given enterprise?

Prior to hiring any pen testers, an enterprise should conduct an asset assessment to identify, classify, and prioritize their information resources. The NSA-IAM includes this process. I would then task the pen testers with gaining access to the most sensitive information, as determined by the asset assessment. Per my previous goal (Time for a pen testing team of [low/high] skill with [internal/external] access to obtain unauthorized [unstealthy/stealthy] access to a specified asset using [public/custom] tools and [complete/zero] target knowledge.), one must decide the other variables before hiring a pen testing team.

What do you feel is the most efficient way to determine which pen tester(s) to use?

First, you must trust the team. You must have confidence (and legal assurances) they will follow the rules you set for them, properly handle sensitive information they collect, and not use information they collect for non-professional purposes. Second, you must select a team that can meet the objectives you set. They should have the knowledge and tools necessary to mirror the threat you expect to face. I will write more on this later. Third, I would rely on referrals and check all references a team provides.

Do you feel there is any significant value in having multiple third parties perform a pen test?

This issue reminds me of the rules requiring changing of financial auditors on a periodic basis. I believe it is a good idea to conduct annual pen tests, with one team in year one and a second team in year two. At the very least you will have two experiences to draw upon when deciding who should return for year three.

Have you had any significant positive/negative experiences with specific pen testers?

I once monitored a client who hired a "pen tester" to assess the client's network. One weekend while monitoring this client, I saw someone using a cable modem run Nmap against my client. The next Monday my client wanted to know why I hadn't reported seeing the "pen test". I told my client I didn't consider an Nmap scan to be a "pen test". I soon learned the client had paid something like $5000 for that scan. Buyer beware!

Do you have any additional recommendations as to how to choose a pen tester?

Just today I came across what looks like the industry's "first objective technical grading system for hackers and penetration testers" -- at least according to SensePost. This is really exciting, I think. They describe their Combat Grading system this way: Participants are tasked to capture the flag in a series of exercises carefully designed to test the depth and the breadth of their skill in various diverse aspects of computer hacking. Around 15 exercises are completed over the course of two days, after which each participant is awarded a grade reflecting their scores and relative skill levels in each of the areas tested. Each exercise is completely technical in nature. This sounds very promising.

Do you have any literature that you can recommend in regard to pen testing?

I have a few books nearby, namely Penetration Testing and Network Defense (not read yet) and Hack I.T. (liked it, but 4 years old). The main Hacking Exposed series discusses vulnerability assessment, which gets you halfway through a pen test. If I had the time and money I would consider attending SensePost training, which looks very well organized and stratified. The classes are being offered at Black Hat Training, which as usual seems very expensive. Good, but expensive.

https://taosecurity.blogspot.com/2006/06/answering-penetration-testing.html

Commentary This advice is still fairly useful today, although I am not sure if SensePost still offers training.
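The bracketed goal statement quoted in the first answer enumerates the variables to pin down before hiring a team. As a rough illustration (the class and field names below are my own, not anything from the post), those choices can be captured in a small structure so a scoping discussion covers every variable explicitly:

```python
from dataclasses import dataclass

# Each field mirrors one bracketed choice in the goal statement:
# "Time for a pen testing team of [low/high] skill with [internal/external]
# access to obtain unauthorized [unstealthy/stealthy] access to a specified
# asset using [public/custom] tools and [complete/zero] target knowledge."
@dataclass(frozen=True)
class PenTestScope:
    skill: str         # "low" or "high"
    access: str        # "internal" or "external"
    stealth: str       # "unstealthy" or "stealthy"
    tools: str         # "public" or "custom"
    knowledge: str     # "complete" or "zero"
    target_asset: str  # from the asset assessment, most sensitive first

    def describe(self) -> str:
        """Render the scope as a goal statement in the post's template."""
        return (f"Time for a {self.skill}-skill team with {self.access} access "
                f"to obtain unauthorized {self.stealth} access to "
                f"{self.target_asset} using {self.tools} tools and "
                f"{self.knowledge} target knowledge.")

# Example: an external, stealthy engagement against a hypothetical asset.
scope = PenTestScope("high", "external", "stealthy", "public", "zero",
                     "the customer database")
print(scope.describe())
```

A checklist like this mirrors the advice above: decide each variable deliberately, rather than discovering mid-engagement that the team and the client assumed different rules.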

No Shortcuts to Security Knowledge Tuesday, November 21, 2006

Today I received a curious email. At first I thought it was spam, since the subject line was "RE: Help!", and I don't send emails with that subject line. Here is an excerpt:

I cannot afford nor have the time to take a full collage [sic] course on the topic of network security but I would like to be as knowlageable [sic] about it as yourself and be able to protect my computer and others regarding this matter. If I was willing to pay you would you take the time to teach me what you know and/or point me in the direction I would need to learn what you know about network security? Please advise what course I would need to take to accomplish your skill of network security?

In my opinion, it seems like this question seeks to learn some sort of "hidden truth" that I might possess, and acquire it in record time. The reality is that there are really no shortcuts to learning as complex a topic as digital security. I have been professionally involved with this topic for almost ten years, yet I consider myself halfway to the level of skill and proficiency I would prefer to possess. In another ten years I'll probably still be halfway there, since the threats and vulnerabilities and assets will have continued to evolve!

If you want to "know what I know," a good place to start is by reading one or more of my books. I recommend starting with Tao, then continuing with Extrusion and finishing with Forensics. Chapter 13 from Tao explicitly addresses the issue of security analyst training and development. My company research page lists over a dozen documents I've written, and this blog is a record of almost four years of thoughts on digital security.

For books outside of my own, my top ten books of the last ten years contain some of the best books on digital security. My reading page shows books I recommend in five categories. I also show the books waiting to be read on my shelf, but I wouldn't consider an appearance there to be an endorsement unless I offer a favorable Amazon.com review. Please note my recommended lists do not include books from 2006 (and maybe 2005), but I plan to write a "best of" list at the end of this year. I'll update the recommendations lists if I have time.

In addition to reading, I highly recommend becoming familiar with the majority of the security tools listed by Fyodor. It also helps to specialize (at least in the beginning) in one of the five categories I show on my reading page. I tend to split my time between Weapons and Tactics and Telecommunications, although I plan to continue developing my Scripting and Programming skills. I do some System Administration by building and operating network sensors and supporting systems (like databases), but I am not the sort of sys admin who supports users. I try to stay out of devoted Management and Policy work, although I try not to be ignorant of it.

I could probably say a lot more on this topic, but the bottom line is that there are no shortcuts to security knowledge. I hope this free post has been helpful.

https://taosecurity.blogspot.com/2006/11/no-shortcuts-to-security-knowledge.html

Commentary There are still no shortcuts, but you can apply a concept popularized by Professor Jigoro Kano (1860-1938), founder of judo, called seiryoku zenyo (精力善用), which his institution the Kodokan describes as "maximum efficient use of energy...to fully utilise one's spiritual and physical energies to realise an intended purpose." I like the simple translation of the characters as "good use of energy" as cited here: https://tomikiaikido.blogspot.com/2010/03/seiryoku-zenyo-in-judo.html

You can practice seiryoku zenyo in security by learning sound methods from reliable sources, and then applying them to your own environment. In the modern age, everyone can be a system administrator -- even if you only own a cell phone!

Starting Out in Digital Security Wednesday, December 27, 2006

Today I received an email which said in part:

I'm brand new to the IT Security world, and I figure you'd be a great person to get career advice from. I'm 30 and in the process of making a career change from executive recruiting to IT Security. I'm enrolled in DeVry's CIS program, and my emphasis will be in either Computer Forensics or Information Systems Security. My question is, knowing that even entry-level IT jobs require some kind of IT experience, how does someone such as myself, who has no prior experience, break into this exciting industry? My plan is to earn some of the basic certifications by the time I graduate (A+, Network+, Security+). What else should I be doing? What introductory books and resources can you recommend?

I thought I'd discussed this sort of question before, but all I found was my post on No Shortcuts to Security Knowledge and Thoughts on Military Service. I believe I cover this topic in chapter 13 of Tao. To those who are also interested in this question, I recommend reading both of those posts first and then returning to this post. I'll do my best to provide some additional useful advice here. Here are seven ways you can make yourself more attractive to security-minded employers.

Represent yourself authentically. It's tough when starting out to recognize the size of the digital security world. It's taken me nearly ten years to grasp the scope of the field. You'll be successful if you can clearly identify just what you (think) you know, and what you definitely do not. You will not do anyone favors if you claim to be even somewhat proficient in all or nearly all aspects of digital security. It's extremely important to want to work in security for love of the field, and not the potential paycheck.

Stop using Microsoft Windows as your primary desktop. This is not an anti-Microsoft rant. The reality is the vast majority of the world uses Windows. When you stop using Windows, you move yourself into a smaller group that needs to think and troubleshoot. Some see this as a problem, while others see it as a learning opportunity. If you are completely new, start with one of the easy Linux distros. As you feel adventurous try one of the BSDs. (Mac OS X doesn't really count as a non-Windows platform for the purposes of this point.) This does not mean you will never use Windows again. I dual-boot Windows and FreeBSD on my laptop.

Attend meetings of local security groups. Ideally you would have a group like NoVA Sec nearby, but you're more likely to have an ISSA chapter in your city. In either case, attend some meetings. Get immersed in the discussions that occur in those settings. Ask questions.

Read books and subscribe to free magazines. You should start with the books on my Listmania Lists. Subscribe to Information Security, SC Magazine, NWC, and Cisco's IP Journal. I wouldn't bother with 2600. It costs money and more often than not you'll read about "hacking" point of sale terminals and the like.

Create a home lab. No real security "pro" has only a single laptop/desktop connected to a DSL/cable modem. Most every security person I know maintains some sort of lab. If you are resource-constrained, install VMware Server and build a small virtual lab. Experiment with as many operating systems as you can.

Familiarize yourself with open source security tools. Fyodor's Sectools.org is a good starting point. As you meet people and read, you'll learn of new techniques and tools to try.

Practice security wherever you are, and leverage that experience. So many people are in security positions but do not recognize it. If you are a network administrator, you have security potential and responsibilities. If you are a system administrator, you have a platform to secure. If you are a developer, you should practice secure coding. If you set up a home lab, you need to operate it securely. It is both a blessing and a curse that anyone with a computing device is an administrator and a security practitioner. Whatever your background, consider how it might apply to security. For example, former software developers might become involved in application testing and/or source code review, instead of securing carrier networks.

Once you follow this advice, where can you work? A search for jobs with "network security" at Monster.com or similar job sites reveals plenty of opportunities. If you are just starting out, I recommend getting a job where you are a cog in the machine and not the whole machine. In other words, you are probably setting yourself up for failure if you land a job as an organization's sole security person -- and you are brand new. You won't know where to start and you'll have no one on site to mentor you. It's best to pick a niche first, know that niche well, and then branch out as time passes. It also pays to know where you (want to) fit in the security community.

I appreciate anyone else's advice for this question-asker.

https://taosecurity.blogspot.com/2006/12/starting-out-in-digital-security.html

Commentary This is still good advice, but with the rise of cloud computing, you don't need much of a physical lab at home. The more comfortable you become with cloud resources, or at the very least virtualization and containers, the better.

Reading Tips Monday, January 01, 2007

Happy New Year to everyone. I've received some feedback on my 1720th post, Favorite Books, mainly questions about my ability to read so many books in one year. I have no secret knowledge or techniques, but I would like to share what works for me.

First, I think it's important to recognize my situation. Some of you will have more time available, and others will have less. I am married with two small children. I run my own company (TaoSecurity). I do not have a daily commute although I do travel out-of-state several times each month. I do not watch much TV, and the TV I do watch is recorded on my TiVo.

Second, the advice I give assumes you want to make the most of your reading time. You want to read as many books as possible while retaining as much as possible. You don't want to use any gimmicks like speed reading, etc. (I do not use any of those "techniques." I don't think tricks like reading down the center of a page work very well for tech books, especially.)

Make a plan. Set some goals. Do you want to read one book per week, per month, per year? If you decide to just "read" you'll be less efficient. I try to read an average of one book on my reading list per week.

Read good books that interest you. One of the emails I received said "I find it difficult to read through a lot of books (especially on security due to dryness/boring) and wish there was a way I can fight through it more easily." There is no way to quickly read through boring books. If you run into a book in your reading stack that bores you, move it aside, fast. I fell into that trap a few times last year. You'll see huge gaps in my reviews where I got stuck looking at a boring book. I was so unmotivated I stopped reading rather than push the book aside.

Read at least a few pages every day. Even if you only read two pages per day, you'll read two average-size books per year. I sometimes fall into the trap of only wanting to read in "big chunks," where I won't read if I don't have a free 30 minutes or so. Too many days of waiting for big chunks of free time turn into a week, then a month, and then you've read nothing all year! Additionally, you may find it helpful to "surge" every once in a while. Sometimes I will read several books in a row over the course of a few days. Be careful with this approach -- it's easy to burn out fast and not want to start reading again.

Make time to read. You'll have more success if you think about the time of day you hope to read. Sometimes I wake up much earlier than my family and read. Other times I stay up late after they are asleep. Since I work for myself, sometimes I use part of my work day to read. If you are a security or technology professional, reading should be part of your work day. I have no idea how management can expect tech operators to stay current and effective without expanding our knowledge. Every company should have a budget for a tech library for its IT staff and recognition that spending some portion of the work day reading (30 minutes would be good) is a cost-effective way to build a forward-thinking tech force. Managers who discourage reading are idiots.

Read interactively. When I read a tech book, I use a template like the one pictured at right. It's basically a ruler, but I've had it since I studied architecture in high school. (That's correct -- back then we were just starting to use Apple computers for CAD, so most of the time we drew everything by hand!) When I read something interesting, I underline it. I haven't used highlighters since college; I think they are messy, they often fade, and they don't reproduce well if you want to photocopy or scan a page. I make notes in the margins. I draw small triangles next to the most important points, and triangles with check marks inside for especially significant ideas. When I finish a book I thumb through it and look at my triangles to refresh my memory. When possible I also read near my laptop so I can visit URLs mentioned in the book. I also take notes on a separate pad that I use to produce my book reviews.

If you have any thoughts, please share them as comments.

https://taosecurity.blogspot.com/2007/01/reading-tips.html
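The two-pages-a-day arithmetic above works out as follows, assuming a roughly 365-page "average size" book (my assumption, implied by the post's math, not a figure the post states):

```python
pages_per_day = 2
days_per_year = 365
avg_book_pages = 365  # assumed length of an "average size" tech book

pages_per_year = pages_per_day * days_per_year   # 730 pages
books_per_year = pages_per_year / avg_book_pages
print(books_per_year)  # 2.0
```

Even a small daily habit compounds; doubling the daily pages doubles the yearly total.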

Commentary Reading is eternal, of course, throughout the history of writing. I might amend my list a bit due to the prevalence of electronic publications. Even then, I am constantly highlighting on my old Kindle. I don't add notes, though; the UI is too cumbersome.

Security in the Real World Monday, January 08, 2007

I received the following from a student in one of my classes. He is asking for help dealing with security issues. He is trying to perform what he calls an "IDS/IPS policy review," which is a tuning exercise. I will apply some comments inline and some final thoughts at the end.

If you recall, I was in one of your NSO classes last year. At the end of the day the only place I am able to use everything I learned is at home.

This is an example of a security person knowing what should be done but unable to execute in the real world. This is a lesson for all prevention fanboys. You know the type -- they think 100% prevention is possible. In the real world, "business realities" usually interfere.

As you are aware with other corporate environments, our company goes by Gartner rating on products and ends up buying a technology where you don't get any kind of data, but just an alert name. So that is a pain within itself.

Here we see a security analyst who has been exposed to my Network Security Monitoring ideas (alert, full content, session, and statistical data), but is now stuck in an alert-centric world. I have turned down jobs that asked me to leave my NSM sources behind to supervise alert ticketing systems. No thanks!

There is this issue that I have been running into and thought maybe you can help me, if you are free. I work for this pretty large organization with 42000 users in 15 states... in a dynamic ever-changing environment, what is the best route for policy review? We have two different IDS technologies across the company. The IDS/IPS policy review is really for turning off the signatures that we don't need to know about and cutting down on alerts, so that alert monitoring becomes easier. Since we use ArcSight for correlation, its [sic] easier to look for our interesting traffic.

Wait a minute -- I thought ArcSight and other SIM/SEM/SIEM/voodoo was supposed to solve this problem? Why disable anything if ArcSight is supposed to deal with it?

Right now, we are dealing with just a very high volume of alerts, there is no way we are going to be able to catch anything. In other small environments, I have been able to easily determine what servers we have/havent and turn on only those that are needed. For example, if we are not running frontpage, we can turn off all frontpage alerts. In our environment, it will be difficult to determine that and often times [sic], we have no idea what changes have taken place.

This is an example of a separation between the security team and the infrastructure team. This poor analyst is trying to defend something his group doesn't understand.

Therefore, our policy review needs to cover all the missing pieces as well. By that, I mean we have to take into consideration the lack of cooperation across the board from other teams, when we disable or enable alerts.

Here we see the effects of lack of cooperation between security and infrastructure groups. When I show people how I build my own systems to collect NSM data, some say "Why bother? Just check NetFlow from your routers, or the firewall logs, or the router logs, etc..." In many places the infrastructure team won't let the security team configure or access that data.

(1) Going by the firewall rule - Feedback from my team is that we wont [sic] know about the firewall rule changes, if any change were to occur, hence we can't do it. Another is they trust the IDS technology too much and they say "well you are recommending turning off a high level alert" "There is a reason why vendor rates [sic] it high".

It seems the infrastructure team trusts the security vendor more than their own security group.

(2) Going by applications that are running on an environment - This has proven to be even more difficult, since there is no update what has been installed and not.

Again, you can't monitor or even secure what you don't understand.

(3) Third approach for external IPS - Choose those that aren't triggered in the past three months, review those and put them in block mode - In case something were to be open by the firewall team, they will identify it being blocked by something, it will be brought to our attention then we can unblock it after a discussion.

This is an interesting idea. Run the IDS in monitoring mode. Anything that doesn't trigger gets set to block mode when the IDS becomes an IPS! If anyone complains, then react.

None of this has been approved thus far. as you know I used to work for [a mutual friend] and we had small customers like small banks and so forth. With them the policy review was much easier.

Smaller sites are less complex, and therefore more understandable.

Let's pick one example. I recommended at one point that we can disable all mysql activity on our external IDSs, since they are blocked by the firewall anyway, so that we dont [sic] have to see thousands and thousands of scans on our network on port 1434 all the time. Even that didn't get approved. The feedback for that was IDS blocking the alerts can take some load off the firewall.

So, this is a complicated topic. I appreciate my former student permitting me to post this anonymously. Here's what I recommend.

Perform your own asset inventory. You first have to figure out what you're defending. There are different ways to do this. You could analyze a week's worth of session data to see what's active. You could look for the results of intruder scans or other recon to find live hosts and services. You could conduct your own scan. You could run something like PADS for a week. In the end, create a database or spreadsheet showing all your assets, their services, applications, and OS if possible.

Determine asset security policy and baselines. What should the assets you've inventoried be doing? Are they servers only accepting requests from clients? Do the assets act as clients and servers? Only clients? What is the appropriate usage for these systems based on policy?

Create policy-based detection mechanisms. Once you know what you're protecting and how they behave, devise ways to detect deviations from these norms. Maybe the best ways involve some of your existing IDS/IPS mechanisms -- maybe not.

Tune stock IDS/IPS alerts. I described how I tune Snort for Sys Admin magazine. I often deploy a nearly full rule set for a short time (one or two days) and then pare down the obviously unhelpful alerts. Different strategies apply. You can completely disable alerts. You can threshold alerts to reduce their frequency. You can (with systems like Sguil) let alerts fire but send them only to the database. I use all three methods.

Exercise your options. Once you have a system you think is appropriate, get a third party to test your setup. Let them deploy a dummy target and see if you detect them abusing it. Try client-side and server-side exploitation scenarios. Did you prevent, detect, and/or respond to the attacks? Do you have the data you need to make decisions? Tweak your approach and consider augmenting the data you collect with third party tools if necessary.

I hope this is helpful. Do you have any suggestions?

https://taosecurity.blogspot.com/2007/01/security-in-real-world.html

Commentary This advice is still good, but I would take a different approach concerning alert tuning if you operate in a more mature security team. Rather than loading hundreds or thousands of alerts provided by a rule vendor, start with zero commodity alerts. Pick 10 or so conditions that you would not want to see occur in your environment, and write custom rules for those events.
Test them in your environment, and tune them until they perform as you wish. If you can handle the load from those alerts, pick 10 more conditions of concern. Rinse and repeat.
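The "perform your own asset inventory" step above suggests mining a week of session data for live hosts and services. A minimal sketch of that idea follows; the flow-record format, sample values, and internal address range are my own assumptions for illustration, since the post does not specify a data source format:

```python
from collections import defaultdict
from ipaddress import ip_address, ip_network

# Hypothetical flow records: (src_ip, dst_ip, dst_port, protocol).
# In practice these would come from your session data collection.
flows = [
    ("10.1.1.5", "10.1.2.10", 443, "tcp"),
    ("10.1.1.7", "10.1.2.10", 443, "tcp"),
    ("10.1.1.5", "10.1.2.20", 3306, "tcp"),
    ("8.8.8.8",  "10.1.2.30", 25,  "tcp"),
]

LOCAL = ip_network("10.0.0.0/8")  # assumed internal address range

def build_inventory(flows):
    """Map each local destination IP to the set of (port, proto) services seen."""
    inventory = defaultdict(set)
    for src, dst, dport, proto in flows:
        if ip_address(dst) in LOCAL:
            inventory[dst].add((dport, proto))
    return dict(inventory)

inventory = build_inventory(flows)
for host in sorted(inventory):
    services = ", ".join(f"{p}/{pr}" for p, pr in sorted(inventory[host]))
    print(f"{host}: {services}")
```

The resulting table is the seed for the spreadsheet or database the post recommends; the baseline and custom-rule steps then ask whether each observed service should be there at all.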

What Should the Feds Do Sunday, April 22, 2007 Recently I discussed Federal digital security in Initial Thoughts on Digital Security Hearing. Some might think it's easy for me to critique the Feds but difficult to propose solutions. I thought I would try offering a few ideas, should I be called to testify on proposed remedies. For a long-term approach, I recommend the steps offered in Security Operations Fundamentals. Those are operational steps to be implemented on a site-by-site basis, and completing all of them across the Federal government would probably take a decade. In the short term (over the next 12 months) I recommend the following. These ideas are based on the plan the Air Force implemented over fifteen years ago, partially documented in Network Security Monitoring History along with more recent initiatives. Identify all Federal networks and points of connectivity to the Internet. This step should already be underway, along with the next one, as part of OMB IPv6 initiative. The Feds must recognize the scope and nature of the network they want to protect. This process must not be static. It must be dynamic and ongoing. Something like Lumeta should always be measuring the nature of the Federal network. Identify all Federal computing resources. If you weren't laughing with step 1, you're probably laughing now. However, how can anyone pretend to protect Federal information if the systems that process that data are unknown? This step should also be underway as part of the IPv6 work. Like network discovery, device discovery must be dynamic and automated. At the very least passive discovery systems should be continuously taking inventory of Federal systems. To the extent active discovery can be permitted, those means should also be implemented. Please realize steps 1 and 2 are not the same as FISMA, which is static and only repeated every three years for known systems.

3. Project friendly forces. You can tell these steps are becoming progressively more difficult and intrusive into agency operations. With this step, I recommend that third party government agents, perhaps operated by OMB for unclassified networks and a combination of DoD and ODNI for classified networks, "patrol" friendly networks. Perhaps they operate independent systems on various Federal networks, conducting random reconnaissance and audit activities to discover malicious parties. The idea is to get someone else besides intruders and their victims into the fight at these sites, so an independent, neutral third party can begin to assess the state of enterprise security. The Air Force calls this friendly force projection; it is a common term, but they are performing it now on AF networks. This step is important because it will unearth intrusions that agencies can't find or don't want to reveal. It is imperative that end users, administrators, and managers be separated from the decision on reporting incidents. Right now incident reporting resembles status reports in the Soviet Union: "Everything is fine, production is exceeding quotas, nothing to see here." The illusion is only shattered by whistleblowers, lawsuits, or reporters. Independent, ground-truth reporting will come from this step and from centralized monitoring (below).

4. Build a Federal Incident Response Team. FIRT is a lousy name for this group, but there should be a pool of supreme technical skill available to all Federal enterprises. Each agency should also have an IRT, but they should be able to call upon FIRT for advice, information sharing, and surge support.

5. Implement centralized monitoring at all agencies. All agencies should have a single centralized monitoring unit. Agents from step three should work with these network security monitoring services to improve situational awareness. Smaller agencies should pool resources as necessary. 
All network connectivity points identified in step 1 should be monitored.

6. Create the National Digital Security Board. As I wrote previously: The NDSB should investigate intrusions disclosed by companies as a result of existing legislation. Like the NTSB, the NDSB would probably need legislation to authorize these investigations.

The NDSB should also investigate intrusions found by friendly force projection and centralized monitoring. None of these steps are easy. However, there appears to be support for some of them. This is essentially the formula the Air Force adopted in 1992, with some of the steps (like friendly force projection) being adopted only recently. I appreciate any comments on these ideas. Please keep in mind these are 30 minutes worth of thoughts written while waiting for a plane. Also -- if you read this blog at taosecurity.blogspot.com, you'll see a new theme. Blogger "upgraded" me last night, removing my old theme and customizations. I think most people use RSS anyway, so the change has no impact. I like the availability of archives on the right side now. https://taosecurity.blogspot.com/2007/04/what-should-feds-do.html Commentary I wrote this post in 2007, and I have been happy to see the US federal government slowly adopt most of these suggestions over the years.
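The dynamic, passive device inventory called for in step 2 can be sketched simply. In the sketch below the observation feed is hypothetical; in practice it might come from a passive sniffer such as an ARP or DHCP listener, which is my assumption rather than anything the post specifies:

```python
from datetime import datetime, timezone

def update_inventory(inventory, ip, mac, seen_at):
    """Record first-seen and last-seen times for every host observed on the wire."""
    entry = inventory.setdefault(ip, {"mac": mac, "first_seen": seen_at})
    entry["mac"] = mac           # track the most recently observed MAC for the address
    entry["last_seen"] = seen_at
    return inventory

# Hypothetical observations; a real deployment would feed these from a sniffer.
inventory = {}
now = datetime.now(timezone.utc)
update_inventory(inventory, "192.0.2.10", "00:11:22:33:44:55", now)
update_inventory(inventory, "192.0.2.11", "66:77:88:99:aa:bb", now)

print(sorted(inventory))  # the asset list grows as traffic is observed
```

The point of the sketch is the shape of the process, not the code: the inventory is never "done," it simply reflects whatever has been seen on the wire, continuously.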

Why Digital Security? Wednesday, June 13, 2007 Today I received the following email: Hi Richard, (Sorry for my bad English, i speak French...) I'm one of your blog readers and i have just a little question about your (Ex) job, Consultant in IT security... I'm very interested by IT security and i want to get a degree in this. In France, we have to write "motivation letter" to show why we are interested by the diploma. That's why i write to you to know a few things that you do in your job, what is interesting and what is boring ?? I figured I would say a few words here and then let all of you blog readers post your ideas too.

Likes:

- Constant learning
- Defending victims from attackers -- some kind of desire for justice
- Community that values learning (but not necessarily education -- there's a difference)
- Working with new technology
- Financially rewarding for those with valuable skills

Dislikes:

- Constantly changing landscape requires specialization and potential loss of big picture
- Most attackers remain at large, meaning as a whole "security" never improves
- Learning is being increasingly rated by the string of letters after one's name
- Family system administration, especially for user applications on Windows that I have never seen; "But you work with computers!"
- Charlatans, especially with letters and/or security clearances, rotating around the Beltway making lots of money without delivering value beyond a "filled billet"

What do you think? https://taosecurity.blogspot.com/2007/06/why-digital-security.html Commentary I did not remember writing this post and it’s a useful window into my mindset in mid-2007. I was about to start at General Electric in July 2007, which was an amazing experience.

US Needs Cyber NORAD Thursday, September 13, 2007 In addition to the previous Country v China stories I've been posting, consider the following excerpts. First, from China’s cyber army is preparing to march on America, says Pentagon: Jim Melnick, a recently retired Pentagon computer network analyst, told The Times that the Chinese military holds hacking competitions to identify and recruit talented members for its cyber army. He described a competition held two years ago in Sichuan province, southwest China. The winner now uses a cyber nom de guerre, Wicked Rose. He went on to set up a hacking business that penetrated computers at a defence contractor for US aerospace. Mr Melnick said that the PLA probably outsourced its hacking efforts to such individuals. “These guys are very good,” he said. “We don’t know for sure that Wicked Rose and people like him work for the PLA. But it seems logical. And it also allows the Chinese leadership to have plausible deniability.” On one side we have the Chinese military organizing hackfests and sending work to the best. On the other side we have defense contractors often selected as the lowest bidder. Worse, when those contractors are actually clueful and resourceful (like Shawn Carpenter), they are fired. From Cyberspies Target Silent Victims: The U.S. Department of Defense confirmed last week that cyberspies have been sifting through some government computer systems. What wasn't said: The same spies may have been combing through the computer systems of major U.S. defense contractors for more than a year. "There's been a massive, broad and successful series of attacks targeting the private sector," says Alan Paller, director of the SANS Institute, a Bethesda, Md.-based organization that hosts a response
center for companies with cybersecurity crises. "No one will talk about it, but companies are creating a frenzy trying to stop it..." None of the companies have publicly reported data breaches, though many have informed the Department of Defense. "Reporting an event like this would kill your stock price," says a source close to the military contractor industry who asked not to be named... When Carpenter warned government officials in the Army and the FBI of his findings in 2004, he was fired. Sandia officials declined to comment on any subject relating to the Titan Rain hackings. Carpenter says his former employer's attempts to keep the incident quiet are typical.

In China as Victim I noted the following: Lou said the electronic espionage against China has met with success. It therefore needs to be addressed by President Hu Jintao's government, he added, with additional investment in computer security and perhaps formation of a unified information security bureau.

That's China saying they need a high-level, concentrated group to protect Chinese assets. On what does the US rely? Apparently, the Department of Homeland Security and an assistant secretary for cyber-security and telecommunications. Let's find this person on the DHS organizational chart. Missed the assistant secretary for cyber-security and telecommunications? That's because he's not even on the top chart. He's working for the Under Secretary for National Protection Programs, whose peers include an Under Secretary for Management and an Under Secretary for Science and Technology. Seriously. The more I think about it, the more of a disgrace this is. Consider: every single government agency uses computers. Not only that, every single US company uses computers. (If they don't, I doubt they qualify as a company!) We often hear that the private sector should protect itself, since the
private sector owns most of the country's critical infrastructure. Using the same reasoning, I guess that's the reason why Ford defends the airspace over Dearborn, MI; Google protects Mountain View, CA, and so on. No? (By the way, I know that the US through the FAA "owns" the airspace over the country, but it's literally not the airspace itself that matters; it's what is underneath -- people, buildings, resources, and so on.) I plan to develop this thought further, but for now I take comfort in knowing the Air Force Cyber Command is coming. Remember the Air Force started as a small Aeronautical Division to take "charge of all matters pertaining to military ballooning, air machines and all kindred subjects" on 1 August 1907. 100 years later, Cyber Command is coming. Hopefully a "Cyber NORAD" might follow. Remember, monitor first. We might eventually get a new Cyber Force focused solely on defending the digital realm. Stay tuned. https://taosecurity.blogspot.com/2007/09/us-needs-cyber-norad.html Commentary This post referenced my series of “China v ‘country X’” posts that I wrote throughout 2007 as the world went through its second wake-up call concerning Chinese intrusions. The first happened at the beginning of the decade and the third happened around the APT1 report of 2013. I include those stories in the chapters on China and the APT elsewhere in these volumes.

Controls Are Not the Solution to Our Problem Monday, November 26, 2007 If you recognize the inspiration for this post title and graphic, you'll understand my ultimate goal. If not, let me start by saying this post is an expansion of ideas presented in a previous post with the succinct and catchy title Control-Compliant vs Field-Assessed Security. In brief, too many organizations, regulators, and government agencies waste precious time and resources devising and auditing "controls," regardless of the effect these controls have or do not have on security. They are far too input-centric; they should become more output-aware. They obsess over recording conditions they believe may be helpful while remaining ignorant of the "score of the game." They practice management by belief and disregard management by fact. Let me provide a few examples from one of the canonical texts used by the control-compliant crowd: NIST Special Publication 800-53: Recommended Security Controls for Federal Information Systems (.pdf). The following is an example of a control, taken from page 140.

SI-3 MALICIOUS CODE PROTECTION

Control: The information system implements malicious code protection.

Supplemental Guidance: The organization employs malicious code protection mechanisms at critical information system entry and exit points (e.g., firewalls, electronic mail servers, web servers, proxy servers, remote-access servers) and at workstations, servers, or mobile computing devices on the network. The organization uses the malicious code protection mechanisms to detect and eradicate malicious code (e.g., viruses, worms, Trojan horses, spyware) transported: (i) by electronic mail, electronic mail attachments, Internet accesses, removable media (e.g., USB devices, diskettes or compact
disks), or other common means; or (ii) by exploiting information system vulnerabilities. The organization updates malicious code protection mechanisms (including the latest virus definitions) whenever new releases are available in accordance with organizational configuration management policy and procedures. The organization considers using malicious code protection software products from multiple vendors (e.g., using one vendor for boundary devices and servers and another vendor for workstations). The organization also considers the receipt of false positives during malicious code detection and eradication and the resulting potential impact on the availability of the information system. NIST Special Publication 800-83 provides guidance on implementing malicious code protection. Control Enhancements: (1) The organization centrally manages malicious code protection mechanisms. (2) The information system automatically updates malicious code protection mechanisms. At first read one might reasonably respond by saying "What's wrong with that? This control advocates implementing anti-virus and related antimalware software." Think more clearly about this issue and several problems appear. Adding anti-virus products can introduce additional vulnerabilities to systems which might not have exposed themselves without running antivirus. Consider my post Example of Security Product Introducing Vulnerabilities if you need examples. In short, add anti-virus, be compromised. Achieving compliance may cost more than potential damage. How many times have you heard a Unix administrator complain that he/she has to purchase an anti-virus product for his/her Unix server simply to be compliant with a control like this? The potential for a Unix server (not Mac OS X) to be damaged by a user opening an email through a client while logged on to the server (a very popular exploitation vector on a Windows XP box) is practically nil.

Does this actually work? This is the question that no one asks. Does it really matter if your system is running anti-virus software? Did you know that intruders (especially high-end ones most likely to selectively, stealthily target the very .gov and .mil systems required to be compliant with this control) test their malware against a battery of anti-virus products to ensure their code wins? Are weekly updates superior to daily updates? Daily to hourly?

The purpose of this post is to tentatively propose an alternative approach. I called this "field-assessed" in contrast to "control-compliant." Some people prefer the term "results-based." Whatever you call it, the idea is to direct attention away from inputs and devote more energy to outputs. As far as mandating inputs (like every device must run anti-virus), I say that is a waste of time and resources. I recommend taking measurements to determine your enterprise "score of the game," and use that information to decide what you need to do differently. I'm not suggesting abandoning efforts to prevent intrusions (i.e., "inputs"). Rather, don't think your security responsibilities end when the bottle is broken against the bow of the ship and it slides into the sea. You've got to keep watching to see if it sinks, if pirates attack, how the lifeboats handle rough seas, and so forth. These are a few ideas.

1. Standard client build client-side survival test. Create multiple sacrificial systems with your standard build. Deploy a client-side testing solution on them, like a honeyclient. (See The Sting for a recent story.) Vary your defensive posture. Measure how long it takes for your standard build to be compromised by in-the-wild Web sites, spam, and other communications with the outside world.

2. Standard client build server-side survival test. Create multiple sacrificial systems with your standard build. Deploy them as a honeynet. Vary your defensive posture. Measure how long it takes for your standard build to be compromised by malicious external traffic from the outside world -- or better yet -- from your internal network.

3. Standard client build client-side penetration test. Create multiple sacrificial systems with your standard build. Conduct my recommended penetration testing activities and time the result.

4. Standard client build server-side penetration test. Repeat number 3 with a server-side flavor.

5. Standard server build server-side penetration test. Repeat number 3 against your server build with a server-side flavor. I hope you don't have users operating servers as if they were clients (i.e., browsing the Web, reading email, and so forth). If you do, repeat this step and do a client-side pen test too.

6. Deploy low-interaction honeynets and sinkhole routers in your internal network. These low-interaction systems provide a means to get some indications of what might be happening inside your network. If you think deploying these on the external network might reveal indications of targeted attacks, try that. (I doubt it will be that useful due to the overall attack noise, but who knows?)

7. Conduct automated, sampled client host integrity assessments. Select a statistically valid subset of your clients and check them using multiple automated tools (malware/rootkit/etc. checkers) for indications of compromise.

8. Conduct automated, sampled server host integrity assessments. Self-explanatory.

9. Conduct manual, sampled client host integrity assessments. These are deep-dives of individual systems. You can think of it as an incident response where you have not had indication of an incident yet. Remote IR tools can be helpful here. If you are really hard-core and you have the time, resources, and cooperation, do offline analysis of the hard drive.

10. Conduct manual, sampled server host integrity assessments. Self-explanatory.

11. Conduct automated, sampled network host activity assessments. I
questioned adding this step here, since you should probably always be doing this. Sometimes it can be difficult to find the time to review the results, however automated the data collection. The idea is to let your NSM system see if any of the traffic it sees is out of the ordinary based on algorithms you provide.

12. Conduct manual, sampled network host activity assessments. This method is more likely to produce results. Here a skilled analyst performs deep individual analysis of traffic on a sample of machines (client and server, separately) to see if any indications of compromise appear.

In all of these cases, trend your measurements over time to see if you see improvements when you alter an input. I know some of you might complain that you can't expect to have consistent output when the threat landscape is constantly changing. I really don't care, and neither does your CEO or manager! I offer two recommendations.

Remember Andy Jaquith's criteria for good metrics, simplified here:

- Measure consistently.
- Make them cheap to measure. (Sorry Andy, my manual tests violate this!)
- Use compound metrics.
- Be actionable.

Don't slip into thinking of inputs. Don't measure how many hosts are running anti-virus. We want to measure outputs.

We are not proposing new controls. Controls are not the solution to our problem. Controls are the problem. They divert too much time, resources, and attention from endeavors which do make a difference. If the indications I am receiving from readers and friends are true, the ideas in this post are gaining traction. Do you have other ideas? https://taosecurity.blogspot.com/2007/11/controls-are-not-solution-to-our.html Commentary Controls and input-centric thinking are so difficult to escape. They are the black hole of security. Everyone falls toward them, and if you cross their event horizon your security posture is doomed. Think in terms of outcomes and watch your program thrive.
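The "trend your measurements over time" advice lends itself to a small sketch: given survival times (hours until compromise) from repeated runs of the tests above, compare the mean before and after changing an input. The numbers and the specific input change are hypothetical:

```python
from statistics import mean

def survival_trend(before, after):
    """Compare mean time-to-compromise (hours) across two test campaigns."""
    return {
        "before": mean(before),
        "after": mean(after),
        "delta": mean(after) - mean(before),
    }

# Hypothetical honeyclient survival times, in hours, for the standard build.
baseline = [4.0, 6.5, 5.5]    # before altering the defensive posture
hardened = [9.0, 12.0, 10.5]  # after the change

trend = survival_trend(baseline, hardened)
print(f"Mean survival changed by {trend['delta']:.1f} hours")
```

This is an output metric in the post's sense: it says nothing about what software is installed, only how long the build actually survives contact with the threat.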

Answering Reader Questions Friday, May 16, 2008 Thanks to the patient readers who submitted questions while I've been on the road for work. I'd like to post a few questions here, along with my answers. Identities of those asking questions have been preserved unless noted otherwise, as is my policy. How does something like Sguil relate to something like OSSIM? I find that I would love to use Sguil for analysis, but it doesn’t deal with HIDS, and I feel if I run both on the same network, I am overlapping a bit of things, as well as using a bit of resources redundantly? I see Sguil and OSSIM as different products. Sguil is primarily (and currently) an analyst console for network security monitoring. OSSIM (from what I have seen, and from what I have heard speaking directly with developers) is more of an interface to a variety of open source tools. That sounds similar but it is somewhat different. I don't see a reason why you have to choose between the two. I think it is important to realize that although OSSIM has the term "SIM" in the name, it's really not a SIM. Most people consider a SIM to be a system that interprets logs from a variety of sources, correlates or otherwise analyzes them, and presents more intelligence information to the analyst. OSSIM doesn't really accept that much from other log sources; it relies on output from other open source tools. I am sure I am going to hear from a bunch of satisfied OSSIM users who claim I am ignorant, but my group decided not to use OSSIM because it was less SIM than we needed and too much of a portal to open source applications. If you want that, it's still helpful. In your book you stated that Sguil is really used for real-time monitoring, but what happens when you are a small company, and don’t employ 24x7 staff? Does the analyst come in the next morning and work thru alerts that come thru the previous evening?

That is one model. In another model, you set Sguil to auto-categorize all alerts, and then query for something of interest. Sguil was originally built for a 24x7 SOC environment, but you don't necessarily have to use it that way. I have been [in a new job as an analyst at a] MSSP for 3-weeks and have formed an opinion that slightly mirrors your points about MSSP's being ticket-shops; in my opinion, MSSP, and specifically the division that I am in is like a glorified and/or specialized help/service desk. We get tickets, we fix things, we close tickets, repeat, etc. This is like a help desk except instead of dealing with say desktops and servers, we are dealing with firewalls and IDS'. I had a conversation with a friend who helped land me the job this afternoon and one of the things that he pointed out to me was that I would have to get used to the fact that our customers (government and commercial) are not interested in situational awareness or tactical traffic analyses, or NSM in general. In fact, to my company NSM is a product by [insert vendor name here]. :) This is funny, but true. Please don't get the impression that I am complaining, I willingly chose to work for this company and am happy to have the opportunity to learn new technologies (different firewalls, different IDS') from a different perspective and within many disparate networks. It's just that I have come to the conclusion that all Information Security is NOT Information Warfare and am not sure how to cope with this. I am a packet-head and an analyst at heart, but as I have been told, our customer's do not place the same premium on understanding their traffic that I do, nor does my company by that extension because it is not a salable service. Wow, doesn't that question just punch you in the gut? I feel your pain. MSSPs exist to make money, and differentiation by the real issue -- detecting and ejecting intruders -- doesn't appear on the balance sheet. 
If anyone disagrees, re-read MSSPs: What Really Matters and read near the bottom: As Bamm Visscher asks, "Is your MSSP just a 'Security Device Management Provider'?" (SDMP?) I have anecdotal evidence from a variety of sources that many companies
are taking in-house some of the security services they previously outsourced. Some are doing so because they are getting little to no value for their MSSP dollar. Others realize that almost all of the MSSPs are just SDMPs, and the customer demands someone who has a better chance of understanding their business and actually improving security. Those who retain MSSPs are usually checking PCI or other regulatory boxes or not clued in to the fact most MSSPs are terrible. A very small minority is happy with their MSSP, and I can probably name the company or two providing the service. (Please don't ask for their names.) Some customers are hoping everything ends up in the cloud anyway, so security becomes someone else's problem! (Sorry!) To specifically address your concerns -- I would do the best you can with your situation, but if you decide you really aren't happy, I would look for alternatives. Either find a MSSP that operates how you would like it to, or find a company or agency with a good in-house operation. Now that you've seen how a ticket shop operates it's easy to identify one in the future. Do you know if there has been any progress with FreeBSD 7.0 in coupling up Snort inline with a bridge-mode FreeBSD machine? I think that this would be a match made in heaven. The last time I did research on this, it wasn't yet possible because the kernel can't handle divert sockets. Sorry, I have not tried this recently. Are you handling AV issues? I wanted to know if you had tied that into your IR plan and any lessons learned you might be able to share. Right now our AV is handled by the systems team but when they get an alert "IF" they look at it they typically re-run a scan or maybe some spyware tools and call it good, no traffic monitoring, no application base lining, typically my team will come along after the fact when we see traffic that falls out of spec and question what's happened recently on the box. 
I have lobbied to now pull this into my team (Network Ops and Security), increase headcount, and I have an idea on how to handle it but wanted to see if you've already dealt with it.

Great question. Ideally antivirus is integrated into an overall Security Operations Center, since AV is both a detection and containment mechanism. However, AV often seems to be run by separate groups (a dedicated AV team, or the end user desktop team, or another batch of people). I recommend integrating access to the AV console into your own processes. Either formally establish a process to involve your incident responders when notified by the AV team of a situation they realize is problematic, or offer support when you observe troublesome behavior on the AV console. Preferably the AV team escalates suspected compromises to the IRT, but you may have to be a little more aggressive if you want to compensate for lack of cooperation between the teams. https://taosecurity.blogspot.com/2008/05/answering-reader-questions.html Commentary Many of these questions could have been written yesterday!

Getting the Job Done Sunday, August 17, 2008 As an Air Force Academy cadet I was taught a training philosophy for developing subordinates. It used a framework of Expectations - Skills - Feedback - Consequences - Growth. This model appears in documents like the AFOATS Training Guide. In that material, and in my training, I was taught that any problem a team member might encounter could be summarized as a skill problem or a will problem. In the years since I learned those terms, and especially while working in the corporate sector, I've learned those two limitations are definitely not enough to describe challenges to getting the job done. I'd like to flesh out the model here. The four challenges to getting the job done can be summarized thus:

- Will problem. The party doesn't want to accomplish the task. This is a motivation problem.
- Skill problem. The party doesn't know how to accomplish the task. This is a methods problem.
- Bill problem. The party doesn't have the resources to accomplish the task. This is a money problem.
- Nil problem. The party doesn't have the authority to accomplish the task. This is a mojo problem.

I have encountered plenty of roles where I am motivated and technically equipped, but without resources and power. I think that is the standard situation for incident responders, i.e., you don't have the evidence needed to determine scope and impact, and you don't have the authority to change the situation in your favor. What do you think? https://taosecurity.blogspot.com/2008/08/getting-job-done.html Commentary Will, skill, bill, or nil -- what are the problems in your organization?

Is Experience the Only Teacher in Security? Saturday, September 27, 2008 Another reader asked me this question, so I thought I might share it with you: I'm really struggling with... how to communicate risk and adequate controls to the business managers at my employer... To put it bluntly, this is the first time the company has really looked at it [security] at all and they don't really want to deal with it. They have to because of the business we are in though... So while I've got a blazing good example of what doesn't work, I still don't know what does. What are some good resources that you have found in communicating security (or other) risks to business? Are there books, blogs or authors that you would recommend? I've written about this problem in the past, in posts like Disaster Stories Help Envisage Risks and Analog Security is Threat-Centric. I'll be speaking about this problem in my SANS Forensics Summit keynote next month, with the theme of "speaking truth to power." Throughout my career, I've found few managers care about security until they've personally experienced a digital train wreck. Until a manager has had some responsibility for explaining an incident to his or her superiors, the manager has no real frame of reference to understand security. For me, this is a strength of the incident response community. We are absolutely the closest parties to ground truth in security, because we are the ones who manage security failures. The only other party as close to the problem is the adversary, and he/she isn't going to share thoughts on the issue. Therefore, I recommend planning your security improvements, whatever they may be, then waiting for the right moment. Of course you can tell management that you have concerns, but don't be surprised when they ignore
you. When a digital train wreck happens in your enterprise, step forward with your plan and say "I have an answer." In most intrusions managers want someone to tell them everything will be ok, especially when it's wrapped in a documented course of action. Be the person with the plan and you'll have greater success moving your security projects forward. Does anyone else have suggestions for this blog reader? https://taosecurity.blogspot.com/2008/09/is-experience-only-teacher-in-security.html Commentary I’ve heard that security is like driving the highway in Los Angeles in a pre-pandemic world. You’re crawling most of the time, but when you see an opening in the traffic you drive like crazy! That’s because you know where you’re going and how to get there.

Why Blog? Saturday, September 27, 2008 Recently a group of managers at work asked me to explain why I blog. This is a very good question, because the answer might not be intuitively obvious. Perhaps by sharing my rationale here, I might encourage others to blog as well.

Blogging organizes thoughts. Recently I nodded in agreement when I heard a prolific author explain why he writes. He said the primary purpose for writing his latest book was to organize his thoughts on a certain topic. Writing an entire book is too much for most of us, but consolidating your ideas into a coherent statement is usually sufficient.

Blogging captures and shares thoughts. Once your thoughts are recorded in electronic form, you can refer to them and point others to them. If I am asked for an opinion, I can often point to a previous blog post. If the question is interesting enough, I might write a new post. That satisfies this reason and the previous one.

Blogging facilitates public self-expression. This is a positive aspect of the modern Web, if approached responsibly. Many social networking sites contain information people would not want to preserve for all time, but a carefully nurtured blog can establish a positive presence on the Web. If you blog on certain topics that interest me, I am going to recognize you if you contact me.

Blogging establishes communities. The vast majority of the blogs I read are professionally-oriented (i.e., digital security). I follow blogs of people handling the same sorts of problems I do. I often meet other bloggers at conferences and can easily speak with them, because I've followed their thoughts for months or years. Book authors share a similar trait, although books are a much less fluid medium.

Blogging can contribute original knowledge faster than any other medium. Blogging is just about the easiest way to contribute knowledge to the global community that I can imagine. It costs nothing, requires only literacy, is easily searchable, and can encourage feedback when comments are supported.

Why do you blog? And if you don't, why not? https://taosecurity.blogspot.com/2008/09/why-blog.html Commentary And now, years after writing these posts, I know what I was thinking back then, and can share it with you, dear reader.

Defining the Win Monday, November 24, 2008 In March I posted Ten Themes From Recent Conferences [reproduced elsewhere in these volumes], which included the following: Permanent compromise is the norm, so accept it. I used to think digital defense was a cycle involving resist -> detect -> respond -> recover. Between “recover” and the next attack, there would be a period where the enterprise could be considered "clean." I've learned now that all enterprises remain "dirty" to some degree, unless massive and cost-prohibitive resources are directed at the problem. We cannot stop intruders, only raise their costs. Enterprises stay dirty because we cannot stop intruders, but we can make their lives more difficult. I've heard of some organizations trying to raise the $ per MB that the adversary must spend in order to exfiltrate/degrade/deny information. Since then I've grappled with this idea of how to define the win. If you used to define the win as detecting and ejecting all intruders from your enterprise, you are going to be perpetually disappointed (unless your enterprise is sufficiently small). Are there alternative ways to define the win if you have to accept permanent compromise as the norm? The following are a few ideas, credited where applicable. The first two come from my post Intellectual Property: Develop or Steal [also reproduced elsewhere in these volumes], but I repost them here for easy reference. Information assurance (IA) is winning, in a broad sense, when the cost of stealing intellectual property via any means is more expensive than developing that intellectual property independently. Nice idea, but probably too difficult to measure. IA is winning, in a narrow sense, when the cost of stealing intellectual property via digital means is more expensive than stealing that data via non-technical means (such as human agents placed inside the organization). Still difficult to measure, but might be estimated using red teaming/adversary simulation/penetration testing. IA is winning when detection operations can see the adversary's actions. This relates to Bruce Schneier's classic advice to Monitor First. The more mature answer is next. IA is winning when incident responders can anticipate the adversary's next target. I credit Kevin Mandia with this idea. I like it because it shows that complex enterprises will always have vulnerabilities and will always be targeted, but a sufficiently mature detection and response operation will at least be able to guess the intruder's next move. You can even test this by keeping a track record. IA is winning when the time to detect and remediate has been reduced to B. Insert your own value there. You can track your progress from time A to time B. IA is winning when your enterprise security integrity assessments show less than D percent of your assets are compromised. You can track progress from C percent to D percent over time. This leads to the more mature version which follows. IA is winning when your enterprise intrusion debt is reduced to F. You can measure intrusion debt as you like and take steps to reduce it from E to F. Does anyone else have ideas on how to define the win? https://taosecurity.blogspot.com/2008/11/defining-win.html Commentary These are the tough choices that must be made when you defend large, complicated, high-value enterprises that are constantly targeted by high-end threat actors. This mindset was later called “assumption of breach.”
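The measurable "wins" in the post above (time to detect reduced from A to B; compromised assets reduced from C percent to D percent) lend themselves to simple trend tracking. The following is a minimal sketch, not from the original post; all figures, quarter labels, and function names are invented for illustration.

```python
# Illustrative sketch, not from the original post: tracking two of the
# "defining the win" metrics above. All figures here are invented.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Dwell times in days (intrusion start to remediation), per quarter.
dwell_times = {"Q1": [90, 120, 45], "Q2": [30, 60, 20]}

# Integrity assessments per quarter: (assets checked, assets compromised).
assessments = {"Q1": (200, 18), "Q2": (200, 9)}

def compromised_pct(quarter):
    """Percentage of assessed assets found compromised."""
    checked, compromised = assessments[quarter]
    return 100.0 * compromised / checked

def winning(period_a, period_b):
    """True if both trends improved from period A to period B."""
    faster_response = mean(dwell_times[period_b]) < mean(dwell_times[period_a])
    fewer_compromises = compromised_pct(period_b) < compromised_pct(period_a)
    return faster_response and fewer_compromises

print(winning("Q1", "Q2"))  # prints True: both trends improved
```

The point is not the arithmetic but the habit: pick concrete values for A, B, C, and D, record them over time, and let the trend, rather than the input spending, define the win.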

Advice to Bloggers Friday, January 30, 2009 Recently a blog reader asked two questions as he started his own new blog: 1. Do you think I should stick to just one topic? i.e. Digital Forensics? 2. Do you think blogging is a good way to learn more about a topic of interest or should you only blog about a topic you already know a lot about? I addressed some of these issues in my post Why Blog?, but I'll add the following. I recommend writing about a handful of topics, but stick to topics within a certain theme. For example, my blog covers "digital security and the practices of network security monitoring, incident response, and forensics." Although I love martial arts and ice hockey, I don't write about that here. I also do not address politics, family, religion, or any other non-technical issues in this forum. I believe blog readers prefer me to stay on my listed subjects; they can visit other sites for non-technical information. I believe it is ok to write about subjects that are outside your core expertise, but you need to warn the reader that you are a beginner. Do not presume to be an authority on a subject that is new to you. Tell the reader and let him or her be the judge. If you don't know a lot about a topic, but you want to solicit assistance, say that in your post. If you make a habit of discussing topics that are foreign to you, you will probably not be respected, however. I don't think many readers want to visit a blog that is constantly asking how to accomplish a task. Most readers want to learn something or see a new viewpoint, not be asked questions all the time. https://taosecurity.blogspot.com/2009/01/advice-to-bloggers.html

Commentary These are still the keys to writing a good blog, in my opinion. You can extend them to any medium.

How Much to Spend on Digital Security Sunday, June 14, 2009 A blog reader recently asked the following question: I recently accepted a position and was shocked to learn, I know this shouldn't have happened, that Information Security/Warfare is largely an afterthought even though this organization has had numerous break ins. Many of my peers have held their position for one or even two decades and are great people yet they are not proactively preparing for modern threat/attack vectors. I believe the main difference is that they are satisfied with the status quo and I am not. I have written a five-year strategic plan for IT security which I am now following with a tactical plan on how to get there. with respect to the tactical plan I was wondering what percentage of the IT budget you think an organization should allocate for their InfoSec programs? It would seem that, using Google, many people advocate somewhere between ten and twenty percent of the IT budget. I have no knowledge of our overall IT budget but I do know we aren't anywhere near ten percent. Additionally, how important is the creation and empowerment of a CISO in an organization? Many places still place security under the CIO which I have seen both good and bad examples of. Thank you for your time, it's much appreciated. Regarding the cost question: I don't think anyone should use a rule of thumb to decide how much an organization should spend on digital security. Some would disagree. If you read Managing Cybersecurity Resources [by Lawrence A. Gordon and Martin P. Loeb, 2005], the authors create some fairly specific recommendations, even saying "it is generally uneconomical to invest in cybersecurity activities costing more than 37 percent of the expected loss" (p 80). Of course, one could massage "expected loss" to be whatever

figure you like, so the 37% part tends to become irrelevant. When you try to define digital security spending as a percentage of an IT budget, you face an interesting issue. First you must accept that the value of the organization's information is the upper bound for any security spending. (In other words, don't spend more money than the assets are worth.) If you base security spending on IT spending, then the entire IT budget becomes the theoretical upper bound for the supposed value of the organization's information. If you arbitrarily decide to shrink the IT budget, following this logic, you are also shrinking the value of the organization's information. This situation holds even if you don't spend more than "37%" of the value of the organization's information on security. Clearly this doesn't make any sense. I have not met anyone with a really solid approach for justifying security spending. "Calculating risk" or "measuring ROI/ROSI" are all subjective jokes. All I can really offer are some guidelines that I try to follow. First, focus on outputs, not inputs. It doesn't matter how much you spend on security (inputs) if the organization is horribly compromised (outputs). Determining how compromised the enterprise is becomes the real priority. Second, like I said in cheap IT is ultimately expensive, "security is an IT problem, not a 'security' problem. The faster asset owners realize this and are held responsible for the security of their systems, the less intrusion debt will mount and the greater the chance that enterprise assets will survive digital earthquakes." Security teams don't own any assets, other than the infrastructure supporting their teams. Asset owners are ultimately responsible for security because they usually make the key decisions over the asset value and vulnerabilities in their assets.
The best you can do in this situation is to ask asset owners to imagine a scenario where assets A, B, and C are under complete adversary control, and could be rendered useless in an instant by that adversary, and then let them tell you the impact. If they say there is no impact, you should report that the asset is worthless and should be retired immediately. That will probably get the asset owners' attention and start a real conversation. Third, continue to tell anyone who will listen what you need to do your job, and what is lost as a result of not being able to do your job. Asset owners have a perverse incentive here, because the less they let the security team observe the score of the game (i.e., the security state of their assets), the less the security team is able to determine the security posture of the enterprise. You've got to find allies who are more interested in speaking truth to power than living in Potemkin villages. Regarding the CISO question: I believe the jury is out on where the CISO should sit. When reporting to the CTO and/or CIO, the CISO is one of many voices competing for attention. When working for the CTO and/or CIO, the position of the CISO probably reinforces the notion that the CTO and/or CIO somehow own the organization's information, and hence require security expertise from the CISO to secure it. However, I am developing a sense that the profit and loss (P/L) entities in the organization should be formally recognized as the asset owners. In that respect, the CISO should operate as a peer to the CTO and/or CIO. In their roles, the CTO and/or CIO would provide services to the asset owners, while the CISO advises the asset owners on the cost-benefit of security measures. Note that when I say "asset" I'm referring to the real information asset in most organizations: data. Platforms tend to be worth far less than the data they process. So, the CTO and/or CIO might own the platform, but the P/L owns the data. The CISO ensures the data processed by the CTO and/or CIO is kept as secure as possible, serving the asset owner's interests first. I would be interested in hearing other opinions on both of these questions. Thank you. https://taosecurity.blogspot.com/2009/06/how-much-to-spend-on-digital-security.html Commentary You can tell my experience working at GE was heavily influencing my thought here, in a good way.
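The "37 percent" figure quoted from Gordon and Loeb above reflects their result that economically rational security investment never exceeds 1/e (roughly 36.8 percent) of the expected loss. A minimal sketch of that ceiling, with an invented expected-loss figure, not from the original post:

```python
import math

def spending_ceiling(expected_loss):
    """Gordon-Loeb upper bound: investing beyond roughly 1/e
    (about 36.8%) of the expected loss is uneconomical."""
    return expected_loss / math.e  # 1/e is about 0.368

# Invented example: a breach expected to cost $1,000,000.
print(round(spending_ceiling(1_000_000)))  # prints 367879
```

As the post notes, the bound is only as useful as the "expected loss" estimate feeding it, which is exactly where the massaging happens.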

Partnerships and Procurement Are Not the Answer Wednesday, October 28, 2009 The latest Federal Computer Week magazine features an article titled “Cyber warfare: Sound the alarm or move ahead in stride?” I'd like to highlight a few excerpts. Military leaders and analysts say evolving cyber threats will require the Defense Department to work more closely with experts in industry... Indeed, the Pentagon must ultimately change its culture, say independent analysts and military personnel alike. It must create a collaborative environment in which military, civilian government and, yes, even the commercial players can work together to determine and shape a battle plan against cyber threats... Ok, that sounds nice. Everyone wants to foster collaboration and communication. Join hands and sing! “Government may be a late adopter, but we should be exploiting its procurement power,” said Melissa Hathaway, former acting senior director for cyberspace for the Obama administration, at the ArcSight conference in Washington last month... Hmm, "procurement power." This indicates to me that technology is the answer? Although one analyst praised the efforts to make organizational changes at DOD, he also stressed the need to give industry more freedom. “The real issue is a lack of preparedness and defensive posture at DOD,” said Richard Stiennon, chief research analyst at independent research firm IT-Harvest and author of the forthcoming book "Surviving Cyber War."

“Private industry figured this all out 10 years ago,” he added. “We could have a rock-solid defense in place if we could quickly acquisition through industry. Industry doesn’t need government help — government should be partnering with industry.” Hold on. "Private industry figured this all out?" Is this the same private industry in which my colleagues and I work? And there's that "acquisition" word again. Why do I get the feeling that technology is supposed to be the answer here? Industry insiders say they are ready to meet the challenge and have the resources to attract the top-notch talent that agencies often cannot afford to hire. That's probably true. Government civilian salaries cannot match the private sector, and military pay is even worse, sadly. Industry vendors also have the advantage of not working under the political and legal constraints faced by military and civilian agencies. They can develop technology as needed rather than in response to congressional or regulatory requirements or limitations. I don't understand the point of that statement. Where do military and civilian agencies go to get equipment to create networks? Private industry. Except for certain classified scenarios, the Feds and military run the same gear as everyone else. “This is a complicated threat with a lot of money at stake,” said Steve Hawkins, vice president of information security solutions at Raytheon. “Policies always take longer than technology. We have these large volumes of data, and contractors and private industry can act within milliseconds.” Ha ha. Sure, "contractors and private industry can act within milliseconds" to scoop up "a lot of money" if they can convince decision makers that procurement and acquisition of technology are the answer! Let's get to the bottom line. Partnerships and procurement are not the answer to this problem. Risk assessments, return on security investment, and compliance are not the answer to this problem. Leadership is the answer. Somewhere, a CEO of a private company, or an agency chief, or a military commander has to stand up and say: I am tired of the adversary having its way with my organization. What must we do to beat these guys? This is not a foreign concept. I know organizations that have experienced this miracle. I have seen IT departments aligned under security because the threat to the organization was considered existential. Leaders, talk to your security departments directly. Listen to them. They are likely to already know what needs to be done, or are desperate for resources to determine the scope of the problem and workable solutions. Remember, leaders need to say "we're not going to take it anymore." That's step one. Leaders who internalize this fight have a chance to win it. I was once told the most effective cyber defenders are those who take personal affront to having intruders inside their enterprise. If your leader doesn't agree, those defenders have a lonely battle ahead. Step two is to determine what tough choices have to be made to alter business practices with security in mind. Step three is for private sector leaders to visit their Congressional representatives in person and say they are tired of paying corporate income tax while receiving zero protection from foreign cyber invaders. When enough private sector leaders are complaining to Congress, the Feds and military are going to get the support they need to make a difference in this cyber conflict. Until then, don't believe that partnerships and procurement will make any difference. https://taosecurity.blogspot.com/2009/10/partnerships-and-procurement-are-not.html Commentary Kevin Mandia was the person who told me “the most effective cyber defenders are those who take personal affront to having intruders inside their enterprise.” He is still right today.

Everything I Need to Know About Leadership I Learned as a Patrol Leader Saturday, May 08, 2010 This post is outside the digital security realm, but I know a lot of my readers are team members and team leaders in their technical shops. I thought it might be useful to share a few thoughts on leadership. I don't claim to be the world's best leader but I've been thinking about the topic recently. I've participated in a lot of "leadership training" over the years, in and out of classrooms. A few examples: I've attended classes at GE's Crotonville, earned a master's degree from Harvard Kennedy School (supposed home to future political leaders), led a flight in the AFCERT, served as a cadet flight commander at USAFA, and captained my high school track team. As the years have progressed I find fewer of these experiences, especially formal training, to be novel or particularly helpful. For example, I believe the approaches I brought to my USAFA experience had less to do with USAFA and more to do with what I already knew. Tonight I decided to think back to where I first learned my "leadership style." I realized that everything I needed to know about leadership I learned as a Patrol Leader, as a Boy Scout. Patrols are the core unit of the troop; they are the unit within a troop that can conduct independent activities, although they collaborate with other patrols during troop-wide events. I spent about 10 years as a Scout (starting as a Cub) and finished (barely) with my Eagle award a few months before I turned 18. My troop first nominated me to become a Patrol Leader when I was about 12. I distinctly remember being a Patrol Leader twice. I led one patrol for my normal troop when I was younger, and then I was nominated to be a Patrol Leader for a regional troop from Massachusetts that attended the 1989 Scout Jamboree when I was 17. I cherished this second experience, because I was basically inactive during the ages of 15 and 16, due to high school. 
In both cases my patrol probably consisted of no more than 12 kids, usually younger but not always. So what did I learn as a Patrol Leader? Check out these Ten Tips for Being a Patrol Leader from Scouting.org:

1. Keep Your Word. Don't make promises you can't keep.
2. Be Fair to All. A good leader shows no favorites. Don't allow friendships to keep you from being fair to all members of your patrol. Know who likes to do what, and assign duties to patrol members by what they like to do.
3. Be a Good Communicator. You don't need a commanding voice to be a good leader, but you must be willing to step out front with an effective "Let's go." A good leader knows how to get and give information so that everyone understands what's going on.
4. Be Flexible. Everything doesn't always go as planned. Be prepared to shift to "plan B" when "plan A" doesn't work.
5. Be Organized. The time you spend planning will be repaid many times over. At patrol meetings, record who agrees to do each task, and fill out the duty roster before going camping.
6. Delegate. Some leaders assume that the job will not get done unless they do it themselves. Most people like to be challenged with a task. Empower your patrol members to do things they have never tried.
7. Set an Example. The most important thing you can do is lead by example. Whatever you do, your patrol members are likely to do the same. A cheerful attitude can keep everyone's spirits up.
8. Be Consistent. Nothing is more confusing than a leader who is one way one moment and another way a short time later. If your patrol knows what to expect from you, they will more likely respond positively to your leadership.
9. Give Praise. The best way to get credit is to give it away. Often a "Nice job" is all the praise necessary to make a Scout feel he is contributing to the efforts of the patrol.
10. Ask for Help. Don't be embarrassed to ask for help. You have many resources at your disposal. When confronted with a situation you don't know how to handle, ask someone with more experience for some advice and direction.

You don't need an MBA now, aside from some classes on financial statements. I'd also venture that many MBA classes don't cover these 10 points. I remember being particularly keen on patrol spirit: Patrol spirit is the glue that holds the patrol together and keeps it going. Building patrol spirit takes time, because it is shaped by a patrol's experiences—good and bad. Often misadventures such as enduring a thunderstorm or getting lost in the woods will contribute much in pulling a patrol together. Many other elements also will help build patrol spirit. Creating a patrol identity and traditions will help build each patrol member's sense of belonging. I remember working on our patrol flag and being proud of our new identity. Never mind that we were "Wolverines" (yes, straight out of Red Dawn) but our flag had a panther or cougar on it. (Blame the T-shirt shop for not having a "wolverine" transfer.) We put our patches and name on that thing and that's all that mattered. When I was about 14 my troop nominated me to become Senior Patrol Leader, which is the top boy leader. Unfortunately, it's like a management position, because while you lead the troop most of the activities happen at the patrol level. You end up being more of an intermediary between the adult leaders and the Patrol Leaders. It's an important job but I remember missing having my own patrol. That's one reason I was glad to get a Patrol Leader job with the regional troop attending the Jamboree in 1989. My take-away from this post is to remember the 10 points outlined above when I work with my current team. It's been over 20 years since I left Scouting, but the lessons I learned there have proven to be timeless and enduring. https://taosecurity.blogspot.com/2010/05/everything-i-need-to-know-about.html Commentary This is one of my top 10 favorite all-time posts. It is as relevant today as when I wrote it in 2010, and when I learned the lessons in the 1980s. The only addition would be to the “give praise” advice: as you might have heard elsewhere, “praise in public, punish in private.” I would not even use the term “punish” as it is derogatory. Rather, critique behavior, not the person, in private -- never in front of the team or others.

Stop Killing Innovation Monday, November 22, 2010 I hear and read a lot about how IT is supposed to innovate to enable "the business." Anytime I see "IT" in one part of a sentence and "the business" in another, a little part of me dies. Somewhere there is a Nirvana where "thought leaders" understand that there is no business without IT, that IT is as part of the business as the sales person or factory worker or janitor, and that IT would be better off not constantly justifying its existence to "the business." But I digress. I want to address the "innovation" issue in this post. CIO magazine recently published an interview with Vinnie Mirchandani titled Taking Business Risks With Your IT Budget. I liked what Mr Mirchandani had to say, although I'm going to omit his multiple references to "cloud." Instead, consider how he sees innovation in IT: More [CIOs] want to be [innovators], but organizations don’t let them... In the 1980s, we talked about IT as a competitive advantage... In the 1990s, we didn’t hear much of that at all, and IT started reporting to CFOs. In the early 2000s, the CFO made IT a compliance function for auditing and security. We’ve beaten the innovation out of CIOs at many companies. We want them to be risk mitigators, not innovators. People are afraid to be associated with any failure. They buy IT from vendors that are safe choices. They know they’re overspending, yet they do it anyway... Mr Mirchandani doesn't say this, but he could have also mentioned that many managers expect CIOs to be "productivity engines," meaning they inherently shrink their budget every year. This drives cost reduction as the primary goal for an IT shop -- not innovation. It's like expecting the business development team to concentrate on decreasing the amount of money spent

per new customer acquired, while not caring so much about the quantity or quality of the new customers -- if any! So what to do? The best thing they could do is get out from under the CFO. Go to your CEO and say, “I want to report to you.” Make sure the CFO doesn’t stand in the way. Some CIOs will get fired for doing that. Others will get a chance... Cost pressure isn't limited to those who only report to the CFO, but he doesn't address that issue. The shocking thing about corporate IT is that without realizing it, 85 percent to 90 percent of the IT spend is with a vendor, including outsourcers and the staff you buy from them... When you’re spending 90 percent of your money with a vendor, you have only a sliver left for [internal] talent — yet it’s with your own internal talent that you can innovate. There’s very little left for CIOs to innovate with. The more progressive CIOs are saying they’ve overdone it with outsourcing and are starting to hire their own enterprise architects and business analysts and other strategic resources. To me this is the crux of the issue. Businesses cannot outsource innovation. Businesses can crush innovation pretty easily though. I found one comment he made about the cloud to be very interesting: CIOs resist it. It’s not secure, they say. It’s not always available. CIOs say cloud vendors go down too often. I know CIOs who haven’t run a full disaster-recovery drill for years and turn around and say that the cloud isn’t production-ready. So, my message to readers is this: if cost-out, five nines uptime, outsourced workforces, and other failed strategies are your goal, forget innovation. If you want innovation to thrive, try considering the alternatives. https://taosecurity.blogspot.com/2010/11/stop-killing-innovation.html Commentary There are two types of IT organizations. One cost-optimizes, and the other innovates. Cost-optimization is best for mature processes that would not likely benefit from any change. Innovation is appropriate everywhere else.

All Reading Is Not Equal or Fast Thursday, March 31, 2011 Four years ago I posted Reading Tips, where I offered some ideas on how to read technical books. Recently I've received emails and questions via Twitter on the same subject. In this post I'd like to offer another perspective. Here I will introduce different "types of reading." In other words, I don't see all reading as equal, and what some people might call "reading," I don't consider to be reading at all! After reading this post you may find you can adopt one or more (or really all) methods in your own knowledge journey. The key to this post is to recognize that different types of reading exist, and you have to decide how you are going to approach a book, article, or other printed resource. My list follows. Proofreading is a very intense activity where the reader scrutinizes every aspect of a book. The reader pays attention to technical accuracy, grammar, production value (quality of screen captures, etc.) and all other customer-facing elements. This is usually a paid activity because it can be very demanding and time-consuming! I doubt most people find themselves in this situation, but I have been hired in the past to do this sort of work. Reading for correctness is a subset of proofreading where the reader focuses on the accuracy of the written material. For example, is the author correct when he says the TCP three-way handshake (TWH) is SYN ACK -> SYN ACK -> ACK? Wrong! (True story.) Here the reader is trying to see if the author knows what he is talking about. I usually enter this mode when I smell blood in the water. In other words, when I encountered the wrong TWH in a book years ago, I continued hunting errors until I was mentally exhausted. This is an unpleasant form of reading reserved for error-prone books. Once an author proves he or she knows the material I usually don't enter this mode. I only read for correctness as preparation to write a book review of a technically inaccurate book. Memorization is another intense reading form, usually reserved for academic classes. If you've had to study for a biology test, you've probably read for memorization purposes. If reading for memorization, I will likely heavily mark the text and create independent, supplementary materials like flash cards. Yes, on real index cards! The act of writing the material helps activate other areas of the brain to memorize information. Thankfully I haven't had to do this sort of reading in years, or at least not regularly. I have had to memorize information for amateur radio license tests, and I like creating flash cards for that information. Reading for learning is one of my common modes. With this approach I mark up a text (generally underlining or bracketing key terms and sections) and add comments or questions in margins. You might think the previous (and possibly the subsequent) reading modes are all about learning too, but simple learning for me is a more relaxed endeavor compared to memorization or correctness. The goal of learning is to be able to remember a subject, preferably well enough to at least describe it (but not teach it) to a third party. Reading for learning is as fast as you are able to absorb material. Reading for practice is closely related to learning, but it involves material that has an operational aspect.
For example, reading a programming book for practice, for me, involves trying the code examples, and even better trying the sample exercises. Practice is a more active form compared to learning. With learning I might be able to explain a pointer, but with practice I could write a program using one. Due to the hands-on manner, this is a slow form of reading. Reading for familiarization is another one of my more common reading forms. Here I am just trying to understand the author without necessarily planning to implement his or her concepts in real life. For example, I plan to read a book on Windows internals in April, but I do not plan to become a Windows kernel programmer. Reading for familiarization is probably the fastest way to read a technical book and still derive value from it. I may or may not mark up a book for familiarization purposes. Reading for reference starts to enter the gray area of possible "fake reading." If you only read a few sections or chapters of a book, have you really "read it?" For example, I've relied on the massive book Unix Power Tools, but because I've only referenced parts of it, I've never formally reviewed it. In my opinion, unless you heavily reference a book over time, you're not really reading at the level that warrants a review. Sampling is not reading. Top Amazon book reviewer frauds, this means you. Looking at the front cover, back cover, index, table of contents, and a few sample pages doesn't make you qualified to write a book review. The sorts of people who write more than a few book reviews per day are the fakers who consider "sampling" to be "reading." Reading for entertainment is not generally an approach I take with technical books! Sure, I enjoy them, but it's not like reading a classic fiction book. When reading a non-technical work, I tend to devour pages. I'm not sure if that's good or bad, but it's exceptionally fast since the emotional component engages additional brain components that would allow me to later describe the content should I wish to do so. How does reading for reviews fit in? In my view, as long as you're not "sampling" or reading for reference, any of the methods above qualify for writing a review. I suggest adding one component to your reading process to assist with review writing: keep a separate notebook and take notes as you read. Be very specific, e.g., "p 121 had this quote... etc." The more notes you take, the easier your review will be to write. So what does this mean if you want to know "how does Bejtlich read so many books?" The answer is to decide just how you want to read a book. When I read a book on C or Windows Internals in April, I will likely be reading for familiarization. I don't plan to be a C coder or Windows developer, but I do want to be conversant in certain topics. If I get really motivated I will turn to my PC and try some examples. (In fact, I'll probably do that for a book on coding for Windows, since I've never done that before.) What this means is that I, reading for familiarization, will probably read faster than someone else reading for practice, or memorization, or another time-consuming purpose. It all depends on your goal! On another day I may be reading for practice because I really want to know more about a topic, and then I'll be slower and more engaged. Incidentally, the more you read, the faster you will likely become. I don't think improving your reading is limited to children, either (although my daughters are pretty scary in terms of speed). Don't overdo it though. I would not be surprised to learn that chemical reactions are involved with reading, especially the more intense learning modes. In some cases I can feel my ability to absorb material shutting down, and at that point there is really no reason to continue. Take a break. I also advise against reading in bed, although this is a truly personal opinion. For some people, it works great. I don't make it past five minutes!
If you have questions on this post, please comment here. I have to moderate everything so it may take me a while to notice them. Thank you.

https://taosecurity.blogspot.com/2011/03/all-reading-is-not-equal-or-fast.html

Commentary

I once tried to read a highly-rated book on “how to read a book,” to see if I could learn anything else about the process. I abandoned it after a few chapters. This post is a good start for anyone trying to read with intention.

Answering Questions on Reading Tips

Friday, April 01, 2011

A few of you asked questions via Twitter or comments on my All Reading Is Not Equal or Fast post, so I'll try answering them here.

When you review a book that was less than perfect, or heck, even one that was perfect, could you also suggest some alternatives?

I'll be honest. That could be more work than I'm willing to do in a free forum like Amazon.com and this blog. Sometimes I mention alternatives because they're fresh in my mind and I like the other options. Always mentioning alternatives can be a real chore. If I wrote reviews for formal publication I would do that. Otherwise, I recommend subscribing to my Amazon.com review RSS feed and staying current with my reviews.

Where do you find the time to read the books? After family-time, work-time and sleep-time... at what time of the day do u read and how much time do you invest? I keep trying to read books but I read 2-3 pages per day at night... thanks!

When work is really busy, I probably read the most when on the road. I try to get to airports early, so I could have 30 to 60 minutes at the gate. On the flight I hardly ever watch the movie(s) or work on a computer. I pretty much always read a technical book or read The Economist. Planes are especially good for concentrating my attention because I have no alternative and no distractions! When I don't travel, I like to make some time early Saturday and Sunday mornings. I might also read a little at night, when my wife does the same.

Also, be prepared to read. Think one book will keep you busy on a trip? Take two. What if you're stuck at the airport, etc.? Whenever I take mass transit, I take something to read with me. The same goes for any time I expect to wait somewhere, like a doctor's office, before a meeting, and so on. These little stretches of time add up. And, if you face an unexpected delay, the little stretch becomes a reading-productive big stretch.

How do you maintain your list of books to read throughout the year? Do you look at upcoming books from specific publishers, books referenced in conferences and presentations? Does Amazon offer preorder recommendations and reviewer copies? How do you prioritize such a list?

Every once in a while I access this Amazon.com search page and do a keyword search for computer security terms, ordered by publication date. I review the results and concentrate on titles from the mainstream publishers like Pearson imprints (Addison-Wesley, etc., including Cisco Press), No Starch, Wiley, Osborne/McGraw-Hill, Apress, O'Reilly (including Microsoft Press), Wrox, and Syngress. I never read Auerbach (sorry guys). I pretty much avoid everything else. You have to publish something extraordinary to catch my attention otherwise. Examples include books on FreeBSD or other BSD topics. This method usually catches all books I care about in the next 9-15 months. I am rarely surprised, but that can happen! As a backup I subscribe to the blogs of major publishers who provide feeds on upcoming books (hint to publishers who do not do this -- you should!).

If I know and like the author already, I'll add the book to my Amazon.com Wish List immediately. I assign a priority based on how many months until the book will be published. I use Highest for published books and Lowest for books the farthest in the future.

Next I add books to my formal reading list. I usually have a queue stretching 9-12 months. My goal since probably 2000 or 2001 has been to finish a calendar year having read all books available on my list, but it's never happened! (Will this be the year??) My current list is more or less grouped by themes. I order the books based on the knowledge or familiarity I expect to need in order to understand each book. Hence, my current list shows books on C and Windows prior to books on exploit development and debugging Windows.

If a book seems really interesting, I'll put it on my schedule when the book is expected to be published. That may require rescheduling my reading. Not meeting my schedule can also force me to change the list. The toughest part of my process involves seeing a book with an interesting title and subject written by an unknown author. Sometimes I'll take a leap of faith and add the book to my Wish List and reading schedule. Other times I'll wait until I can flip through it in the store. I always keep my Wish List and reading schedule synchronized, so you won't see me Wishing a book but not having it planned for a certain month.

How do you tackle/review books that are only distributed digitally?

I have yet to encounter this problem, but I expect to at some point in 2012. I imagine by that time I'll just read the new book on an iPad or similar. I'll probably rely on note-taking on a separate piece of paper.

Thank you for your questions!

https://taosecurity.blogspot.com/2011/04/answering-questions-on-reading-tips.html

Commentary

I did not consider it until reading this post, but this was around the high point of my reading process. Eventually I reached a limit regarding what I was learning from books, and to some degree I believe technical publishing declined a bit. The exception was, and has been for years, the amazing books by Michael W. Lucas. If you need to understand a technical topic, the book he’s written on it should be your first stop!

Five Qualities of Real Leadership

Saturday, May 21, 2011

I've noticed coverage of "leadership" in IT magazines recently, but I'm not comfortable with the approach they take. For example, this editorial in CIO Magazine titled “Leadership Isn't a Fairy Tale After All” has "Personal attention and hands-on involvement can make good IT managers great IT leaders" as the subtitle. The text then says:

Our story spells out detailed tactics and practical ideas that CIOs can use to turn good IT managers into potentially great IT leaders... You’ll notice a strong thread of personal attention and hands-on involvement from the very top at the companies developing a strong bench of future leaders.

At REDACTED, for example, the CEO walks the walk on one-to-one leadership development by holding regular career conversations with his senior leadership team. His CIO, REDACTED, then makes sure that style of direct communication flows downward to the IT team. “If you don’t take time to talk to people about their professional development,” REDACTED notes, “it just doesn’t get done.”

REDACTED is another bright light in this realm with a program called The Lab, which fosters leadership development across various business units by bringing together 30 of them at a time to form strategic problem-solving teams. And at REDACTED, CIO REDACTED connects on a more personal level, emailing coffee-talk questions to her global staff every two weeks to get conversations going on everything from personal dreams to world views.

In my opinion, "regular career conversations" are a form of coaching, not leadership. Forming "strategic problem-solving teams" is management, not leadership. Finally, "emailing coffee-talk questions" is banter, not leadership.

So what are the five qualities of leadership, at least in my experience?

1. Leaders develop and execute a vision; they do not follow trends set by others.

2. Leaders embody strong core values and do not sacrifice those core values in order to advance their personal careers.

3. Leaders' actions demonstrate a focus on their people, not themselves, and that focus on the people takes care of the mission.

4. Leaders work to "make their people look good," rather than making the boss or themselves look good.

5. In the darkest hours, leaders put themselves personally at risk for the good of their team.

Notice the contrast between these five principles and the previous guidance. My focus is on actions, whereas the other ideas focus on communication. I do not discount the value of communication, but with leadership the deeds matter far more than the words. It is helpful to have coaching, mentoring, managing, and so forth, but these concepts are separate from leadership.

Have you seen the movie We Were Soldiers, based on the book by Lt Gen Hal Moore and Joe Galloway? Then-Lt Col Moore (portrayed by Mel Gibson) always landed with his air cavalry troops, in the first helicopter, and was the first person to step foot on adversary soil. He was also the last person to leave. As he wrote:

When we step on the battlefield, I will be The First Boots On and the Last Boots Off.

And he didn't just say it, he did it. That's a leader.

https://taosecurity.blogspot.com/2011/05/five-qualities-of-real-leadership.html

Commentary

I had dinner with Col Moore when he visited the Air Force Academy in late 1992 or sometime in 1993. He met with cadets who were history majors and then spoke to a larger group about leadership. I had no idea that I was in the presence of true greatness. I only learned his story after he provided me a signed copy of his book.

I Want to Detect and Respond to Intruders But I Don't Know Where to Start!

Monday, February 13, 2012

"I want to detect and respond to intruders but I don't know where to start!" This is a common question. Maybe you have a new security role in an organization, or a new service or business in your current organization, or some other situation where you want to find and stop attackers. However, you have no idea where to begin. Do you have the data you need? If not, what should you add? What do intrusions look like in the data you collect? These questions can be tough to answer from a purely theoretical perspective. I propose the following approach.

First, conduct a tabletop exercise where you simulate adversary actions. At each stage of the imagined attack, consider what evidence an intruder might create while taking actions against your systems. For example, if you are trying to determine how to detect and respond to an attack against a Web server, you're almost certainly going to need Web server logs. If you don't currently have access to those logs, you've just identified a gap that needs to be addressed. I recommend this sort of tabletop exercise first because you will likely identify deficiencies at low cost. Addressing them might be expensive, though.

Second, conduct a technical exercise where a third party simulates adversary actions. This is not exactly a pen test, but it is the sort of work a red team conducts. Ask the red team to carry out the attacks you previously imagined to determine if you can detect and respond to their activity. This should be a controlled action, not an "anything goes" event. You will see whether the evidence and processes you identified in the first step help you detect and respond to the red team activity. This step is more expensive than the previous one because you are paying for red team attention, and again fixes could be expensive.

Third, you may consider re-engaging the red team to carry out a less restrictive, more imaginative adversary simulation. In this exercise the red team isn't bound by the script you devised previously. See if your improved data and processes are sufficient. If not, work with the red team to devise better detection and response so that you can handle their attacks.

At this point you should have the data and processes to deal with the majority of real-world attacks. Of course some intruders are smart and creative, but you have a chance against them now given the work you just performed.

https://taosecurity.blogspot.com/2012/02/i-want-to-detect-and-respond-to.html

Commentary

This is an example of a post that I included to show that my viewpoint appears to have changed. I advised that organizations do “compromise assessments” when I offered consulting through TaoSecurity LLC in 2005-2007, so I am not sure why I focused here on data collection. My answer to this question would be, and has been for many years (although not in this post!), that the place to start is with a compromise assessment. Anytime a security leader becomes responsible for a new environment, his or her first responsibility is to determine if that environment is already compromised. If the local security team is not capable of performing that work, or cannot prove that it has been providing equivalent services in the recent past, then the new security leader should contract with a reputable third party specializing in compromise assessment. It’s like a professional athlete getting a physical before being traded to a team. You have to know your posture before you can make rational decisions. Without knowledge of your posture, you could begin a new security program with the adversary watching over your shoulder. He will adjust his behavior and presence to avoid detection while maintaining access to your resources. That is a recipe for failure!

Understanding Responsible Disclosure of Threat Intelligence

Wednesday, September 19, 2012

Imagine you're hiking in the woods one day. While stopping for a break you happen to find a mysterious package off to the side of the trail. You open the package and realize you've discovered a "dead drop," a clandestine method to exchange messages. You notice the contents of the message appear to be encoded in some manner to defy casual inspection. You decide to take pictures of the package and its contents with your phone, then return the items to the place you found them.

Returning home, you eagerly examine your photographs. Because you're clever, you eventually decode the messages captured in your pictures. Apparently a foreign intelligence service (FIS) is using the dead drop to communicate with spies in your area! You're able to determine the identities of several Americans working for the FIS, as well as the identities of their FIS handlers. You can't believe it. What should you do?

You decide to take this information to the world via your blog. You found the messages on your own, and you did the work to understand what they mean. If the press reads about your discovery, they'll likely take it farther. You consider going to the press first, but you decide that it won't hurt to drive traffic to your own blog first. You might even be able to launch that small private investigator practice you've always wanted!

After publishing your post, the press indeed notices, and publishes an expose featuring an interview with you. Several US intelligence agencies also notice. They had been monitoring the dead drop themselves for a year, and had been working a complex joint case against all of the parties you identified. Now all of that work is ruined.

Before the intelligence agencies can react to your disclosure, the targets of their investigation disappear. They will likely be replaced by other agents quickly enough, using other modes of communication unknown to the US agencies. The FIS will alter their operation to account for the disclosure, but it will continue in some form. That is the problem with irresponsible disclosure.

To apply the situation to the digital security world, make the following changes. Substitute "command and control server" for "dead drop." Substitute "tools, exploits, and other digital artifacts" for "messages." When the adversary learns of the disclosure, they move to other C2 infrastructure and develop or adopt new tools, tactics, and procedures (TTPs).

What should the hypothetical "security researcher" have done in this case? It's fairly obvious he should have approached the FBI himself. They would have realized that he had stumbled upon an active investigation, and counseled him to stay quiet for the sake of national security.

What should "security researchers" in the digital world do? This has been an active topic in a private mailing list in which I participate. We've been frustrated by what many of us consider to be "irresponsible disclosures." We agree that sharing threat intelligence is valuable, but we prefer to keep the information within channels among peers trusted to not alert the adversary to our knowledge of intruder TTPs. Granted, this is a difficult line to walk, as I Tweeted yesterday:

Responsible security intel teams walk a fine line between sharing for the benefit of peers and risking disclosure to the detriment of all.

The best I can say at this point is to keep this story in mind the next time you stumble upon a package in the woods. The adversary is watching.

https://taosecurity.blogspot.com/2012/09/understanding-responsible-disclosure-of.html

Commentary

I’m sure some readers digested this post and thought, “See! Any time you disclose intelligence, it’s bad for security!” Maybe others had the opposite reaction. My opinion, not having read this post since I wrote it, is that I am glad I included the section about approaching the professional intelligence community before deciding to take any action. If you indeed discovered something sensitive, the IC is likely to tell you and ask you not to disclose it. They could also go beyond “asking” and threaten legal action, which might dissuade some of you from approaching them in the first place. It comes down to your level of trust in government agencies, which ranges from zero for some readers to “absolute” for others.

Don't Envy the Offense

Sunday, December 28, 2014

Thanks to Leigh Honeywell I noticed a series of Tweets by Microsoft's John Lambert. Aside from affirming the importance of security team members over tools, I didn't have a strong reaction to the list -- until I read Tweets nine and ten. Nine said the following:

9. If you shame attack research, you misjudge its contribution. Offense and defense aren't peers. Defense is offense's child.

I don't have anything to say about "shame," but I strongly disagree with "Offense and defense aren't peers" and "Defense is offense's child." I've blogged about offense over the years, but my 2009 post Offense and Defense Inform Each Other is particularly relevant. John's statements are a condescending form of the phrase "offense informing defense." They're also a sign of "offense envy." John's last Tweet said the following:

10. Biggest problem with network defense is that defenders think in lists. Attackers think in graphs. As long as this is true, attackers win.

This Tweet definitely exhibits offense envy. It plays to the incorrect, yet too-common, idea that defenders are helpless drones, while the offense runs circles around them thanks to their advanced thinking. The reality is that plenty of defenders practice advanced thinking, while even nation-state level attackers work through checklists. At the high end of the offense spectrum, many of us have seen evidence of attackers running playbooks. When their checklist ends, the game may be up, or they may be able to ask their supervisor or mentor for assistance. On the other end of the spectrum, you can enjoy watching videos of lower-skilled intruders fumble around in Kippo honeypots. I started showing these videos during breaks in my classes.

I believe several factors produce offense envy. First, many of those who envy the offense have not had contact with advanced defenders. If you've never seen advanced defenders at work, and have only seen mediocre or nonexistent defense, you're likely to mythologize the powers of the offense. Second, many offense envy sufferers do not appreciate the restrictions placed on defenders, which result in advantages for the offense. I wrote about several of these in 2007 in Threat Advantages -- namely initiative, flexibility, and asymmetry of interest and knowledge. (Please read the original post if the last two prompt you to think I have offense envy!) Third, many of those who glorify offense hold false assumptions about how the black hats operate. This often manifests in platitudes like "the bad guys share -- why don't the good guys?" The reality is that good guys share a lot, and while some bad guys "share," they more often steal, back-stab, and inform on each other.

It's time for the offensive community to pay attention to people like Tony Sager, who ran the Vulnerability Analysis and Operations (VAO) team at NSA. Initially Tony managed independent blue and red teams. The red team always penetrated the target, then dumped a report and walked away. Tony changed the dynamic by telling the red team that their mission wasn't only to break into a victim's network. He brought the red and blue teams together under one manager (Tony). He worked with the red team to make them part of the defensive solution, not just a way to demonstrate that the offense can always compromise a target.

Network defenders have the toughest job in the technology world, and increasingly the business and societal worlds. We shouldn't glorify their opponents.

Note: Thanks to Chris Palmer for his Tweet -- "He [Lambert] reads like a defender with black hat drama envy. Kind of sad." -- which partially inspired this post.

https://taosecurity.blogspot.com/2014/12/dont-envy-offense.html

Commentary

The two Tweets by John Lambert really rankled me. I’m glad I wrote this post and I stand behind it six years later. Seriously, who could write “Offense and defense aren't peers. Defense is offense's child.” Ugh.

How to Answer the CEO and Board Attribution Question

Tuesday, January 27, 2015

Earlier today I Tweeted the following:

If you think CEOs & boards don't care about #attribution, you aren't talking to them or working w/them. The 1st question they ask is "who?"

I wrote this to convey the reality of incident response at the highest level of an organization. Those who run breached organizations want to know who is responsible for an intrusion. As I wrote in Five Reasons Attribution Matters, your perspective on attribution changes depending on your role in the organization.

The question in the title of this blog post is, however, how does one answer the board? It's likely that the board and CEO will be asking the CIO or CISO "who." What should be the response? My recommendation is to respond "how badly do you want to know?" Generally speaking, answering the attribution question is a function of the resources applied to the problem.

For example, I once performed an incident response for a Fortune 50 technology and retail company. They were so determined to identify the intruder that they hired former law enforcement officials, working as private investigators (PIs), to answer the question from the "physical world" perspective. In collaboration with local, federal, and foreign law enforcement officials, the PIs followed leads all the way to Romania. They performed surveillance on the suspect, interviewed his circle of associates, and eventually confirmed his involvement. Unfortunately for both the victim company and the perpetrator, the suspect disappeared. The suspect's family and friends believed that his "employer," an organized crime syndicate, decided the situation had gained too much publicity and that the suspect had become a liability.

The breached organization in my example decided to call in PIs and outside IR consultants once their annual loss rate exceeded $10 million. That was a CEO and board decision. The answer would affect how they conducted business, in a myriad of ways well outside that of IT or information security. Clearly not every intrusion is going to merit PIs, IR consultants, international legal cooperation, and so on. However, some cases do merit that attention, and attribution can be done.

To more fully answer the question, I strongly recommend reading “Attributing Cyber Attacks” by Dr Thomas Rid and Ben Buchanan. They discuss the merits of attribution and the importance of communication, as depicted in their Q model.

I know some CEOs and board members read this blog. Other readers work in different capacities. Both points of view are relevant, as mentioned in my previous blog post. I hope this post helps those in the technical world to understand the thought process of those in the nontechnical world.

https://taosecurity.blogspot.com/2015/01/how-to-answer-ceo-and-board-attribution.html

Commentary

I expect that I address attribution elsewhere in these volumes, so I will save my comments for a later post, after I have developed the arguments further.

My Federal Government Security Crash Program

Wednesday, June 10, 2015

In the wake of recent intrusions into government systems, multiple parties have been asking for my recommended courses of action. In 2007, following public reporting on the 2006 State Department breach, I blogged When FISMA Bites, Initial Thoughts on Digital Security Hearing, and What Should the Feds Do. These posts captured my thoughts on the government's response to the State Department intrusion. The situation then mirrors the current one well: outrage over an intrusion affecting government systems, China suspected as the culprit, and questions regarding why the government's approach to security does not seem to be working.

Following that breach, the State Department hired a new CISO who pioneered the "continuous monitoring" program, now called "Continuous Diagnostic Monitoring" (CDM). That CISO eventually left the State Department for DHS, and brought CDM to the rest of the Federal government. He is now retired from Federal service, but CDM remains. Years later we're reading about another breach at the State Department, plus the recent OPM intrusions. CDM is not working.

My last post, Continuous Diagnostic Monitoring Does Not Detect Hackers, explained that although CDM is a necessary part of a security program, it should not be the priority. CDM is at heart a "Find and Fix Flaws Faster" program. We should not prioritize closing and locking doors and windows while there are intruders in the house. Accordingly, I recommend a "Detect and Respond" strategy first and foremost. To implement that strategy, I recommend the following three-phase approach. All phases can run concurrently.

Phase 1: Compromise Assessment. Assuming the Federal government can muster the motivation, resources, and authority, the Office of Management and Budget (OMB), or another agency such as DHS, should implement a government-wide compromise assessment. The compromise assessment involves deploying teams across government networks to perform point-in-time "hunting" missions to find, and if possible, remove, intruders. I suspect the "remove" part will be more than these teams can handle, given the scope of what I expect they will find. Nevertheless, simply finding all of the intruders, or a decent sample, should inspire additional defensive activities, and give authorities a true "score of the game."

Phase 2: Improve Network Visibility. The following five points include actions to gain enhanced, enduring, network-centric visibility on Federal networks. While network-centric approaches are not a panacea, they represent one of the best balances between cost, effectiveness, and minimized disruption to business operations.

1. Accelerate the deployment of Einstein 3A, to instrument all Federal network gateways. Einstein is not the platform to solve the Federal government's network visibility problem, but given the current situation, some visibility is better than no visibility. If the inline, "intrusion prevention system" (IPS) nature of Einstein 3A is being used as an excuse for slowly deploying the platform, then the IPS capability should be disabled and the "intrusion detection system" (IDS) mode should be the default. Waiting until the end of 2016 is not acceptable. Equivalent technology should have been deployed in the late 1990s.

2. Ensure DHS and US-CERT have the authority to provide centralized monitoring of all deployed Einstein sensors. I imagine bureaucratic turf battles may have slowed Einstein deployment. "Who can see the data" is probably foremost among agency worries. DHS and US-CERT should be the home for centralized analysis of Einstein data. Monitored agencies should also be given access to the data, and DHS, US-CERT, and agencies should begin a dialogue on who should have ultimate responsibility for acting on Einstein discoveries.

3. Ensure DHS and US-CERT are appropriately staffed to operate and utilize Einstein. Collected security data is of marginal value if no one is able to analyze, escalate, and respond to the data. DHS and US-CERT should set expectations for the amount of time that should elapse from the time of collection to the time of analysis, and staff the IR team to meet those requirements.

4. Conduct hunting operations to identify and remove threat actors already present in Federal networks. Now we arrive at the heart of the counter-intrusion operation. The purpose of improving network visibility with Einstein (for lack of an alternative at the moment) is to find intruders and eliminate them. This operation should be conducted in a coordinated manner, not in a whack-a-mole fashion that facilitates adversary persistence. This should be coordinated with the "hunt" mission in Phase 1.

5. Collect metrics on the nature of the counter-intrusion campaign and devise follow-on actions based on lessons learned. This operation will teach Federal network owners lessons about adversary campaigns and the unfortunate realities of the state of their enterprise. They must learn how to improve the speed, accuracy, and effectiveness of their defensive campaign, and how to prioritize countermeasures that have the greatest impact on the opponent. I expect they would begin considering additional detection and response technologies and processes, such as enterprise log management, host-based sweeping, modern inspection platforms with virtual execution and detonation chambers, and related approaches.

Phase 3: Continuous Diagnostic Monitoring, and Related Ongoing Efforts. You may be surprised to see that I am not calling for an end to CDM. Rather, CDM should not be the focus of Federal security measures. It is important to improve Federal security through CDM practices, such that it becomes more difficult for adversaries to gain access to government computers. I am also a fan of the Trusted Internet Connection program, whereby the government is consolidating the number of gateways to the Internet.
Note: I recommend anyone interested in details on this matter see my latest book, The Practice of Network Security Monitoring, especially chapter 9. In that chapter I describe how to run a network security monitoring operation, based on my experiences since the late 1990s.

https://taosecurity.blogspot.com/2015/06/my-federal-government-security-crash.html

Commentary

The focus of the advice, if not the technological specifics (e.g., Einstein, etc.), is what matters in this post. Note the focus on compromise assessment and hunting to find intruders already in the environment.

Notes on Self-Publishing a Book

Monday, December 31, 2018

In this post I would like to share a few thoughts on self-publishing a book, in case anyone is considering that option. As I mentioned in my post on burnout, one of my goals was to publish a book on a subject other than cyber security. A friend from my Krav Maga school, Anna Wonsley, learned that I had published several books, and asked if we might collaborate on a book about stretching. The timing was right, so I agreed.

I published my first book with Pearson and Addison-Wesley in 2004, and my last with No Starch in 2013. 14 years is an eternity in the publishing world, and even in the last 5 years the economics and structure of book publishing have changed quite a bit. To better understand the changes, I had dinner with one of the finest technical authors around, Michael W. Lucas. We met prior to my interest in this book, because I had wondered about publishing books on my own. MWL started in traditional publishing like me, but has since become a full-time author and independent publisher. He explained the pros and cons of going it alone, which I carefully considered.

By the end of 2017, Anna and I were ready to begin work on the book. I believe our first "commits" occurred in December 2017. For this stretching book project, I knew my strengths included organization, project management, writing to express another person's message, editing, and access to a skilled lead photographer. I learned that my co-author's strengths included subject matter expertise, a willingness to be photographed for the book's many pictures, and friends who would also be willing to be photographed.

None of us was very familiar with the process of transforming a raw manuscript and photos into a finished product. When I had published with Pearson and No Starch, they took care of that process, as well as copyediting. Beyond turning manuscript and photos into a book, I also had to identify a publication platform.

Early on we decided to self-publish using one of the many newer companies offering that service. We wanted a company that could get our book into Amazon, and possibly physical book stores as well. We did not want to try working with a traditional publisher, as we felt that we could manage most aspects of the publishing process ourselves, and augment with specialized help where needed. After a lot of research we chose Blurb. One of the most attractive aspects of Blurb was their expert ecosystem. We decided that we would hire one of these experts to handle the interior layout process. We contacted Jennifer Linney, who happened to be local and had experience publishing books to Amazon. We met in person, discussed the project, and agreed to move forward together.

I designed the structure of the book. As a former Air Force officer, I was comfortable with the "rule of threes," and brought some recent writing experience from my abandoned PhD thesis. I designed the book to have an introduction, the main content, and a conclusion. Within the main content, the book featured an introduction and physical assessment, three main sections, and a conclusion. The three main sections consisted of a fundamental stretching routine, an advanced stretching routine, and a performance enhancement section -- something with Indian clubs, or kettlebells, or another supplement to stretching.

Anna designed all of the stretching routines and provided the vast majority of the content. She decided to focus on three physical problem areas -- tight hips, shoulders/back, and hamstrings. We encouraged the reader to "reach three goals" -- open your hips, expand your shoulders, and touch your toes.
Anna designed exercises that worked in a progression through the body, incorporating her expertise as a certified trainer and professional martial arts instructor. Initially we tried a process whereby she would write section drafts, and I would edit them, all using Google Docs. This did not work as well as we had hoped, and we spent a lot of time stalled in virtual collaboration. By the spring of 2018 we decided to try meeting in person on a regular basis. Anna would explain her desired content for a section, and we would take draft photographs using iPhones to serve as placeholders and to test the feasibility of real content. We made a lot more progress using these methods, although we stalled again mid-year due to schedule conflicts.

By October our text was ready enough to try taking book-ready photographs. We bought photography lights from Amazon and used my renovated basement game room as a studio. We took pictures over three sessions, with Anna and her friend Josh as subjects. I spent several days editing the photos to prepare for publication, then handed the bundled manuscript and photographs to Jennifer for a light copy-edit and layout during November. Our goal was to have the book published before the end of the year, and we met that goal.

We decided to offer two versions. The first is a "collector's edition" featuring all color photographs, available exclusively via Blurb as Reach Your Goal: Collector's Edition. The second will be available at Amazon in January, and will feature black and white photographs.

While we were able to set the price of the book directly via Blurb, we could basically only suggest a price to Ingram and hence to Amazon. Ingram is the distributor that feeds Amazon and physical book stores. I am curious to see how the book will appear in those retail locations, and how much it will cost readers. We tried to price it competitively with older stretching books of similar size. (Ours is 176 pages with over 200 photographs.) Without revealing too much of the economic structure, I can say that it's much cheaper to sell directly from Blurb. Their cost structure allows us to price the full color edition competitively.
However, one of our goals was to provide our book through Amazon, and to keep the price reasonable we had to sell the black and white edition outside of Blurb.

Overall I am very pleased with the writing process, and exceptionally happy with the book itself. The color edition is gorgeous and the black and white version is awesome too. The only change I would have made to the writing process would have been to start the in-person collaboration from the beginning. Working together in person accelerated the transfer of ideas to paper and played to our individual strengths of Anna as subject matter expert and me as a writer.

In general, I would not recommend self-publishing if you are not a strong writer. If writing is not your forte, then I highly suggest you work with a traditional publisher, or contract with an editor. I have seen too many self-published books that read terribly. This usually happens when the author is a subject matter expert, but has trouble expressing ideas in written form.

The bottom line is that it's never been easier to make your dream of writing a book come true. There are options for everyone, and you can leverage them to create wonderful products that scale with demand and can really help your audience reach their goals! If you want to start the new year with better flexibility and fitness, consider taking a look at our book on Blurb! When the Amazon edition is available I will update this post with a link.

https://taosecurity.blogspot.com/2018/12/notes-on-self-publishing-book.html

Commentary

I remain very proud of our book Reach Your Goal. It was completely different from my other projects, and self-publishing it was a wonderful growth experience.

Managing Burnout

Friday, December 21, 2018

This is not strictly an information security post, but the topic likely affects a decent proportion of my readership. Within the last few years I experienced a profound professional "burnout." I've privately mentioned this to colleagues in the industry, and heard similar stories or requests for advice on how to handle burnout. I want to share my story in the hopes that it helps others in the security scene, either by coping with existing burnout or preparing for a possible burnout.

How did burnout manifest for me? It began with FireEye's acquisition of Mandiant, almost exactly five years ago. 2013 was a big year for Mandiant, starting with the APT1 report in early 2013 and concluding with the acquisition in December. The prospect of becoming part of a Silicon Valley software company initially seemed exciting, because we would presumably have greater resources to battle intruders. Soon, however, I found myself at odds with FireEye's culture and managerial habits, and I wondered what I was doing inside such a different company.

(It's important to note that the appointment of Kevin Mandia as CEO in June 2016 began a cultural and managerial shift. I give Kevin and his lieutenants credit for helping transform the company since then. Kevin's appointment was too late for me, but I applaud the work he has done over the last few years.)

Starting in late 2014 and progressing in 2015, I became less interested in security. I was aggravated every time I saw the same old topics arise in social or public media. I did not see the point of continuing to debate issues which were never solved. I was demoralized and frustrated.

At this time I was also working on my PhD with King's College London. I had added this stress myself, but I felt like I could manage it. I had earned two major and two minor degrees in four years as an Air Force Academy cadet. Surely I could write a thesis!

Late in 2015 I realized that I needed to balance the very cerebral art of information security with a more physical activity. I took a Krav Maga class the first week of January 2016. It was invigorating and I began a new blog, Rejoining the Tao, that month. I began to consider options outside of information security.

In early 2016 my wife began considering ways to rejoin the W-2 workforce, after having stayed home with our kids for 12 years. We discussed the possibility of me leaving my W-2 job and taking a primary role with the kids. By mid-2016 she had a new job and I was open to departing FireEye.

By late 2016 I also realized that I was not cut out to be a PhD candidate. Although I had written several books, I did not have the right mindset or attitude to continue writing my thesis. After two years I quit my PhD program. This was the first time I had quit anything significant in my life, and it was the right decision for me. (The Churchill "never, never, never give up" speech is fine advice when defending your nation's existence, but it's stupid advice if you're not happy with the path you're following.)

In March 2017 I posted Bejtlich Moves On, where I said I was leaving FireEye. I would offer security consulting in the short term, and would open a Krav Maga school in the long-term. This was my break with the security community and I was happy to make it. I blogged on security only five more times in 2017. (Incidentally, one very public metric for my burnout experience can be seen in my blog output. In 2015 I posted 55 articles, but in 2016 I posted only 8, and slightly more, 12, in 2017. This is my 21st post of 2018.)

I basically took a year off from information security. I did some limited consulting, but Mrs B paid the bills, with some support from my book royalties and consulting. This break had a very positive effect on my mental health. I stayed aware of security developments through Twitter, but I refused to speak to reporters and did not entertain job offers.

During this period I decided that I did not want to open a Krav Maga school and quit my school's instructor development program. For the second time, I had quit something I had once considered very important. I started a new project, though -- writing a book that had nothing to do with information security. I will post about it shortly, as I am finalizing the cover with the layout team this weekend!

By the spring of 2018 I was able to consider returning to security. In May I blogged that I was joining Splunk, but that lasted only two months. I realized I had walked into another cultural and managerial mismatch. Near the end of that period, Seth Hall from Corelight contacted me, and by July 20th I was working there. We kept it quiet until September. I have been very happy at Corelight, finally finding an environment that matches my temperament, values, and interests.

My advice to those of you who have made it this far:

If you're feeling burnout now, you're not alone. It happens. We work in a stressful industry that will take everything that you can give, and then try to take more. It's healthy and beneficial to push back. If you can, take a break, even if it means only a partial break. Even if you can't take a break, consider integrating non-security activities into your lifestyle -- the more physical, the better. Security is a very cerebral activity, often performed in a sedentary manner. You have a body and taking care of it will make your mind happier too.

If you're not feeling burnout now, I recommend preparing for a possible burnout in the future. In addition to the advice in the previous paragraphs, take steps now to be able to completely step away from security for a defined period. Save a proportion of your income to pay your bills when you're not working in security.
I recommend at least a month, but up to six months if you can manage it.

This is good financial advice anyway, in the event you were to lose your job. This is not an emergency fund, though -- this is a planned reprieve from burnout. We are blessed in security to make above-average salaries, so I suggest saving for retirement, saving for layoffs, and saving for burnout.

Finally, it's ok to talk to other people about this. This will likely be a private conversation. I don't see too many people saying "I'm burned out!" on Twitter or in a blog post. I only felt comfortable writing this post months after I returned to regular security work.

I'm very interested in hearing what others have to say on this topic. Replying to my Twitter announcement for the blog post is probably the easiest step. I moderate the comments here and might not get to them in a timely manner.

https://taosecurity.blogspot.com/2018/12/managing-burnout.html

Commentary

I spoke to many people after this post went live. Burnout is a real problem in the information security world. I hope my thoughts are helpful.

COVID-19 Phishing Tests: WRONG

Thursday, March 12, 2020

Malware Jake Tweeted a poll last night which asked the following:

"I have an interesting ethical quandary. Is it ethically okay to use COVID-19 themed phishing emails for assessments and user awareness training right now? Please read the thread before responding and RT for visibility. 1/"

Ultimately he decided:

"My gut feeling is to not use COVID-19 themed emails in assessments/training, but to TELL users to expect them, though I understand even that might discourage consumption of legitimate information, endangering public health. 6/"

I responded by saying this was the right answer. Thankfully there were many people who agreed, despite the fact that voting itself was skewed towards the "yes" answer.

There were an uncomfortable number of responses to the Tweet that said there's nothing wrong with red teams phishing users with COVID-19 emails. For example:

"Do criminals abide by ethics? Nope. Neither should testing."

"Yes. If it's in scope for the badguys [sic], it's in scope for you."

"Attackers will use it. So I think it is fair game."

Those are the wrong answers. As a few others outlined well in their responses, the fact that a criminal or intruder employs a tactic does not mean that it's appropriate for an offensive security team to use it too.

I could imagine several COVID-19 phishing lures that could target school districts and probably cause high double-digit click-through rates. What's the point of that? For a "community" that supposedly considers fear, uncertainty, and doubt (FUD) to be anathema, why introduce FUD via a phishing test?

I've grown increasingly concerned over the past few years that there's a "cult of the offensive" that justifies its activities with the rationale that "intruders do it, so we should too." This is directly observable in the replies to Jake's Tweet. It's a thin veneer that covers bad behavior, outweighing the small benefit accrued to high-end, 1% security shops against the massive costs suffered by the vast majority of networked global organizations.

This is a selfish, insular mindset that is reinforced by the echo chamber of the so-called "infosec community." This "tribe" is detached from the concerns and ethics of the larger society. It tells itself that what it is doing is right, oblivious or unconcerned with the costs imposed on the organizations they are supposedly "protecting" with their backwards actions. We need people with feet in both worlds to tell this group that their approach is not welcome in the broader human community, because the costs it imposes vastly outweigh the benefits.

I've written here about ethics before, usually in connection with the only real value I saw in the CISSP -- its code of ethics. Reviewing the "code," as it appears now, shows the following:

"There are only four mandatory canons in the Code. By necessity, such high-level guidance is not intended to be a substitute for the ethical judgment of the professional.

Code of Ethics Preamble:

The safety and welfare of society and the common good, duty to our principals, and to each other, requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior. Therefore, strict adherence to this Code is a condition of certification.

Code of Ethics Canons:

Protect society, the common good, necessary public trust and confidence, and the infrastructure.
Act honorably, honestly, justly, responsibly, and legally.
Provide diligent and competent service to principals.
Advance and protect the profession."

This is almost worthless. The only actionable item in the "code" is the word "legally," implying that if a CISSP holder was convicted of a crime, he or she could lose their certification. Everything else is subject to interpretation.

Contrast that with the USAFA Code of Conduct: "We will not lie, steal, or cheat, nor tolerate among us anyone who does." While it still requires an Honor Board to determine if a cadet has lied, stolen, cheated, or tolerated, there's much less gray in this statement of the Academy's ethics. Is it perfect? No. Is it more actionable than the CISSP's version? Absolutely.

I don't have "solutions" to the ethical bankruptcy manifesting in some people practicing what they consider to be "information security." However, this post is a step towards creating red lines that those who are not already hardened in their ways can observe and integrate. Perhaps at some point we will have an actionable code of ethics that helps newcomers to the field understand how to properly act for the benefit of the human community.

https://taosecurity.blogspot.com/2020/03/covid-19-phishing-tests-wrong.html

Commentary

Shortly after I published this post, my friend Aaron Higbee from Cofense

posted to LinkedIn, asking readers to take a pledge “not to conduct COVID-19 phishing simulations.” I took the pledge and I was pleased to see others did too.

Reference: https://www.linkedin.com/posts/cofense_awarenessnotanxietynocovid19phishingtests-activity-6648924137714659329-hG63/

When You Should Blog and When You Should Tweet

Friday, March 27, 2020

I saw my like-minded, friend-that-I've-never-met Andrew Thompson Tweet a poll asking "What's your preferred Twitter style to consume other people's Twitter?" I was about to reply with the following Tweet:

"If I'm struggling to figure out how to capture a thought in just 1 Tweet, that's a sign that a blog post might be appropriate. I only use a thread, and no more than 2, and hardly ever 3 (good Lord), when I know I've got nothing more to say. "1/10," "1/n," etc. are not for me."

Then I realized I had something more to say, namely, other reasons blog posts are better than Tweets. For the briefest moment I considered adding a second Tweet, making, horror of horrors, a THREAD, and then I realized I would be breaking my own guidance. Here are three reasons to consider blogging over Tweeting.

1. If you find yourself trying to pack your thoughts into a 280 character limit, then you should write a blog post. You might have a good idea, and instead of expressing it properly, you're falling into the trap of letting the medium define the message, aka the PowerPoint trap. I learned this from Edward Tufte: let the message define the medium, not the other way around.

2. Twitter threads lose the elegance and readability of the English language as our ancestors created it, for our benefit. They gave us structures, like sentences, lists, indentation, paragraphs, chapters, and so on. What does Twitter provide? 280 character chunks. Sure, you can apply feeble "1/n" annotations, but you've lost all that structure and readability, and for what?

3. In the event you're writing a Tweet thread that's really worth reading, writing it via Twitter virtually guarantees that it's lost to history. Twitter is an abomination for citation, search, and future reference. The hierarchy of delivering content for current researchers and future generations is the following, from lowest to highest:

● "Transient," "bite-sized" social media, e.g., Twitter, Instagram, Facebook, etc. posts
● Blog posts
● Whitepapers
● Academic papers in "electronic" journals
● Electronic (e.g., Kindle) only formatted books
● Print books (that may be stand-alone works, or which may contain journal articles)

Print books are the apex communication medium because we have such references going back hundreds of years. Hundreds of years from now, I doubt the first five formats above will be easily accessible, or accessible at all. However, in a library or personal collection somewhere, printed books will endure.

The bottom line is that if you think what you're writing is important enough to start a "1/n" Tweet thread, you've already demonstrated that Twitter is the wrong medium.

The natural follow-on might be: what is Twitter good for? Here are my suggestions:

● Announcing a link to another, in-depth news resource, like a news article, blog post, whitepaper, etc.
● Offering a comment on an in-depth news resource, or replying to another person's announcement.
● Asking a poll question.
● Asking for help on a topic.
● Engaging in a short exchange with another user. Long exchanges on hot topics typically devolve into a confusing mess of messages and replies, the delivery of which Twitter has never really managed to figure out.

I understand the seduction of Twitter. I use it every day. However, when it really matters, blogging is preferable, followed by the other media I listed in point 3 above.

Update 0930 ET 27 Mar 2020: I forgot to mention that in extenuating circumstances, like live-Tweeting an emergency, Twitter threads on significant matters are fine because the urgency of the situation and the convenience or plain logistical limitations of the situation make Twitter indispensable. I'm less thrilled by live-Tweeting in conferences, although I'm guilty of it in the past. I'd prefer a thoughtful wrap-up post following the event, which I did a lot before Twitter became popular.

https://taosecurity.blogspot.com/2020/03/when-you-should-blog-and-when-you.html

Commentary

Seeing as I just wrote this a few weeks ago, I have nothing to add!

Conclusion

I realize that the material in this chapter, and indeed the whole volume or volumes, is based on one person’s opinion. I hope, however, that the advice is useful. I do not expect it to be universally applicable, but perhaps at least one of the posts resonated with a problem you might be facing. Sometimes it’s enough to know that you are not alone with whatever challenge you might face. Security is a “wicked problem” but that does not mean we cannot handle it.

Afterword

Now that you’ve read this volume, you might ask yourself: should I blog, or should I use Twitter and so-called “Tweet threads”?

I am not a fan of “Tweet threads.” Readers could probably count on their fingers, and maybe their toes, the number of times I have written a chain of Tweets. I have a simple rule: if you can’t get your point across in a Tweet, or heaven forbid, in two, then you should be writing a blog post. I understand that Twitter is seductively easy, and its reach can be far greater than a blog, but long-form writing via a blog is superior in so many ways to Tweeting.

First, if you’re writing something extensive, there’s a chance someone else will want to cite it. Citation is a weakness in the information security world, but responsible analysts and researchers will want to provide quality sources. Citing a Tweet, or a stream of Tweets, is bush-league. Citing a blog post is still not as respectable as referencing a paper or book, but it is far preferable to a link to a Tweet.

Second, Twitter is a sound platform for projecting ideas, but that projection tends to be temporally limited. What you say has an impact at the time of production and consumption, but it’s not easy to work with the material at a later time. “Quality Tweets” (shudder) tend to be interspersed with commentary on everything under the sun, making it difficult for a researcher, or even the original author, to locate material of interest. Google can help find Tweets, but the fact that they are a 140/280 character entry in a vast database makes them more opaque to research. It’s far better to compile thoughts in a long form, even if it’s only a few paragraphs in a blog post.

Third, delivering content via Twitter is likely to fall prey to the tendency for the medium to define the message, rather than having the message define the medium. This was a key lesson taught to me by Dr. Edward Tufte, in his amazing one-day course on Presenting Data and Information. It’s a key problem with PowerPoint as well. My take on the first time I took his course appears in this volume, and while Tufte does not address Twitter, and in fact uses it, I believe he would concur with my warning.

My advice to anyone who is new to a field and who has considered blogging: do it. Even if you keep the blog private, you are creating content that is valuable to you. I have several public blogs and several private blogs. The public ones besides TaoSecurity are all related to the martial arts: Martial History Team (martialhistoryteam.blogspot.com), Rejoining the Tao (rejoiningthetao.blogspot.com), and Sourcing Bruce Lee (sourcingbrucelee.blogspot.com). The private ones are basically note-taking sites, usually for technical matters. Many years ago I might have posted the technical material publicly, but these days I don’t care to spend the time to clean up the content to make it suitable for public consumption.

The best time to blog really is when no one is watching. The higher your blog’s profile, the greater the hassle. So, just as with martial arts, I believe one of the best times to be learning is when you are a “white belt” and no one has expectations of you. You can make all the mistakes you want and the rest of the world accepts it as being part of your advertised status. Unfortunately, as one develops a reputation and some degree of expertise, the level of tolerance he or she encounters tends to decline.

This may sound discouraging, and in some ways it is. However, if you believe in your content, and remember that you’re writing for yourself, it becomes easier to ignore the naysayers. You can always stay private or disable comments!

If you’ve enjoyed this volume, be sure to check out the sequels, which cover more posts from TaoSecurity Blog. Thank you for taking this journey with me.

"So I too will here end my story. If it is well told and to the point, that is what I myself desired; if it is poorly done and mediocre, that was the best I could do."

-- 2 Maccabees 15:37-38, The Bible, Revised Standard Version w/ Apocrypha

Books By This Author

The Tao of Network Security Monitoring: Beyond Intrusion Detection
Extrusion Detection: Security Monitoring for Internal Intrusions
Real Digital Forensics: Computer Security and Incident Response
The Practice of Network Security Monitoring: Understanding Incident Detection and Response
Reach Your Goal: Stretching & Mobility Exercises for Fitness, Personal Training, & Martial Arts

About The Author

Richard Bejtlich

Richard Bejtlich is Principal Security Strategist at Corelight. He was previously Chief Security Strategist at FireEye, and Mandiant's Chief Security Officer when FireEye acquired Mandiant in 2013. At General Electric, as Director of Incident Response, he built and led the 40-member GE Computer Incident Response Team (GE-CIRT). Richard began his digital security career as a military intelligence officer in 1997 at the Air Force Computer Emergency Response Team (AFCERT), Air Force Information Warfare Center (AFIWC), and Air Intelligence Agency (AIA). Richard is a graduate of Harvard University and the United States Air Force Academy. His fourth book is "The Practice of Network Security Monitoring" (nostarch.com/nsm). He also writes for his blog (taosecurity.blogspot.com) and Twitter (@taosecurity).

Richard took his first martial arts classes in judo, karate, boxing, and combatives as a cadet at the US Air Force Academy in 1990, and continued practicing several styles until 2001. He resumed training in 2016 by practicing within the Krav Maga Global system, earning Graduate 1 rank. Richard now studies Brazilian Jiu-Jitsu with Team Pedro Sauer and is the founder of Martial History Team. Richard lives with his wife Amy, their two children, two cats, and other wildlife in northern Virginia.

Version History

04 May 2020: V 1.0
15 May 2020: V 1.1; minor typo fixes