Data-Driven Security: Analysis, Visualization, and Dashboards
ISBN 9781118793725, 1118793722


Table of Contents
Cover
Title Page
Copyright
Contents
Introduction
Overview of the Book and Technologies
How This Book Is Organized
Who Should Read This Book
Tools You Will Need
What’s on the Website
The Journey Begins!
Chapter 1 The Journey to Data-Driven Security
A Brief History of Learning from Data
Nineteenth Century Data Analysis
Twentieth Century Data Analysis
Twenty-First Century Data Analysis
Gathering Data Analysis Skills
Domain Expertise
Programming Skills
Data Management
Statistics
Visualization (a.k.a. Communication)
Combining the Skills
Centering on a Question
Creating a Good Research Question
Exploratory Data Analysis
Summary
Recommended Reading
Chapter 2 Building Your Analytics Toolbox: A Primer on Using R and Python for Security Analysis
Why Python? Why R? And Why Both?
Why Python?
Why R?
Why Both?
Jumpstarting Your Python Analytics with Canopy
Understanding the Python Data Analysis and Visualization Ecosystem
Setting Up Your R Environment
Introducing Data Frames
Organizing Analyses
Summary
Recommended Reading
Chapter 3 Learning the “Hello World” of Security Data Analysis
Solving a Problem
Getting Data
Reading In Data
Exploring Data
Homing In on a Question
Summary
Recommended Reading
Chapter 4 Performing Exploratory Security Data Analysis
Dissecting the IP Address
Representing IP Addresses
Segmenting and Grouping IP Addresses
Locating IP Addresses
Augmenting IP Address Data
Association/Correlation, Causation, and Security Operations Center Analysts Gone Rogue
Mapping Outside the Continents
Visualizing the ZeuS Botnet
Visualizing Your Firewall Data
Summary
Recommended Reading
Chapter 5 From Maps to Regression
Simplifying Maps
How Many ZeroAccess Infections per Country?
Changing the Scope of Your Data
The Potwin Effect
Is This Weird?
Counting in Counties
Moving Down to Counties
Introducing Linear Regression
Understanding Common Pitfalls in Regression Analysis
Regression on ZeroAccess Infections
Summary
Recommended Reading
Chapter 6 Visualizing Security Data
Why Visualize?
Unraveling Visual Perception
Understanding the Components of Visual Communications
Avoiding the Third Dimension
Using Color
Putting It All Together
Communicating Distributions
Visualizing Time Series
Experiment on Your Own
Turning Your Data into a Movie Star
Summary
Recommended Reading
Chapter 7 Learning from Security Breaches
Setting Up the Research
Considerations in a Data Collection Framework
Aiming for Objective Answers
Limiting Possible Answers
Allowing “Other,” and “Unknown” Options
Avoiding Conflation and Merging the Minutiae
An Introduction to VERIS
Incident Tracking
Threat Actor
Threat Actions
Information Assets
Attributes
Discovery/Response
Impact
Victim
Indicators
Extending VERIS with Plus
Seeing VERIS in Action
Working with VCDB Data
Getting the Most Out of VERIS Data
Summary
Recommended Reading
Chapter 8 Breaking Up with Your Relational Database
Realizing the Container Has Constraints
Constrained by Schema
Constrained by Storage
Constrained by RAM
Constrained by Data
Exploring Alternative Data Stores
BerkeleyDB
Redis
Hive
MongoDB
Special Purpose Databases
Summary
Recommended Reading
Chapter 9 Demystifying Machine Learning
Detecting Malware
Developing a Machine Learning Algorithm
Validating the Algorithm
Implementing the Algorithm
Benefiting from Machine Learning
Answering Questions with Machine Learning
Measuring Good Performance
Selecting Features
Validating Your Model
Specific Learning Methods
Supervised
Unsupervised
Hands On: Clustering Breach Data
Multidimensional Scaling on Victim Industries
Hierarchical Clustering on Victim Industries
Summary
Recommended Reading
Chapter 10 Designing Effective Security Dashboards
What Is a Dashboard, Anyway?
A Dashboard Is Not an Automobile
A Dashboard Is Not a Report
A Dashboard Is Not a Moving Van
A Dashboard Is Not an Art Show
Communicating and Managing “Security” through Dashboards
Lending a Hand to Handlers
Raising Dashboard Awareness
The Devil (and Incident Response Delays) Is in the Details
Projecting “Security”
Summary
Recommended Reading
Chapter 11 Building Interactive Security Visualizations
Moving from Static to Interactive
Interaction for Augmentation
Interaction for Exploration
Interaction for Illumination
Developing Interactive Visualizations
Building Interactive Dashboards with Tableau
Building Browser-Based Visualizations with D3
Summary
Recommended Reading
Chapter 12 Moving Toward Data-Driven Security
Moving Yourself toward Data-Driven Security
The Hacker
The Statistician
The Security Domain Expert
The Danger Zone
Moving Your Organization toward Data-Driven Security
Ask Questions That Have Objective Answers
Find and Collect Relevant Data
Learn through Iteration
Find Statistics
Summary
Recommended Reading
Appendix A Resources and Tools
Appendix B References
Index


Data-Driven Security: Analysis, Visualization and Dashboards

Published by John Wiley & Sons, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
www.wiley.com

Copyright © 2014 by John Wiley & Sons, Inc., Indianapolis, Indiana
Published by John Wiley & Sons, Inc., Indianapolis, Indiana
Published simultaneously in Canada

ISBN: 978-1-118-79372-5
ISBN: 978-1-118-79366-4 (ebk)
ISBN: 978-1-118-79382-4 (ebk)

Manufactured in the United States of America

10 9 8 7 6 5 4 3 2 1

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that Internet websites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services please contact our Customer Care Department within the United States at (877) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Control Number: 2013954100

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.


About the Authors

Jay Jacobs has over 15 years of experience within IT and information security with a focus on cryptography, risk, and data analysis. As a Senior Data Analyst on the Verizon RISK team, he is a co-author of their annual Data Breach Investigations Report and spends much of his time analyzing and visualizing security-related data. Jay is a co-founder of the Society of Information Risk Analysts and currently serves on the organization's board of directors. He is an active blogger, a frequent speaker, a co-host on the Risk Science podcast, and was co-chair of the 2014 Metricon security metrics/analytics conference. Jay can be found on Twitter as @jayjacobs. He holds a bachelor's degree in technology and management from Concordia University in Saint Paul, Minnesota, and a graduate certificate in Applied Statistics from Penn State.

Bob Rudis has over 20 years of experience using data to help defend global Fortune 100 companies. As Director of Enterprise Information Security & IT Risk Management at Liberty Mutual, he oversees their partnership with the regional, multi-sector Advanced Cyber Security Center on large-scale security analytics initiatives. Bob is a serial tweeter (@hrbrmstr), avid blogger (rud.is), author, speaker, and regular contributor to the open source community (github.com/hrbrmstr). He currently serves on the board of directors for the Society of Information Risk Analysts (SIRA), is on the editorial board of the SANS Securing The Human program, and was co-chair of the 2014 Metricon security metrics/analytics conference. He holds a bachelor's degree in computer science from the University of Scranton.

About the Technical Editor

Russell Thomas is a Security Data Scientist at Zions Bancorporation and a PhD candidate in Computational Social Science at George Mason University. He has over 30 years of computer industry experience in technical, management, and consulting roles. Mr. Thomas is a long-time community member of Securitymetrics.org and a founding member of the Society of Information Risk Analysts (SIRA). He blogs at http://exploringpossibilityspace.blogspot.com/ and is @MrMeritology on Twitter.


Credits

Executive Editor Carol Long

Business Manager Amy Knies

Senior Project Editor Kevin Kent

Vice President and Executive Group Publisher Richard Swadley

Technical Editor Russell Thomas

Associate Publisher Jim Minatel

Senior Production Editor Kathleen Wisor

Project Coordinator, Cover Katie Crocker

Copy Editor Kezia Endsley

Proofreader Nancy Carrasco

Editorial Manager Mary Beth Wakefield

Indexer Johnna VanHoose Dinse

Freelancer Editorial Manager Rosemarie Graham

Cover Image Bob Rudis

Associate Director of Marketing David Mayhew

Cover Designer Ryan Sneed

Marketing Manager Ashley Zurcher


Acknowledgments

While our names are on the cover, this book represents a good deal of work by a good number of (good) people. A huge thank you goes out to Russell Thomas, our technical editor. His meticulous attention to detail has not only made this book better, but it's also saved us from a few embarrassing mistakes. Thank you to those of you who have taken the time to prepare and share data for this project: Symantec, AlienVault, Stephen Patton, and David Severski. Thank you to Wade Baker for his contagious passion, Chris Porter for his contacts, and the RISK team at Verizon for their work and contribution of VERIS to the community. Thank you to the good folks at Wiley—especially Carol Long, Kevin Kent, and Kezia Endsley—who helped shape this work and kept us on track and motivated. Thank you also to the many people who have contributed by responding to our emails, talking over ideas, and providing your feedback. Finally, thanks to the many vibrant and active communities around R, Python, data visualizations, and information security; hopefully, we can continue to blur the lines between those communities.

Jay Jacobs

First and foremost, I would like to thank my parents. My father gave me his passion for learning and the confidence to try everything. My mother gave me her unwavering support, even when I was busy discovering which paths not to take. Thank you for providing a good environment to grow and learn. I would also like to thank my wife, Ally. She is my best friend, loudest critic, and biggest fan. This work would not be possible without her love, support, and encouragement. And finally, I wish to thank my children for their patience: I'm ready for that game now.

Bob Rudis

This book would not have been possible without the love, support, and nigh-unending patience through many a lost weekend of my truly amazing wife, Mary, and our three still-at-home children, Victoria, Jarrod, and Ian. Thank you to Alexandre Pinto, Thomas Nudd, and Bill Pelletier for well-timed (though you probably didn't know it) messages of encouragement and inspiration. A special thank you to the open source community and reproducible research and open data movements who are behind most of the tools and practices in this text. Thank you, as well, to Josh Corman who came up with the spiffy title for the tome. And, a final thank you—in recipe form—to those that requested one with the book:

Pan Fried Gnocchi with Basil Pesto

● 2 C fresh Marseille basil
● 1/2 C fresh grated Romano cheese
● 1/2 C + 2 tbsp extra virgin olive oil
● 1/4 C pine nuts
● 4 garlic scapes
● Himalayan sea salt; cracked pepper
● 1 lb. gnocchi (fresh or pre-made/vacuum sealed; gnocchi should be slightly dried if fresh)

Pulse (add in order): nuts, scapes, basil, cheese. Stream in 1/2 cup of olive oil, pulsing and scraping as needed until creamy, adding salt and pepper to taste. Set aside. Heat a heavy-bottomed pan over medium-high heat; add remaining olive oil. When hot, add gnocchi, but don’t crowd the pan or go above one layer. Let brown and crisp on one side for 3–4 minutes then flip and do the same on the other side for 2–3 minutes. Remove gnocchi from pan, toss with pesto, drizzle with saba and serve. Makes enough for 3–4 people.



Introduction

“It’s a dangerous business, Frodo, going out your door. You step onto the road, and if you don’t keep your feet, there’s no knowing where you might be swept off to.” Bilbo Baggins, The Fellowship of the Ring


In recent years, cybersecurity has taken center stage in the personal and professional lives of the majority of the global population. Data breaches are a daily occurrence, and intelligent adversaries target consumers, corporations, and governments with practically no fear of being detected or facing consequences for their actions. This is all occurring while the systems, networks, and applications that comprise the backbones of commerce and critical infrastructure are growing ever more complex, interconnected, and unwieldy. Defenses built solely on the elements of faith-based security—unaided intuition and "best" practices—are no longer sufficient to protect us. The era of the security shaman is rapidly fading, and it's time to adopt the proven tools and techniques being used in other disciplines to take an evolutionary step into Data-Driven Security.

Overview of the Book and Technologies

Data-Driven Security: Analysis, Visualization and Dashboards has been designed to take you on a journey into the world of security data science. The start of the journey looks a bit like the word cloud shown in Figure 1, which was created from the text in the chapters of this book. You have a great deal of information available to you, and may be able to pick out a signal or two within the somewhat hazy noise on your own. However, it's like looking for a needle in a haystack without a magnet.
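As an aside, a word cloud like the one in Figure 1 can be produced in a few lines of R with the tm and wordcloud packages. The sketch below is illustrative only; the input file name and cleanup steps are assumptions for this example, not the authors' actual code.

# A minimal sketch of building a word cloud from a plain-text file.
# Assumes the book text has been saved to "chapters.txt" (hypothetical file name).
library(tm)         # corpus handling and text cleanup
library(wordcloud)  # the wordcloud() plotting function

text   <- readLines("chapters.txt", warn = FALSE)
corpus <- Corpus(VectorSource(text))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

# count term frequencies and plot the most common words
tdm  <- TermDocumentMatrix(corpus)
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
wordcloud(names(freq), freq, max.words = 150, random.order = FALSE)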

Figure 1: Word cloud of the text in this book's chapters

You'll have much more success identifying what matters (see Figure 2) if you apply the right tools in the most appropriate way possible.


Figure 2

This book focuses on Python and R as the foundational data analysis tools, but also introduces the design and creation of modern static and interactive visualizations with HTML5, CSS, and JavaScript. It also provides background on and security use cases for modern NoSQL databases.

How This Book Is Organized

Rather than have you gorge at an all-you-can-eat buffet, the chapters are more like tapas—each with their own distinct flavor profiles and textures. Like the word tapas itself suggests, each chapter covers a different foundational topic within security data science and provides plenty of pointers for further study.

Chapter 1 lays the foundation for the journey and provides examples of how other disciplines have evolved into data-driven practices. It also provides an overview of the skills a security data scientist needs.

Chapters 2, 3, and 4 dive right into the tools, technologies, and basic techniques that should be part of every security data scientist's toolbox. You'll work with AlienVault's IP Reputation database (one of the most thorough sources of malicious nodes publicly available) and take a macro look at the ZeuS and ZeroAccess botnets. We introduce the analytical side of Python in Chapters 2 and 3. Then we thrust you into the world of statistical analysis software with a major focus on the R language in the remainder of the book. Unlike traditional introductory texts in R (or statistics in general), we use security data throughout the book to help make the concepts as real and practical as possible for the information security professional.

Chapter 5 introduces some techniques for creating maps and introduces some core statistical concepts, along with a lesson or two about extraterrestrial visitors.


Chapter 6 delves into the biological and cognitive science foundations of visual communication (data visualization) and even shows you how to animate your security data. This lays a foundation for learning how to analyze and visualize security breaches in Chapter 7, where you'll also have an opportunity to work with real incident data.

Chapter 8 covers modern database concepts, with new tricks for traditional database deployments and a range of NoSQL solutions discussed as new tools. You'll also get tips on how to answer the question, "Have we seen this IP address on our network?"

Chapter 9 introduces you to the exciting and relatively new world of machine learning. You'll learn about the core concepts, explore a handful of machine-learning techniques, and develop a new appreciation for how algorithms can pick up patterns that your intuition might never recognize.

Chapters 10 and 11 give you practical advice and techniques for building effective visualizations that will both communicate and (hopefully) impress your consumers. You'll use everything from Microsoft Excel to state-of-the-art tools and libraries, and be able to translate what you've learned outside of security. Visualization concepts are made even more tangible through "makeovers" of security dashboards that many of you may be familiar with.

Finally, we show you how to apply what you've learned at both a personal and organizational level in Chapter 12.

Who Should Read This Book

We wrote this book because we both thoroughly enjoy working with data and wholeheartedly believe that we can make significant progress in improving cybersecurity if we take the time to understand how to ask the right questions, perform accurate and reproducible analyses on data, and communicate the results in the most compelling ways possible. Readers will get the most out of this book if they come to it with some security domain experience and the ability to do basic coding or scripting.

If you are already familiar with Python, you can skip the introduction to it in Chapter 2 and can skim through much of Chapter 3. We level the field a bit by introducing and focusing on R, but you would do well to make your way through all the examples and listings that use R throughout the book, as it is an excellent language for modern data science. If you are new to programming, Chapters 2, 3, and 4 will provide enough of an immersive experience to help you see if it's right for you.

We place emphasis on statistics and machine learning across many chapters and do not recommend skipping any of that content. However, you can hold off on Chapter 9 (which discusses machine learning) until the very end, as it will not detract significantly from the flow of the book. If you know databases well, you need only review the use cases in Chapter 8 to ensure you're thinking about all the ways you can use modern and specialized databases in security use cases.

Unlike many books that discuss dashboards, the only requirements for Chapter 10 are Microsoft Excel or OpenOffice Calc, as we made no assumptions about the types of tools and restrictions you have to work with in your organization. You can also save Chapter 11 for future reading if you have no desire to build interactive visualizations.

In short, while we are writing primarily to Information Technology and Information Security professionals, students, consultants, and anyone looking for more about the how-to of analyzing data and making it understandable for protecting networks will find what they need in this book.


Tools You Will Need

Everything you need to follow along with the exercises is freely available (a quick R setup check follows this list):

● The R project (http://www.r-project.org)—Most of the examples are written in R, and with the wide range of community-developed packages like ggplot2 (http://ggplot2.org), almost anything is possible.
● RStudio (http://www.rstudio.com/)—It will be much easier to get to know R and run the examples if you use the RStudio IDE.
● Python (http://www.python.org)—A few of the examples leverage Python, and add-on packages like pandas (http://pandas.pydata.org) make it a very powerful analytics platform.
● Sublime Text (http://www.sublimetext.com/)—This, or another robust text editor, will come in very handy, especially when working with the HTML/CSS/JavaScript examples.
● D3.js (http://d3js.org/)—Grabbing a copy of D3 and giving the basics a quick read-through ahead of Chapter 11 will help you work through the examples in that chapter a bit faster.
● Git (http://git-scm.com/)—You'll be asked to use git to download data at various points in the book, so installing it now will save you some time later.
● MongoDB (http://www.mongodb.org/)—MongoDB is used in Chapter 8, so getting it set up early will make those examples less cumbersome.
● Redis (http://redis.io/)—This, too, is used in some examples in Chapter 8.
● Tableau Public (http://www.tableausoftware.com/)—If you intend to work with the survey data in Chapter 11, having a copy of Tableau Public will be useful.
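As a quick, optional check that the R portion of this toolchain is working, the short snippet below installs ggplot2 if it is missing, draws a throwaway plot, and prints session details. It is a minimal sketch assuming a default R installation, not a required step from the book.

# Sanity check for the R setup described in the tools list above.
if (!requireNamespace("ggplot2", quietly = TRUE)) {
  install.packages("ggplot2")   # install the plotting package if it is absent
}
library(ggplot2)

# a trivial plot just to confirm that graphics output works
qplot(x = 1:10, y = (1:10)^2, geom = "line")

# R version and package details, useful when troubleshooting the examples
sessionInfo()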

Additionally, all of the code, examples, and data used in this book are available through the companion website for this book (www.wiley.com/go/datadrivensecurity). We recommend using Linux or Mac OS, but all of the examples should work fine on modern flavors of Microsoft Windows as well.

What's on the Website

As mentioned earlier, you'll want to check out the companion website www.wiley.com/go/datadrivensecurity for the book, which has the full source code for all code listings, the data files used in the examples, and any supporting documents (such as Microsoft Excel files).

The Journey Begins!

You have everything you need to start down the path to Data-Driven Security. We hope your journey will be filled with new insights and discoveries, and we are confident you'll be able to improve your security posture if you successfully apply the principles you're about to learn.


Chapter 1 The Journey to Data-Driven Security

"It ain't so much the things we don't know that get us into trouble. It's the things we know that just ain't so."
Josh Billings, Humorist


This book isn't really about data analysis and visualization. Yes, almost every section is focused on those topics, but being able to perform good data analysis and produce informative visualizations is just a means to an end. You never (okay, rarely) analyze data for the sheer joy of analyzing data. You analyze data and create visualizations to gain new perspectives, to find relationships you didn't know existed, or to simply discover new information. In short, you do data analysis and visualizations to learn, and that is what this book is about. You want to learn how your information systems are functioning, or more importantly, how they are failing and what you can do to fix them.

The cyber world is just too large, has too many components, and has grown far too complex to simply rely on intuition. Only by augmenting and supporting your natural intuition with the science of data analysis will you be able to maintain and protect an ever-growing and increasingly complex infrastructure. We are not advocating replacing people with algorithms; we are advocating arming people with algorithms so that they can learn more and do a better job. The data contains information, and you can learn better with the information in the data than without it.

This book focuses on using real data—the types of data you have probably come across in your work. But rather than focus on huge discoveries in the data, this book focuses more on the process and less on the result. As a result of that decision, the use cases are intended to be exemplary and introductory rather than knock-your-socks-off cool. The goal here is to teach you new ways of looking at and learning from data. Therefore, the analysis is intended to be new ground in terms of technique, not necessarily in conclusion.

A Brief History of Learning from Data

One of the best ways of appreciating the power of statistical data analysis and visualization is to look back in history to a time when these methods were first put to use. The following cases provide a vivid picture of "before" versus "after," demonstrating the dramatic benefits of the then-new methods.

Nineteenth Century Data Analysis

Prior to the twentieth century, the use of data and statistics was still relatively undeveloped. Although great strides were made in the eighteenth century, much of the scientific research of the day used basic descriptive statistics as evidence for the validity of the hypothesis. The inability to draw clear conclusions from noisy data (and almost all real data is more or less noisy) made much of the scientific debates more about opinions of the data than the data itself.

One such fierce debate[1] in the nineteenth century was between two medical professionals who debated (both with data) the cause of cholera, a bacterial infection that was often fatal. The cholera outbreak in London in 1849 was especially brutal, claiming more than 14,000 lives in a single year. The cause of the illness was unknown at that time, and two competing theories from two researchers emerged. Dr. William Farr, a well-respected and established epidemiologist, argued that cholera was caused by air pollution created by decomposing and unsanitary matter (officially called the miasma theory). Dr. John Snow, also a successful epidemiologist who was not as widely known as Farr, put forth the theory that cholera was spread by consuming water that was contaminated by a "special animal poison" (this was prior to the discovery of bacteria and germs). The two debated for years.

Farr published the "Report on the Mortality of Cholera in England 1848–49" in 1852, in which he included a table of data with eight possible explanatory variables collected from the 38 registration districts of London.

[1] And worthy of a bona fide Hollywood plot as well. See http://snowthemovie.com/

In the paper, Farr presented some relatively simple (by today's standards) statistics and established a relationship between the average elevation of the district and cholera deaths (lower areas had more deaths). Although there was also a relationship between cholera deaths and the source of drinking water (another one of the eight variables he gathered), he concluded that it was not nearly as significant as the elevation. Farr's theory had data and logic and was accepted by his peers. It was adopted as fact of the day.

Dr. John Snow was passionate and vocal about his disbelief in Farr's theory and relentless in proving his own. It's said he even collected data by going door to door during the cholera outbreak in the Soho district of 1854. It was from that outbreak and his collected data that he made his now famous map, shown in Figure 1-1. The hand-drawn map of the Soho district included little tick marks at the addresses where cholera deaths were reported. Overlaying the location of water pumps where residents got their drinking water showed a rather obvious clustering around the water pump on Broad Street. With his map and his passionate pleas, the city did allow the pump handle to be removed, and the epidemic in that region subsided. However, this wasn't enough to convince his critics. The cause of cholera was heavily debated even beyond John Snow's death in 1858.

The cholera debate included data and visualization techniques (long before computers), yet neither had been able to convince the opposition. The debate between Snow and Farr was re-examined in 2003, when statisticians in the UK evaluated the data Farr published in 1852 with modern methods. They found that the data Farr pointed to as proof of an airborne cause actually supported Snow's position. They concluded that if modern statistical methods had been available to Farr, the data he collected would have changed his conclusion. The good news, of course, is that these statistical methods are available to you today.

Twentieth Century Data Analysis
A few years before Farr and Snow debated cholera, an agricultural research station north of London at Rothamsted began conducting experiments on the effects of fertilizer on crop yield. They spent decades conducting experiments and collecting data on various aspects such as crop yield, soil measurements, and weather variables. Following a modern-day logging approach, they gathered the data and diligently stored it, but they were unable to extract the full value from it. In 1919 they hired a brilliant young statistician named Ronald Aylmer Fisher to pore through more than 70 years of data and help them understand it. Fisher quickly ran into a challenge with the data being confounded: he found it difficult to isolate the effect of the fertilizer from other effects, such as weather or soil quality. This challenge would lead Fisher toward discoveries that would forever change not just the world of statistics, but almost every scientific field in the twentieth century.
What Fisher discovered (among many revolutionary contributions to statistics) is that if an experiment was designed correctly, the various effects could not just be separated, but also measured, and their influence calculated. With a properly designed experiment, he was able to isolate the effects of weather, soil quality, and other factors so he could compare the effects of various fertilizer mixtures. And this work was not limited to agriculture; the same techniques Fisher developed at Rothamsted are still widely used today in everything from medical trials to archaeology dig sites.
Fisher's work, and the work of his peers, helped revolutionize science in the twentieth century. No longer could scientists simply collect and present their data as evidence of their claim, as they had in previous centuries. They now had the tools to design robust experiments and the techniques to model how the variables affected their experiments and observations.


FIGURE 1-1 Hand-drawn map of the areas affected by cholera

At this point, the world of science included statistical models. Much of statistical and scientific education focused on developing and testing these models and the assumptions behind them. Nearly every statistical problem started with the question—"What's the model?"—and ended with the model populated to allow description and even prediction using the model. This represented a huge leap forward and enabled research never before possible. If it weren't for computers, the world would probably still consider these techniques to be modern. But computers are ubiquitous, and they have enabled a whole new approach to data analysis that was both impossible and unfathomable prior to their development.

Twenty-First Century Data Analysis It’s difficult to pull out any single person or event that captures where data analysis is today like Farr and Fisher captured the previous stages of data analysis. The first glimpse at what was on the horizon came

c01.indd

05:3:58:PM 01/08/2014

Page 4

Gathering Data Analysis Skills

from John Tukey, who wrote in 1962 that data analysis should be thought of as different from statistics (although analysis leveraged statistics). He stated that data analysis must draw from science more than mathematics (can you see the term “data science” in there?). Tukey was not only an accomplished statistician, having contributed numerous procedures and techniques to the field, but he was also an early proponent of visualization techniques for the purpose of describing and exploring the data. You will come back to some of Tukey’s work later in this chapter. Let’s jump ahead to a paper written in 2001 by Leo Breiman, a statistician who focused on machine learning algorithms (which are discussed in Chapter 9). In the paper he describes a new culture of data analysis that does not focus on defining a data model of nature but instead derives an algorithmic model from nature. This new culture has evolved within computer science and engineering largely outside (or perhaps alongside) traditional statistics. New approaches are born from the practical problems created by the information age, which created large quantities of complex and noisy data. The revolutionary idea that Breiman outlined in this paper is that models should be judged on their predictive accuracy instead of validating the model with traditional statistical tests (which are not without value by the way). At face value you may think of testing “predictive accuracy” by gathering data today and determining how it predicts the world of tomorrow, but that’s not what the idea is about. The idea is about splitting the data of today into two data sets, using the first data set to generate (or “train”) an algorithm and then validating (or “test”) its predictive accuracy on the second data set. To increase the power of this approach, you can iterate through this process multiple times, splitting the data into various training and test sets, generating and validating as you go. This approach is not well suited to small data sets, but works remarkably well with modern data sets. There are several main differences between data analysis in the modern information age and the agricultural fields of Rothamsted. First, there is a large difference in the available sample size. “Classic” statistical techniques were largely limited by what the computers of the day could handle (“computers” were the people hired to “compute” all day long). With generally smaller samples, generating a training and test was impractical. However, modern environments are recording hundreds of variables generated across thousands of systems. Large sample sizes are the norm, not the exception. Second, for many environments and industries, a properly designed experiment is unlikely if not completely impossible. You cannot divide your networks into control and test groups, nor would you want to test the efficacy of a web application firewall by only protecting a portion of a critical application. One effect of these environmental limits is a much higher noise-to-signal ratio in the data. The techniques of machine learning (and the related field of data mining) have evolved with the challenges of modern data in mind. Finally, knowledge of statistics is just one skill of many that contributes to successful data analysis in the twenty-first century. With that in mind, the next section spends some time looking at the various skills and attributes that support a good data analysis.
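To make the train-and-validate splitting described above a little more concrete, here is a minimal sketch in R. The events data frame, its is_malicious column, and the 70/30 split are hypothetical stand-ins used only for illustration.

# Split one data set into a training set and a test (validation) set.
set.seed(1)                                   # make the random split reproducible
train_rows <- sample(nrow(events), size = round(0.7 * nrow(events)))
train <- events[train_rows, ]                 # ~70% used to fit ("train") a model
test  <- events[-train_rows, ]                # ~30% held out for validation

# Fit a simple classifier on the training set (is_malicious assumed to be 0/1)...
model <- glm(is_malicious ~ ., data = train, family = binomial)
# ...and judge it by its predictive accuracy on data it has never seen.
predicted <- predict(model, newdata = test, type = "response") > 0.5
mean(predicted == test$is_malicious)          # proportion predicted correctly

# Repeating the split with different random draws (cross-validation) gives a
# more stable estimate of how well the model generalizes.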

Gathering Data Analysis Skills
We know there is a natural allure to data science, and everyone wants to achieve that sexy mystique surrounding security data analysis. Although we have focused on this concept of data analysis so far, it takes more than just analytic skills to create the mystique that everyone is seeking. You need to combine statistics and data analysis with visualization techniques, then leverage computing power and mix in a healthy dose of domain (information security) knowledge. All of this begins not with products or tools but with your own skills and abilities.


Before getting to the skills, there are a couple of underlying personality traits we see in data analysts that we want to discuss: curiosity and communication. Working with data can at times be a bit like an archeological dig—spending hour after hour with small tools in the hope of uncovering even the tiniest of insights. So it is with data analysis—pearls of wisdom are nestled deep within data, just waiting to be discovered and presented to an eagerly awaiting audience. It is only with that sense of wonder and curiosity that the hours spent cleaning and preparing data are not just tolerable, but somehow exciting and worth every moment. Because there is that moment, when you're able to turn a light on in an otherwise dark room, when you can describe some phenomenon or explain some pattern, when it all becomes worth it. That's what you're after. You are uncovering those tiny moments of enlightenment hidden in plain sight, if you know where to look.
Once you turn that light on, you have to bring others into the room for the discovery; otherwise, you will have constructed a house that nobody lives in. It's not enough to point at your work and say, "See!" You have to step back and think of the best way to communicate your discovery. The complexity present in the systems and the analysis makes it difficult to convey the results in a way that everyone will understand. Oftentimes it takes a combination of words, numbers, and pictures to communicate the data's insights. Even then, some people will take away nothing, and others will take away too much. But there is still a need to condense this complexity into a paragraph, table, or graphic.
Although we could spend an entire book creating an exhaustive list of skills needed to be a good security data scientist, this chapter covers the following skills/domains that a data scientist will benefit from knowing within information security:

● Domain expertise—Setting and maintaining a purpose to the analysis
● Data management—Being able to prepare, store, and maintain data
● Programming—The glue that connects data to analysis
● Statistics—To learn from the data
● Visualization—Communicating the results effectively

It might be easy to label any one of these skills as the most important, but in reality, the whole is greater than the sum of its parts. Each of these contributes a significant piece to the workings of security data science.

Domain Expertise
The fact that a data scientist needs domain expertise may seem obvious and should go without saying, but data analysis is only meaningful when performed with a higher purpose in mind. It's your experience with information security that will guide the direction of the analysis, provide context to the data, and help apply meaning to the results. In other words, domain expertise is beneficial at the beginning, middle, and end of all your data analysis efforts.

And Why Expertise Shouldn’t Get in the Way We are probably preaching to the choir here. If you are reading this book, it is probably safe to assume that you have domain expertise and see value in moving toward a data-driven approach in information security. Therefore, rather than spend the effort discussing the benefits of domain expertise in data analysis, this


People are smarter than models. There are those who hold the opinion that people will always outperform algorithms (or statistics, or models), and there is some truth to this. Teaching a machine, for example, to catch a fly ball is remarkably challenging. As Kahneman and Klein point out in their 2009 paper titled "Conditions for Intuitive Expertise: A Failure to Disagree," however, determining when people will outperform algorithms depends heavily on the environment of the task. If the environment is complex and feedback is delayed or ambiguous, algorithms will generally, and fairly consistently, outperform human judgment. So the question becomes: how complex is the security of information systems, and how clear is the feedback? When you make a change or add a security control, how much feedback do you receive on how well it is actually protecting the information asset? The reality is that information security occurs in a very complex environment, but that doesn't mean you put all your eggs in the algorithm basket. What it does mean is that you should have some healthy skepticism about any approach that relies purely on human judgment, and you should seek ways to augment and support that expertise. That's not to compare algorithms to human judgment; it's not wise to set up an either-or choice. You do, however, want to compare human judgment combined with algorithms and data analysis against human judgment alone. You do not want to remove the human element, but you should be skeptical of unsupported opinion. In a complex environment, it is the combination of human intuition and data analysis that will produce the best results and create the best opportunity for learning and securing the infrastructure.
It's just lying with statistics. This expresses a general distrust in statistics and data analysis, which are often abused and misused (and in some cases flat-out made up) for the sake of serving some ulterior motive. In a way, this distrust is grounded in a collective knowledge of just how easy it is to social-engineer people. However, you are in a different situation, since your motive is to learn from the data. You are sitting on mounds of data that hold information and patterns just waiting to be discovered. Not leveraging data analysis because statistics are sometimes misused is like not driving a car because cars are sometimes used as getaway vehicles. You need to be comfortable with adding statistics to your information security toolkit. This is not to say that data analysis is infallible. There may be times when the analysis provides the wrong answer, perhaps through poor data collection, under-trained analysts, a mistake in the process, or simply using Excel (couldn't resist). But you should simply see fewer mistakes when you apply the rigor of data analysis combined with your expertise. Again, the key is combining data analysis and expertise.
This ain't rocket science. This statement has two insinuations. First, it says that whatever problem you're trying to solve, you should be able to solve it with common sense. This goes back to the first point: thinking that people consistently outperform algorithms and that a group of people around a conference table looking at a complex environment can solve the (complex) problem without the need for data analysis. But as we discussed, you should pull a chair up to that conference table for the data analysis, because you are generally better off with it than without it. The second implication of the statement is that data analysis is too complicated and will cost too much (in time, money, or resources). This view is simply misinformed, and the objection is more likely a concern about an uncomfortable change in practices than a concern about time spent with data analysis. Many of the tools are open source (if the organization is averse to open source, there are plenty of commercial solutions out there as well), and the only real commitment is the time to learn some of the basic techniques and methods in this book. The actual analysis itself can be fairly quick, and with the right combination of tools and experience, it can be done in real time.


We don’t have the data. An alternate form of this objection is saying that we don’t have actuarialquality data (which is more prevalent when you start talking about risk analysis). Data detractors argue that anything less than perfect data is worthless and prevents you from creating well-designed experiments. This statement is untrue and quite harmful. If you were to wait around for perfect data, you would always be waiting and many learning opportunities would be missed. More importantly and to the heart of this objection, you don’t need perfect data. You just need methods to learn from the messy data you do have. As Douglas Hubbard wrote in 2010 in his book How to Measure Anything, “The fact is that we often have more data than we think, we need less data than we think, and getting more data through observation is simpler than we think.” So, generally speaking, data for security analysis absolutely exists; often times it is just waiting to be collected. You can, with a few alterations, collect and accurately analyze even sketchy data. Modern data analysis methods have evolved to work with the noisy, incomplete, and imperfect data you have. But we will fall off the edge of the world. There is one last point to consider and it’s not so much an objection to data analysis, but an obstacle in data analysis. When you are seen as a domain expert, you are expected to provide answers with confidence. The conflict arises when confidence is confused with certainty. Data analysis requires just enough self-awareness and humility to create space for doubt in the things you think you know. Even though you may confidently state that passwords should be so many characters long with a certain amount of complexity, the reality is you just don’t know where the balance is between usability and security. Confidence needs to be balanced with humility and the ability to update your beliefs based on new evidence. This obstacle in data analysis is not just limited to the primary analyst. Other domain experts involved in the analysis will have to come face to face with their own humility. Not everyone will want to hear that his or her world isn’t flat.

Programming Skills
As much as we'd like to portray data science as a glamorous pursuit of truth and knowledge, as we've said, it can get a little messy. Okay, that's an understatement. Working with data is a great deal more uncertain and unkempt than people think, and, unfortunately, the mess usually appears early on, when you're attempting to collect and prepare the data. This is something that many statistics classes never prepare their students for: professors hand out rather nice and neat data sets ready to be imported into the analysis tool du jour. Once you leave the comfort of the classroom, you quickly realize that the world is a disorganized and chaotic place, and data (and its subsequent analyses) are a reflection of that fact.
This is a cold, hard lesson in data science: data comes to you in a wide range of formats, states, and quality. It may be embedded in unstructured or semi-structured log files. It may need to be scraped from a website. Or, in extreme cases, data may come in an overly complex and thoroughly frustrating format known as XML. Somehow, you must find a way to collect, coax, combine, and massage what you're given into a format that supports further analysis. Although this could be done with a lot of patience, a text editor, and judicious use of summer interns, the ability to whip together a script to do the work will provide more functionality, flexibility, and efficiency in the long run.
Learning even basic programming skills opens up a whole range of possibilities when you're working with data. It frees you to accept multiple forms of data and manipulate them into whatever formats work best with the analysis software you have. Although there is certainly a large collection of handy data conversion tools available, they cannot anticipate or handle everything you will come across. To be truly effective while working with data, you need to adapt to the data in your world, not vice versa.
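As a small example of what "whipping together a script" can look like, here is a minimal R sketch that pulls a username and source IP address out of semi-structured authentication log lines. The file name, log format, and field names are all hypothetical.

# Read raw, messy log lines and extract structured fields with a regex.
lines <- readLines("auth.log")
pattern <- "Failed password for (\\S+) from (\\d{1,3}(\\.\\d{1,3}){3})"
matches <- regmatches(lines, regexec(pattern, lines))
keep <- matches[lengths(matches) > 0]          # keep only lines that matched
failed <- do.call(rbind, lapply(keep, function(m) {
  data.frame(user = m[2], src_ip = m[3], stringsAsFactors = FALSE)
}))
head(failed)   # a tidy data frame, ready for counting, joining, or plotting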


AES-256-Bit Keys Are Twice as Good as AES-128, Right?
One natural assumption about AES-256-bit keys is that because they are twice as long as AES-128-bit keys, they are twice as secure. We've been around information security people when they force a project to use 256-bit keys because they are "twice as good." Well, let's look into the math. First, you are talking about bits here, and although 256 bits is twice as many bits as 128, 256-bit keys actually have 2^128 times more keys. Break out your slide rules and work through an exercise to try to answer a simple question: if you had access to the world's fastest supercomputer, how many 128-bit keys could you crack? The world's fastest supercomputer (at the time of this writing) is the Tianhe-2 in China, which does around 34 petaflops (34 x 10^15 floating-point operations) per second. If you assume it takes one operation to generate a key and one operation to test it (this is an absurd and conservative assumption), you can test an amazing 17 x 10^15 keys per second. But a 128-bit key has 3.4 x 10^38 possibilities, which means after a full year of cracking 128-bit keys, you will have exhausted 1.6 x 10^-13 percent of the key space. Even if you run the supercomputer for 1,000 years, you will only have searched 0.00000000016 percent of all the possible keys (and spent a fortune on electricity). To put this simply, the probability of brute-force cracking a 128-bit key is already so infinitesimally small that you could easily round it off to zero. But let's be professional here and say, "Moving from a 128-bit key to a 256-bit key moves the probability from really-super-duper-infinitesimally small to really-super-duper-infinitesimally small divided by 2^128."
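If you want to check the sidebar's arithmetic yourself, the whole calculation fits in a few lines of R (using the figures stated above):

# Key-cracking back-of-the-envelope math from the sidebar.
keys_per_sec  <- 34e15 / 2            # one op to generate a key, one to test it
keyspace_128  <- 2^128                # about 3.4e38 possible 128-bit keys
secs_per_year <- 60 * 60 * 24 * 365

pct_searched_1yr <- keys_per_sec * secs_per_year / keyspace_128 * 100
pct_searched_1yr           # roughly 1.6e-13 percent of the key space per year
pct_searched_1yr * 1000    # roughly 1.6e-10 percent after 1,000 years

2^256 / 2^128              # a 256-bit keyspace is 2^128 times larger, not 2x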

Any modern language will support basic data manipulation tasks, but scripting languages such as Python and R appear to be used slightly more often in data analysis than their compiled counterparts (Java and C). However, the programming language is somewhat irrelevant. The end results (and a happy analyst) are more important than picking any "best" language. Whatever gets the job done with the least amount of effort is the best language to use. We generally flip between Python (pandas) and R for cleaning and converting data (or perhaps some Perl if we're feeling nostalgic) and then R or pandas for the analysis and visualization. Learning web-centric languages like HTML, CSS, and JavaScript will help create interactive visualizations for the web, as you'll see in Chapter 11, but web languages are not typically involved in the preparation and analysis of data.
There is a tool worth mentioning in this section—the "gateway tool" between a text editor and programming—known as the spreadsheet (such as Microsoft Excel or OpenOffice Calc). Spreadsheets allow nonprogrammers to do some amazing things and get some quick and accessible results. Although spreadsheets have their own set of challenges and drawbacks, they also have some benefits. If the data is not too large or complex and the task is not deciding the future of the world economy (see the following sidebar), Excel may be the best tool for the job. We strongly suggest seeing Excel as a temporary solution, though. It does well at quick, one-shot tasks, but if you have an analytic task or model that will be used repeatedly, it's best to move to some type of structured programming language.
As a cleaning tool, spreadsheets seem like a very good solution at first (especially for those who have developed some skill with them). But spreadsheets are event-driven, meaning they work through clicking, typing, and dragging: if you want to apply a conversion to a row of data, you have to click to select the row and apply the conversion. This works for small data sets or quick tasks, but trust us, you will (more often than you think) have to go back to the source data and re-clean it. Another day of log files needs to be processed, or you realize you should have pulled another relationship from the source data, or (gasp) you identify an error in the cleaning process. Something, somewhere, and probably more than once, will cause you to go back to the source and repeat the data cleaning and conversion. Leveraging a spreadsheet means a lot more clicking. Writing a script, on the other hand, enables an easy, flexible, and consistent execution of the cleaning process each time it runs.

The Limits of Spreadsheets
On January 16, 2013, J.P. Morgan issued a report to shareholders titled "Report of JPMorgan Chase & Co. Management Task Force Regarding 2012 CIO Losses" (full citation in Appendix B), in which they investigate the loss of $6 billion in trades. They perform a detailed examination of the breakdown and describe the spreadsheet as a contributory factor. "During the review process, additional operational issues became apparent. For example, the model operated through a series of Excel spreadsheets, which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another." They uncovered a huge challenge with spreadsheets, which is the consistency and integrity of the computations made on the data: "Data were uploaded manually without sufficient quality control. Spreadsheet-based calculations were conducted with insufficient controls and frequent formula and code changes were made." They continue on and label the Excel-based model as "error prone" and "not easily scalable." As with any complex system, catastrophe requires multiple failures.² We cannot point to their use of an "error-prone" spreadsheet as the primary cause, but it certainly appears to have contributed to the loss of $6 billion.

See Richard Cook’s “How Complex Systems Fail” for a brief and wonderful discussion of this topic: http://www.ctlab.org/documents/How%20Complex%20Systems%20Fail.pdf

After the data is ready for analysis, you can continue to benefit from understanding how to program. Many of the languages mentioned here have robust data analysis features built into (or onto) them. For example, statisticians developed the R language specifically for the purpose of performing data analysis. Python—with the addition of packages like NumPy, SciPy, and pandas—offers a rich and comparable data analysis environment. But preparing and analyzing the data is not enough. You also need to communicate your results, and one of the most effective methods for that is data visualization (covered in several chapters of this book). Again, Excel can produce graphics, and with judicious modification of the default settings, you can get good visualizations out of it. However, in our opinion, flexibility and detail in data visualization are best achieved through programming. Both Python and R have feature-rich packages for generating and exporting data visualizations. In many cases, you can combine all of these steps and functions in the same script: one script can grab the source data, manipulate and clean it, run the analysis, and then visualize the results.
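Here is a minimal sketch of what such a single, repeatable script might look like in R; the file name and column names are hypothetical.

# One script: acquire, clean, analyze, and visualize.
fw <- read.csv("firewall-denies.csv", stringsAsFactors = FALSE)   # acquire
fw <- fw[!is.na(fw$src_ip), ]                                     # clean: drop incomplete rows
fw$dst_port <- as.integer(fw$dst_port)                            # clean: fix a column type

top_ports <- sort(table(fw$dst_port), decreasing = TRUE)[1:10]    # analyze
barplot(top_ports, las = 2,                                       # visualize
        main = "Top 10 denied destination ports",
        ylab = "Denied connections")
# Re-running the script repeats every step identically; no re-clicking required.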

Data Management
If there is one skill you can hold off on learning, it's data management, but you can put it off only for a while. Within information security (as well as most other disciplines), your data can quickly multiply, and if you don't learn to manage it, the strain of ever-expanding data will take its toll on efficiency and effectiveness.


don’t learn to manage it, the strain of ever-expanding data will take its toll on efficiency and effectiveness. As mentioned, you can leverage spreadsheets for the simple analyses. You will quickly outgrow that stage and should be resolved to expanding your repertoire to programming languages and simple formats like comma-separated value (CSV) files. At this point, you may see some benefits by moving your data into a database, but it still may not be necessary. As the data repository grows, you reach a tipping point, either through the complexity of the data or the volume of data. Moving to a more robust data management solution becomes inevitable. There is a misconception that the large relational databases of yesteryear are reserved for the biggest projects, and that is not a helpful mindset. Many of the database systems discussed in Chapter 8 can be installed on a desktop and make the analysis more efficient and scalable. Once your data management skills become more natural, such skill can benefit even the smallest projects. We’ve installed a local database and imported the data even for some smaller one-time projects. When discussing data management skills, we naturally focus on databases. You want to have enough knowledge to install a relational or NoSQL database to dump the data in and leverage it for analysis. However, data management is more than databases. Data management is also about managing the quality and integrity of the data. You want to be sure the data you are working with isn’t inadvertently modified or corrupted. It doesn’t hurt to have some checks that keep an eye on data quality and integrity, especially over long-term data analysis efforts (metrics). It’s like the concept of unit tests in software development where the smallest piece of testable code in an application is isolated from the larger body of code and checked to determine whether it behaves exactly as expected. You may want to automate some data integrity checking after any new import or conversion, especially when the data analysis has sufficient efficacy to be performed regularly and used as a metric or control. Finally, we work in information security, and we’d be negligent if we didn’t talk about the security of the data for a bit here. Take a step back for some context first. There seems to be a pattern repeating: Some passionate need drives a handful of geniuses to work their tails off to produce an elegant solution, but the security of their system is not their primary concern; meeting the functional need is. As an example, when the UNIX platform was first developed it was intended to be a shared (but closed) platform for multiple users who use the platform for programs they would write. As a result, most of the authentication and permissions were constructed to protect the system from unintentional errors in their programs, and not from malicious users.3 The point here is that “young” technology typically places an emphasis on functionality over security. With the fast-paced and passionate push of the current data revolution, we are definitely seeing more emphasis on functionality and less on security. New data management platforms such as Hadoop and NoSQL environments were designed to solve a data problem and were not designed (initially) with many of the security policies or compliance requirements of most enterprise networks (though they are quickly learning). The result is a distributed computing platform with some difficult security challenges. 
The authentication and security features are far better than the early days of UNIX; they typically do not compare to the security and features of the more established relational databases. We won’t focus too much on this point, but whatever data management platform is chosen, don’t assume the security is built in.
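Circling back to the automated integrity checks mentioned above, a minimal sketch in R might look like the following, run right after each import; the file and column names are hypothetical.

# Simple unit-test-style checks after importing a data set.
scans <- read.csv("monthly-vuln-scan.csv", stringsAsFactors = FALSE)

stopifnot(nrow(scans) > 0)                                                 # did we get any data at all?
stopifnot(all(c("host_id", "vuln_count", "scan_date") %in% names(scans))) # expected columns present
stopifnot(!any(duplicated(scans$host_id)))                                 # one row per host
stopifnot(all(scans$vuln_count >= 0, na.rm = TRUE))                        # counts can't be negative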


³ For an example of the focus on functionality and preventing error over stopping misuse, early authentication systems would store the user passwords in a clear text file. See Morris and Thompson, 1979 (full reference in Appendix B) for a discussion.


Statistics
Perhaps we are a little biased here, but picking up some statistics skills will improve almost every aspect of your life. Not only will it change the way you see and learn from the world around you, but it will also make you more interesting and probably even a bit more attractive to those around you. Seriously, though, statistics (we are discussing it as a single skill here) is a very broad topic and quite a deep well to drink from. We use the term to describe the varied collection of techniques and methods that have evolved (and continue to evolve) in the attempt to learn from data. These skills include the classic statistical approaches as well as newer techniques like data mining and machine learning. Luckily, you can learn from the successes and mistakes of generations of rather brilliant people who worked with data very similar to yours, even if their calculations were performed with pencil and paper rather than silicon circuits. Regardless of your personal beliefs about the utility of statistics and data analysis in information security, there is a vast amount of evidence showing their significant influence and benefit in almost every other field of science. Aside from the obvious "learning from data" approach, there are a few perhaps more subtle reasons to focus on improving your statistics skills:

● Even though data never lies, it is far too easy to be tricked by it—As heuristic beings, we are capable of pulling patterns and meaning out of the world around us. The ability to see subtle connections and patterns is usually helpful, and people use that skill on a daily basis. However, that skill can also mislead you, and you may think you see patterns and connections where none exist. A good understanding of statistics raises awareness of this, and its methods help minimize incorrect conclusions.
● Even though we just said that data never lies, the way it's generated and collected can create deceptive conclusions—Consider that asking for the opinions of those around us may mistakenly confirm our own opinions, because we naturally surround ourselves with like-minded people. Data itself may not be deceptive, but it's quite easy to think the data means something it does not, as in the story of the 1936 election polling (see the following sidebar).

Statistics is not just a collection of tools; it is a collection of toolboxes, each with its own set of tools. You can begin with descriptive statistics, which attempt to simplify the data into numbers that describe aspects of the data. For example, you can calculate the center of the data with the mean, mode, or median; you can describe how spread out the data is with the standard deviation; you can describe the symmetry of the data with the skew; and you can describe the width of the peak with the kurtosis. However, any time you simplify the data, you lose some level of detail, and this is where visualization can serve you well. With visualizations, you create a single representation, or message, that can contain and communicate every data point, without simplification. Think of this type of visualization as a "descriptive visualization," since it is doing nothing more than simply describing the data to its viewers.
Aside from the challenge of oversimplifying, descriptive statistics are also limited to describing only the data that you collect. It is not correct to simply scan a few systems, calculate the mean number of vulnerabilities, and announce that the statistic describes all the systems in the environment. Inferential statistics helps you go beyond just describing the observations and enables you to make statements about a larger population given a smaller, representative sample from that population. The key word there is "representative." Statistics teaches you about the "design of experiments" (thanks to Fisher and his peers), and this will help you gather data in a way that reduces the probability of being misled by it. You want to have confidence that the samples you collect are representative of the whole. That lesson has been learned many times in the past by a good number of people.
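Most of the descriptive measures named above are single function calls in R. Here is a minimal illustration on a hypothetical vector of per-host vulnerability counts; skewness and kurtosis come from an add-on package such as moments, since base R does not provide them.

# Descriptive statistics on a hypothetical vector of vulnerability counts.
vulns <- c(0, 1, 1, 2, 2, 2, 3, 4, 7, 12)

mean(vulns)      # center: arithmetic mean
median(vulns)    # center: middle value, robust to the outliers
sd(vulns)        # spread: standard deviation
fivenum(vulns)   # Tukey's five-number summary (min, lower hinge, median, upper hinge, max)

# install.packages("moments")   # one of several packages providing these
library(moments)
skewness(vulns)  # symmetry of the distribution
kurtosis(vulns)  # heaviness of the tails / shape of the peak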


When Data Deceives
The magazine Literary Digest ran a large public opinion poll in an attempt to predict the 1936 presidential race. They gathered names from a variety of sources, including the telephone directory, club memberships, and magazine subscriptions. They ended up with more than 2 million responses and predicted a clear winner: Alfred Landon (for those not up on their American history, the Democratic candidate, Franklin Roosevelt, won that election, carrying 46 states). The problem with the Literary Digest poll began long before a single response was collected or counted. Their trouble began with where they went looking for the data. Remember, the year was 1936 and the Great Depression in the United States hadn't let up yet. Yet they polled people with phones, club memberships, and magazine subscriptions. They systematically polled the middle and upper classes, which generally leaned toward Landon, and arrived at an answer that was mathematically correct and yet completely wrong.
The data did not lie. If they wanted to know which presidential candidate would get the most votes among Americans with a phone, club membership, or magazine subscription, the data told an accurate story. However, they weren't looking for that story. They wanted to know about all registered voters in the United States, but through their selection of sources they introduced bias into their sample and drew meaning from the data that simply did not exist. The fact that they had an unprecedented 2 million responses did not help improve the accuracy of their poll. Gathering more data with the same systemic flaw just generates a larger sample with the same bias. To drive that point home: in the same 1936 election, a young man named George Gallup gathered a relatively small sample of just 50,000 voters, but he applied a much more representative sampling method and correctly predicted Franklin Roosevelt as the winner. The Literary Digest closed its doors a few years later, but Gallup, Inc. is now an international organization, still conducting surveys and gathering data.

You should always approach statistics with a healthy degree of respect and humility. As you slide further into the depths of applied mathematics, you'll realize how easy it is to find meaning where none exists (technically called a type I error). What is more important to understand here is that this error can occur with or without data; even before you fill a single cell in an Excel spreadsheet, you can make this mistake. The best tools in the toolbox are designed to limit the chance of these types of errors, but statistics alone is not enough. You need the combination of experience and data to decrease the chance of being misled. Errors can and will occur even with this combination, but you can reduce their frequency by applying the rigor and methods of statistics. Such rigor will also place you in a much better position to learn from mistakes when they do occur.
Having placed the application of statistics on a pedestal, we should point out that you can learn a lot from data without advanced statistical techniques. Recall the "descriptive visualization" mentioned previously. Take some time to look at the many visualizations out there; they are generally not built from statistical models, but simply describe some set of data and show the relationships therein. Snow's map of the area around the water pump on Broad Street in Figure 1-1 did not involve logistic regression or machine learning; it was just a visual description of the relationship between addresses and deaths. There is no doubt that you can improve your ability to secure your information assets with simple statistical methods and descriptive visualizations. All it takes is the patience to ask a question, gather the evidence, make sense of it, and communicate it to others.


Visualization (a.k.a. Communication)
The final skill is visualization, but really it is about communication. There are multiple ways to classify the types of visualizations out there, but for this discussion we want to talk about two general types, which are separated by who you want to read and interpret the visualization. The distinction we make here is quite simple: 1) visualizing for ourselves, or 2) visualizing for everyone else. For example, Figure 1-2 shows four common diagnostic plots, produced by plotting the output of R's lm() function (for linear regression); they are used to diagnose the fit of a linear regression model (which you'll run in Chapter 5). Let's face it: these plots are quite ugly and confusing unless you've learned how to read them. We would not include these in our next presentation to the Board of Directors. This type of visualization serves to provide information to the analyst while working with the data, or in this case with a data model.

FIGURE 1-2 Diagnostic plots for regression model of bot infections
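To reproduce plots like those in Figure 1-2 for your own model, the pattern in R is simply to fit with lm() and then plot the fitted object; the data frame and formula below are hypothetical.

# Fit a linear model and draw its four standard diagnostic plots.
fit <- lm(infections ~ population + internet_users, data = country_data)
par(mfrow = c(2, 2))   # arrange the four diagnostics in a 2x2 grid
plot(fit)              # residuals vs. fitted, normal Q-Q, scale-location, leverage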


These graphs are generated as a way to understand certain relationships and attributes of the model. They communicate from the data to the analyst and are used to visually inspect for anomalies, strength of relationships, or other aspects of the data for the purpose of understanding it better. Very little effort is spent on making these attractive or presentable, since they are part of the analysis, not the result.
The other type of visualization exists to communicate from the analyst to others and serves to explain the story (or the lack of a story) the analyst uncovered in the data. These are typically intended to be attractive and carry a clear message, as they are communication tools for non-analysts. Figure 1-3 (which you'll learn to generate in Chapter 5) is derived from the same data as Figure 1-2 but is intended for a completely different audience. Therefore, it is cleaner, and you can pull a message for each of the 48 continental states from this one picture.

FIGURE 1-3 Visualization for communicating density of ZeroAccess bot infections

Combining the Skills
You need some combination of the skills covered in this chapter in order to make the analysis run more smoothly and to improve what you can learn from the data. Although we may have portrayed these skills as belonging to a single person, that is not required. As the data grows and the demand for analysis becomes more embedded in the culture, spreading the work among multiple individuals will help lighten the load. Moreover, if you are just beginning to build your security data science team, you may be setting yourself up for an impossible task if you try to find even one individual with all of these skills. Take the time to talk through each of these points with candidates to ensure there is at least some element of each of the skills discussed here.


Centering on a Question
While we consider data analysis to be quite fun, it is never performed for its own sake. Data analysis is always performed within a larger context, and understanding that context is the key to successful data analysis. Losing sight of that context is like running a race without paying attention to where the finish line is. You want to have a good concept of what you're trying to learn from the data. Therefore, every good data analysis project begins by setting a goal and creating one or more research questions.
Perhaps you have come across a visualization or research result and thought, "Yeah, but so what?" That reaction is probably caused by the lack of a well-prepared research question behind the analysis. Remember, the purpose of data analysis is to learn from the environment; learning can be done with or without data (with varying degrees of success). Creating and following a good research question is a component of good learning, not just of good data analysis. Without a well-formed question guiding the analysis, you may waste time and energy seeking convenient answers in the data, or worse, you may end up answering a question that nobody was asking in the first place.
For example, Figure 1-4 shows the amount and categories of spam blocked at an organization during a given month. Thanks to the logs generated by an email filtering system, it is entirely possible to collect and show this information. However, the questions this data answers (and whatever subsequent actions it may drive) are of little interest to the typical organization. It's hard to imagine someone looking at this graphic and thinking, "Let's understand why travel spam was up in December." Outcomes like those shown in Figure 1-4 are the result of poor question selection or of skipping a question altogether—data analysis for the sake of analyzing data, which does not inform anyone about the environment in any meaningful way.
A good research question around spam might be, "How much time do employees spend on spam that is not blocked by the spam filter?" Just counting how much spam is blocked has little value, since the count has no contextual meaning (nobody can internalize the effective difference between 1,000 and 5,000 spam emails). What you want to know is the impact spam has on employee productivity. Although "productivity" may be a challenge to measure directly, you can flip that around and simply assume it is impossible to be productive while employees are reading and deleting spam. Therefore, what you really want to measure is the time employees spend dealing with unfiltered spam. Once you've framed the question like this, it's clear that you can't look to the spam filter logs to answer it. You really don't care that thousands of emails were blocked at the perimeter, or even what proportion of spam is blocked. With a research question in hand, you now know to collect a measurement of employee time. Perhaps you can look for events logged by email clients when users select the "mark as spam" option. Or perhaps it's important enough to warrant running a short survey, in which you select a sample of users and ask them to record the amount of spam and the time spent going through it for some limited period. Either way, the context and purpose of the analysis is set by the research question, not by the availability of data.


FIGURE 1-4 Amount of spam by category—the result of a poor research question

Creating a Good Research Question
Creating a good research question is relatively straightforward, but it requires a bit of practice, critical thinking, and discipline. Most research questions will serve as pivot points for a decision or action (or inaction), and knowing the context of the result may also help determine what to collect. Going back to the spam example, maybe you learn there is some tolerance for wasted time. If so, maybe you don't need to know how much time is wasted, but just whether the time spent dealing with spam is above or below that tolerance. Planning the analysis with that information could change how the data is sought, or simplify data storage and analysis.
You usually begin with some topic already in mind. Perhaps you are measuring the possible benefit of a technical change, or you are trying to protect a specific asset or data type, or you are simply trying to increase your visibility into a network segment. Even if you have just a general sense of direction, you can begin by coming up with a series of questions or things you'd like to know about it. Once you have a good list of questions, you can whittle those down to one or just a few related questions. Now the fun really begins—you have to make those questions objective.


Consider this simple example. The Human Resources department submits a proposal to post a searchable lunch menu from the company's cafeteria to the Internet. Although this may raise all sorts of questions around controls, processes, and procedures, suppose the core security-oriented decision is limited to either allowing authentication with the corporate username and password or investing in a more expensive two-factor authentication mechanism. You may brainstorm a question like "How much risk does single-factor authentication represent?" Or perhaps, "How effective is two-factor authentication?" These types of questions are really nice and squishy for the initial phase of forming a research question, but they are not well suited to serious analysis. You would struggle to collect evidence of "risk" or "effectiveness" as framed in these questions. So, you must transform them into questions that are more specific and measurable, and that can inform the decisions or actions in context. Perhaps you start by asking how many services require single-factor versus two-factor authentication. You might also like to know how many of those services have been attacked, and with what success, and so on. Perhaps you have access to a honeypot and can research and create a profile of Internet-based brute-force attempts. Perhaps you can look at the corporate instance of Microsoft Outlook Web Access and create a profile of authentication-based attacks on that asset. These are all good questions that are readily answerable with data analysis, and they can produce outcomes that help support a decision.

Exploratory Data Analysis
Now that we've explained how a good data analysis should begin, we want to talk about how things will generally occur in the real world. It'd be great to start each day with a hot, caffeinated beverage, a clear research question, and a bucket of clean data, but in reality you'll usually have to settle for just the hot, caffeinated beverage. Oftentimes you do start off with data and a vague question like, "Is there anything useful in this data?" This brings us back to John Tukey (remember him from earlier in this chapter?). He pioneered a process he called exploratory data analysis, or EDA. It's the process of walking around barefoot in the data, perhaps even rolling around a bit in it. You do this to learn about the variables in the data, their significance, and their relationships to other variables. Tukey developed a whole range of techniques to increase your visibility into and understanding of the data, including the elegantly simple stem-and-leaf plot, the five-number summary, and the helpful box plot diagram. Each of these techniques is explained or used later in this book.
Once you get comfortable with the data, you'll naturally start to ask some questions of it. However, and this is important, you always want to circle back and form a proper research question. As Tukey said in his 1977 book, "Exploratory data analysis can never be the whole story." He refers to EDA as the foundation stone and the first step in data analysis. He also said that, "Exploratory data analysis is an attitude, a state of flexibility, a willingness to look for those things that we believe are not there, as well as those we believe to be there." With that in mind, most of the use cases in this book use exploratory analysis. We will take an iterative approach, and you'll learn as you walk around in the data. In the end, though, you need to remember that data analysis is performed to find an answer to a question that's worthy of asking.
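Several of the Tukey techniques named above are one-liners in base R. A small illustration, using a hypothetical vector of daily alert counts:

alerts <- c(12, 15, 9, 22, 18, 13, 95, 17, 11, 14)  # hypothetical daily alert counts

stem(alerts)      # stem-and-leaf plot, printed right in the console
fivenum(alerts)   # Tukey's five-number summary
boxplot(alerts, horizontal = TRUE,
        main = "Daily alert counts")  # the box plot makes the 95 stand out immediately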

Summary
The cyber world is just too large, has too many components, and has grown far too complex to simply rely on intuition. Generations of people before us have paved the way, and with a mixture of domain expertise, programming experience, and statistics, combined with data management and visualization skills, we can improve our ability to learn from this complex environment through the data it produces.


In the next chapter we will walk you through setting up your data analysis environment, and then proceed into Chapter 3, where you will be guided through a gentle introduction to data analysis techniques.

Recommended Reading
The following are some recommended readings that can further your understanding of some of the topics we touch on in this chapter. For full information on these recommendations and for the sources we cite in the chapter, please see Appendix B.

"Conditions for Intuitive Expertise: A Failure to Disagree" by Daniel Kahneman and Gary Klein—This dense article covers a lot of ground but gets at the heart of when and why you should look for help in complex environments and when your expertise is enough. The references in this paper also provide a good jumping-off point for questions about how people learn.

"How Complex Systems Fail" by Richard Cook—If you are wondering whether or not you are dealing with complexity, this short and brilliant paper looks at qualities of complex systems and how they fail.

Naked Statistics: Stripping the Dread from Data by Charles Wheelan—This is a great introductory book on statistical concepts and approaches, written in an easy-to-consume style so that the math is not required (though it is included).


2 Building Your Analytics Toolbox: A Primer on Using R and Python for Security Analysis

"If you add a little to a little and do this often, soon the little will become great."
Hesiod


Before you jump right into the various use cases in the book, it's important to ensure you have at least a basic familiarity with the two most prominent languages featured in nearly all of the scenarios: Python (www.python.org/) and R (www.r-project.org/). Although there is an abundance of tools available for data analysis, we feel these two provide virtually all the features necessary to help you go from data to discovery with the least amount of impedance.
A sub-theme throughout the book, and the distilled process at the heart of security data science, is idea, exploration, trial (and error), and iteration. It is ineffective at best to attempt to shoehorn this process into the edit/compile/run workflow found in most traditional languages and development environments. The acts of performing data analyses and creating informative visualizations are highly interactive and iterative endeavors. Despite all of their positive features, even standalone Python and R do not truly enable rich, dynamic interaction with code and data. However, when they are coupled with IPython (http://ipython.org/) and RStudio (www.rstudio.com/), respectively, they are transformed into powerful exploration tools, enabling rapid development and testing of everything from gnarly data munging to generating sophisticated visualizations.
This chapter provides pointers to installation resources for each tool, introduces core features of each language and development environment, and explains the structure of the examples you will find in the remaining chapters of the book. Each chapter will have the following "setup" code (Listing 2-0) at the beginning to ensure you have the proper environment in place to run the code examples. There are example scripts at the end of this chapter that will help you create structured directories if you are typing as you go.

LISTING 2-0

# This is for the R code in the chapter
# set working directory to chapter location
# (change for where you set up files in ch 2)
setwd("~/book/ch02")

# This is for the Python code in the chapter
# loads the necessary Python library for chdir
import os
# set working directory to chapter location
os.chdir(os.path.expanduser("~") + "/book/ch02")

Why Python? Why R? And Why Both?
A discussion of which programming language is better than another for a certain set of tasks often turns (quickly) into a religious war of words that rarely wins converts and never becomes fully resolved. As a security data scientist, you will find that you do not have the luxury of language bias. There will be times when one language shines in one area while a different one shines in another, and you need the skills of a diplomat to bring them both together to solve real problems.
We've homed in on both R/RStudio and Python/IPython/pandas in this book, as they are the two leading data analysis languages/environments, with broad similarities but also with unique elements that make them work well for some tasks and not others. As you read about the rationale behind each choice and as you become proficient in one or both environments, do not lull yourself into a sense of complacency. For readers with an existing programming background, getting up to speed with Python should be pretty straightforward, and you can expect to be fairly proficient within 3–6 months, especially if you convert some of your existing scripts over to it as a learning exercise.


Your code may not be "pythonic" (that is, utilizing the features, capabilities, and syntax of the language in the most effective way), but you will be able to "get useful stuff done." For those who are new to statistical languages, becoming proficient in R may pose more of a challenge. Statisticians created R, and that lineage becomes fairly obvious as you delve into the language. If you can commit to suffering through R syntax and package nuances, plus commit to transitioning some of your existing Excel workflows into R, you too should be able to hang with the cool kids on the #rstats Twitter stream in 3–6 months.

Note
A hallmark of a good data scientist is adaptability, and you should be continually scouring the digital landscape for emerging tools that will help you solve problems. We introduce you to some of these upstarts in Appendix A.

Why Python?

Guido van Rossum created the Python programming language in December of 1989 to solve a problem. He and his colleagues needed a common way to orchestrate system administration tasks that could take advantage of specific features in the operating systems they were using at that time. Although there were existing interpreted, administrator-friendly tools and languages available, none were designed (from Guido van Rossum’s point of view) with the flexibility and extensibility that were baked into the design principles of Python. Python’s flexibility and extensibility (and the fact that it was free as in both “speech” and “beer”) were especially appealing to the scientific, academic, and industrial communities starting in the early 2000s. Innovators in these fields quickly adapted this general-purpose programming language to their own disciplines to solve problems more easily than—ostensibly—the domain-specific languages available at that time. You have to search long and hard to find a file type Python cannot read, a database Python cannot access, or an algorithm Python cannot execute. As you familiarize yourself with the language, Python’s ability to acquire, clean, and transform source data will quickly amaze you, but those tasks are just the early steps in your analysis and visualization process. The “real fun” begins with the pandas (http://pandas.pydata.org/) module, which AQR Capital Management created in 2008 to provide “Pythonic” counterparts to the analytical foundations of languages like R, SAS, or MATLAB. Although Python’s interpreter provides an interactive execution shell, aficionados recognized the need to extend this basic functionality and developed an even more dynamic and robust interactive environment—IPython—to fill the need. When coupled with the pandas module, budding data analysts now have a mature and data-centric toolset available to drive their quest for knowledge.

Why R?

Unlike Python, R’s history is inextricably tied to its domain-specific predecessors and cousins, as it is 100 percent focused on and built for statistical data analysis and visualization. Although it, too, can access and manipulate various file types and databases (and was also designed for flexibility and extensibility), R’s lisp- and S-like syntax, plus its extreme focus on foundational analytics-oriented data types, has kept it, mostly, in the hands of the “data crunchers.”


Base R makes it remarkably simple to run extensive statistical analyses on your data and then generate informative and appealing visualizations with just a few lines of code. More modern R libraries such as plyr and ggplot2 extend and enhance these base capabilities, and they are the foundations of many of the mind- and eye-catching examples of cutting-edge data analysis and visualization you have no doubt come across on the Internet. Like Python, R also provides an interactive execution shell that has enough basic functionality for general needs. Yet the desire for even more interactivity sparked the development of RStudio, which is a combination of integrated development environment (IDE), data exploration tool, and iterative experimentation environment that exponentially enhances R’s default capabilities.

Why Both?

If all you have is a hammer, everything starts looking like a nail. There are times when the flexibility of a general-purpose programming language comes in very handy, which is when you use Python. There are other times when three lines of R code will do something that may take 30 or more lines of Python code (even with pandas) to accomplish. Since your ultimate goal is to provide insightful, accurate, and visually appealing analyses as quickly as possible, knowing which tool to use for which job is a critical skill you must develop to be as effective and efficient as possible. We would be a bit dishonest, though, if we did not concede that there are some things that Python can do (easily or at all) that R cannot, and vice versa. We touch upon some of these in the use cases throughout the book, but many of the—ah—“learning opportunities” will only come from performing your own analyses, getting frustrated (which is the polite way of saying “stuck”), and finding resolution by jumping to another tool to “get stuff done.” This situation comes up frequently enough that there is even an rJython package for R that lets you call Python code from R scripts, and rpy and rpy2 modules for Python that let you call R code from Python scripts. By having both tools in your toolbox, you should be able to tackle most, if not all, of the tasks that come your way. If you do find yourself in a situation where you need functionality you don’t have, both R and Python have vibrant communities that are eager to provide assistance and even help in the development of new functions or modules to fit emerging needs.
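To get a feel for what that interoperability looks like, here is a minimal sketch of calling R from Python through rpy2. Note that rpy2 must be installed separately, its API has shifted a bit between versions, and the vulnerability counts below are invented purely for illustration:

# hand a Python list of (hypothetical) daily vulnerability counts to R
# and ask R to summarize it
import rpy2.robjects as robjects

counts = robjects.IntVector([12, 7, 31, 4, 18])
r_summary = robjects.r["summary"](counts)
print(r_summary)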

Jumpstarting Your Python Analytics with Canopy

It is possible to set up an effective and efficient installation of Python, IPython, and pandas from the links we’ve provided, especially if you are already familiar or proficient with Python; however, we don’t recommend it. For those new to Python, the base installation leaves you with the core interpreter and an extensive set of built-in, standard libraries. You can think of it as having an inexpensive blank canvas and an introductory set of paints and brushes. You’ll need better materials to create a work of art, and that’s where the enhanced statistical, computational, and graphing libraries come in. Even the most stalwart Python aficionado can find it challenging to manage dependencies and updates for the numerous necessary components, which can waste hours of your time. This is especially true if you have to manage analytics processes across multiple operating systems and environments.


To facilitate both ease of installation and maintenance, we highly recommend using the freely available Enthought Canopy Python data analysis environment (www.enthought.com/products/canopy/). Canopy works on Linux, Microsoft Windows, and Mac OS X; has a built-in Python integrated development environment (IDE); incorporates a meta-package manager that will help you keep current with changes in every dependent package and module; and also comes with an IPython console. For those working in organizations that shy away from open source solutions, Enthought also offers commercially supported options for Canopy.

Given that there is a comprehensive installation, setup, and update guide available (http://docs.enthought.com/canopy/quick-start.html), we will not go over step-by-step instructions on how to install Canopy for each platform, but we strongly recommend reviewing the documentation before attempting any of the Python examples in the book. Once the base installation is complete, getting started should be as straightforward as opening up the Canopy application, which will display the welcome screen (see Figure 2-1).

One of the first steps you should perform is to instruct Canopy to display all images inline within the IPython console. This is an optional step, but it will help keep all output self-contained within the Canopy environment. You can change this setting once you have an open Canopy editor session by going into the Preferences window, finding the Python tab, and selecting the Inline (SVG) option for the PyLab Backend preference (see Figure 2-2).

To validate that your environment is set up properly, run the following code in the IPython console area in the editor:

import pandas as pd
import numpy as np

np.random.seed(1492)
test_df = pd.DataFrame({ "var1": np.random.randn(5000) })
test_df.hist()

Verify that it produces the output shown in Figure 2-3. If it does, you have the basic environment installed and are ready to start working through the data analysis examples. If the bar chart is not displayed, you may need to check your installation steps or verify that you have the proper graphics display options mentioned earlier.

Once everything is working properly, if you are not already familiar with Python, you should carve out 10 minutes to read through “Learn Python in 10 Minutes” (www.stavros.io/tutorials/python/) by Stavros Korokithakis, and then spend 10 additional minutes going through the “10 Minutes to Pandas” tutorial (http://pandas.pydata.org/pandas-docs/dev/10min.html) to learn a bit more about the pandas data analysis module.

Understanding the Python Data Analysis and Visualization Ecosystem

Although there are scores of libraries available for Python, a few stand out when it comes to crunching data. We call these libraries an “ecosystem” because each library is developed and supported by a different organization, community, or individual. They coordinate with each other, but the coordination is loose.


FIGURE 2-1 Canopy welcome screen

Here are some libraries that you will find yourself using in nearly every project:

● NumPy (www.numpy.org/)—A library providing foundational capabilities for creating multidimensional containers of generic data, performing a wide range of operations on data, and generating random numbers. It also implements the capability to “broadcast” operations across Python objects, which can make for succinct and highly efficient code.

● SciPy library (www.scipy.org/scipylib/index.html)—Built on top of NumPy, this library makes quick work of array-oriented operations and provides a facility to expand NumPy’s “broadcast” operations to other types of data elements in Python; it also provides additional statistical operations.

● Matplotlib (http://matplotlib.org/)—The most powerful and most commonly used library for turning your data into production-quality images in Python.

● pandas (http://pandas.pydata.org)—A library providing high-performance, easy-to-use data structures and data analysis tools; pandas introduces the DataFrame type into the Python namespace, which we discuss in more detail in the “Introducing Data Frames” section later in the chapter. Although this may cause some die-hard Python folks to cringe, pandas, in essence, makes Python more like R and should make it easier for you to jump between languages.

These modules, combined with IPython, are sometimes referred to as the core components of the SciPy stack (which is confusing, since the stack contains the SciPy library itself). You can read more about the stack at www.scipy.org/.
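To make the “broadcasting” idea concrete, here is a tiny sketch comparing a plain Python loop with the NumPy equivalent; the byte counts are arbitrary values chosen only for illustration:

import numpy as np

# arbitrary example values, purely for illustration
byte_counts = [1200, 5300, 250, 98000]

# plain Python: iterate over every element yourself
kb_loop = [b / 1024.0 for b in byte_counts]

# NumPy: the division is "broadcast" across the whole array in one expression
kb_np = np.array(byte_counts) / 1024.0

print(kb_loop)
print(kb_np)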

FIGURE 2-2 Canopy IDE with preferences open

As you make your way through this ecosystem, you will notice the following code pattern emerge:

import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd


FIGURE 2-3 Test IPython console output

The import statement loads the functions and variables of the Python code in those libraries and makes their names and overall functionality available in the current Python working session. The as component of the statement provides an abbreviated reference for the functions, objects, and variables in the module. Since you’ll be using many of the components of each of the modules in the SciPy stack on a regular basis, you will save time and typing if you create a text file to use as a basic template, with these imports and other frequently reused code built into it. You will, of course, use other packages for connecting to databases, reading from files, and performing other functions, and you can burn countless hours perusing all the nifty modules at the Python Package Index (PyPI), https://pypi.python.org/pypi, but the ones associated with the SciPy stack will become familiar and regular companions on your data science journey.
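One possible shape for such a template is sketched below; the filename, seed value, and display setting are our own choices for illustration, not a prescribed standard:

# template.py -- one possible starter template for a new analysis script
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd

# a fixed seed makes exploratory results repeatable from run to run
np.random.seed(1492)

# widen pandas console output so wide data frames are easier to scan
pd.set_option("display.width", 120)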


Python “Gotchas”

There are two features of Python that are liable to frustrate and perhaps become problematic for new users. The first “gotcha” is whitespace. Spaces are significant in Python code. There are no {} braces or begin/end pairs to signify a block of code. You must use consistent indentation to identify groups of statements that execute together. Inconsistency will either result in error messages from the interpreter or cause your code to fail or just not work as expected. Most modern text editors and IDEs can be configured to take care of this for you. The second “gotcha” is the lack of a requirement to declare variables before using them. Initializing a variable named breaches to some value and then inadvertently referring to it later as breached may not throw an error in the interpreter, but it will most assuredly generate unexpected output.
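The contrived snippet below (the variable names and values are invented for illustration) shows both issues at once:

# whitespace gotcha: only the indented line belongs to the loop body
breaches = 0
for record in ["incident-a", "incident-b", "incident-c"]:
    breaches = breaches + 1     # runs once per record
print(breaches)                 # 3

# naming gotcha: assigning to a misspelled name silently creates a new variable
breached = breaches + 10        # typo -- 'breaches' itself is unchanged
print(breaches)                 # still 3, and no error warns you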

Canopy’s package manager (http://docs.enthought.com/canopy/quick-start/package_manager.html) makes it very easy to keep the core Python installation and all associated packages updated and current. If you’ve chosen the manual installation route, you should rely on the package manager of your operating system for the base Python interpreter installation. Updating the individual add-on modules can be accomplished with the following short Python script:

import pip
from subprocess import call

for distribution in pip.get_installed_distributions():
    call("pip install --upgrade " + distribution.project_name, shell=True)

A Word about Python Versions

The Python examples in this book were created under Python 2.7. At the time of this writing, Canopy also uses Python 2.7. There are currently two major production versions of Python, 2.7.x and 3.3.x. Python 3 introduced numerous changes to the default behavior found in Python 2.7, and a good number of packages have been updated to be compatible with the newer version. However, many packages are still compatible only with Python 2.7. The stability and ubiquity of Python 2.7 make it a good choice to begin exploring Python for data analysis. For more information on the changes between Python 2.7 and Python 3.3, refer to “What’s New In Python 3.0” (http://docs.python.org/3/whatsnew/3.0.html).
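Two of the better-known behavioral differences, shown here purely as illustration under the Python 2.7 interpreter used in this book, are integer division and print:

# Under Python 2.7 (the version used throughout this book):
print 5 / 2        # prints 2 -- '/' between integers truncates
print "done"       # print is a statement

# The same lines under Python 3 would be written as:
#   print(5 / 2)   # prints 2.5 -- '/' is true division
#   print("done")  # print is a function and requires parentheses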

Setting Up Your R Environment

To build your R/RStudio environment, you will need to download and install R (http://cran.rstudio.com/), and then do the same for RStudio (www.rstudio.com/ide/download/).


Both links provide full installation details for Linux, Windows, and Mac OS X systems, so we won’t delve into the minutiae in this section. You do, however, need to make a choice when you install RStudio, as it comes in two flavors: Desktop and Server. Both provide the same core features:

● Built-in IDE

● Data structure and workspace exploration tools

● Quick access to the R console

● R help viewer

● Graphics panel viewer

● File system explorer

● Package manager

● Integration with version control systems

The primary difference is that one runs as a standalone, single-user application (RStudio Desktop) and the other (RStudio Server) is installed on a server, accessed via browser, and enables multiple users to take advantage of the compute infrastructure. If you are not familiar with R or RStudio, begin by downloading and installing RStudio Desktop. (All examples in this book involving RStudio assume you are working in the Desktop version.)

Note For those of you limited to working with commercially supported tools, Revolution Analytics (www.revolutionanalytics.com/support/) provides commercial offerings and technical support for R.

Once everything is installed, open RStudio and verify that you see the default workspace, which should look similar to Figure 2-4. If all is working correctly, you should take some time to walk through “A (Very) Short Introduction to R” by Paul Torfs and Claudia Brauer (http://cran.r-project.org/doc/contrib/Torfs%2BBrauer-Short-R-Intro.pdf). It will run through just enough of the basics of the R language and RStudio environment to make you dangerous.

Although you can use the built-in package manager with RStudio to install packages, you will eventually come to the realization that using the console method is much more convenient. To get familiar with this process right away, you should install the ggplot2 package, which is the primary graphics library used in the book’s examples. Installation is as straightforward as entering the following into the RStudio console pane:

> install.packages("ggplot2")
Installing package(s) into '/Library/Frameworks/R.framework/
Versions/3.0.0/Resources/library'
(as 'lib' is unspecified)
trying URL 'http://cran.mirrors.hoobly.com/bin/macosx/leopard/
contrib/3.0.0/ggplot2_0.9.3.1.tgz'
Content type 'application/x-gzip' length 2659920 bytes (2.5 Mb)
opened URL
==================================================
downloaded 2.5 Mb

The downloaded binary packages are in
/var/folders/qg/vmtfcv1j7vjfq_p5zw86mk7mxkhymk/T//RtmpiZ5FD3/downloaded_packages

FIGURE 2-4 RStudio’s default workspace


Run the following code to verify that ggplot2 has been installed correctly and to ensure your R/RStudio environment is functional:

library(ggplot2)
set.seed(1492)
test.df = data.frame(var1=rnorm(5000))
ggplot(data=test.df) + geom_histogram(aes(x=var1))

If there are no errors and you see the bar chart in Figure 2-5, your environment is ready to run through the examples in the book. If you do encounter errors, try starting the standalone R (not RStudio) application, re-install the ggplot2 package in that R console, and execute the bar chart code in that environment. If that works, try uninstalling and re-installing RStudio to fix the errors.

FIGURE 2-5 Test R/RStudio output

Like Python, R has a vast repository of useful packages that can simplify many tasks. We will introduce a few of them in the coming chapters, but you should also peruse the Comprehensive R Archive Network (CRAN) (http://cran.r-project.org/web/packages/) to see the breadth and depth covered by a host of contributors.


A Word about R Versions

The R examples in this book were created under R version 3.0. Some package managers may still have R version 2.15 as the default version. It is recommended that you install R from the sources identified in this chapter to ensure maximum compatibility with the packages we use in later chapters.

Introducing Data Frames

If you are coming from another programming language, you should have a basic understanding of general data types such as strings, integers, and arrays. R and Python offer the standard set of data types, but both have one data type in common—the data frame—which truly gives them power. On the surface, a data frame is just a way to hold tabular data (the type of data you see organized in a typical Excel spreadsheet) and may feel like a two-dimensional (2D) array. If you dig a bit deeper, though, you will find that data frames are really an all-in-one combination of a database table, matrix, 2D array, and pivot table with many additional time-saving features. Much like a database table, each column in a data frame has a column name and holds elements of the same type of data. You can perform operations on whole columns, rows, or subsets of each. Adding, merging, flattening, expanding, changing, deleting, and searching for data are all—usually—one-line operations in both languages, as are methods to read and write the contents of data frames to and from files. In essence, Python and R achieve this expressive power by putting intelligence into the data structure and the functions that operate on it. In contrast, other programming languages have less sophisticated data structures, meaning you need to write your own code and create your own data structures to achieve similar results. The following code (Listings 2-1 and 2-2) provides a compact overview of data frame operations in R and Python, respectively, but it is still highly recommended that you check out the aforementioned introductory resources before moving into Chapter 3. As indicated in the Introduction, you can find all the code on the book’s companion website at www.wiley.com/go/datadrivensecurity.
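As a quick taste of what those one-line operations look like on the Python side, here is a minimal sketch using pandas; the host names and vulnerability counts are invented purely for illustration and are not the book’s dataset:

import pandas as pd

# build a small (hypothetical) data frame of hosts and high-severity vuln counts
assets = pd.DataFrame({
    "name":      ["web01", "db01", "mail01", "proxy01"],
    "os":        ["Linux", "Linux", "Windows", "Linux"],
    "highvulns": [2, 0, 5, 1]
})

assets["exposed"] = assets["highvulns"] > 0      # add a derived column
linux_hosts = assets[assets["os"] == "Linux"]    # filter rows by a condition
by_os = assets.groupby("os")["highvulns"].sum()  # pivot-table-style summary
assets.to_csv("assets.csv", index=False)         # write the whole thing to a file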

LISTING 2-1
# Listing 2-1
# R Data Frame Example
# create a new data frame of hosts & high vuln counts
assets.df

LISTING 2-3
> ${DIR}/readme.md
ls -lR ${DIR}
else
echo "Directory "${DIR}" already exists"
fi

LISTING 2-4
REM Listing 2-4
REM Sample Analysis Preparation Script (Windows Shell Script)
SET PDIR=%1
IF EXIST %PDIR% GOTO HAVEPDIR
REM create the project directory skeleton
MKDIR %PDIR%
MKDIR %PDIR%\R
MKDIR %PDIR%\data
MKDIR %PDIR%\docs
MKDIR %PDIR%\output
MKDIR %PDIR%\python
MKDIR %PDIR%\support
MKDIR %PDIR%\tmp
REM create an empty readme placeholder and show the result
TYPE NUL > %PDIR%\readme.md
DIR %PDIR%
GOTO :EOF
:HAVEPDIR
ECHO "Directory exists"

You now only need to type prep NAME whenever you want to start a new project (so, for this project, you type prep ch02). As you develop your own styles and patterns, you can expand this script to include the generation of various templates and even the initialization of source code repositories. Once the structure is in place, it’s time to retrieve, explore, and analyze some data!

Summary

Python and R are key components of a security data scientist’s toolbox. Python’s similarity to existing scripting languages; its large and supportive community; its diverse data manipulation capabilities; and recent additions of robust statistics, graphics, and computational packages make it an excellent choice for many kinds of analytics work. R’s statistical foundations, equally large and supportive contributor base, robust library of packages, and growing popularity within the analytics community make it one of the “must learn/use” languages for data science tasks. While it’s possible to work with standard/base installations of each language, using specialized development environments will enable you to focus on your analysis work instead of system administration tasks.

The “data frame” is an “intelligent data structure” that is behind much of the power of both R’s and Python’s data-crunching capabilities. It combines the capabilities of a database, pivot table, matrix, and spreadsheet, and we’ll be introducing more features of data frames in the next chapter as we walk you through the basic framework of a security data analysis project.

Recommended Reading

The following are some recommended readings that can further your understanding of some of the topics we touch on in this chapter. For full information on these recommendations and for the sources we cite in the chapter, please see Appendix B.

The R Book by Michael J. Crawley—One of the most comprehensive R texts available; it provides examples but also serves as a complete R reference book.

Learning R by Richard Cotton—Provides an excellent conversational introduction to the R programming language through numerous step-by-step examples.

Learn Python the Hard Way by Zed A. Shaw—Pressure makes diamonds out of coal, and the disciplined nature of the text and exercises (requiring actual typing to complete) will have you going from “0” to “Python” in short order if you can stick with it.

Learning Python by Mark Lutz—If the brutal nature of Learn Python the Hard Way is a bit much for you, this text offers a more traditional approach to getting acclimated to the Python ecosystem.


3
Learning the “Hello World” of Security Data Analysis

“From one thing, know ten thousand things.”
Miyamoto Musashi, The Book of Five Rings


If you’ve ever tried to learn a new programming language, there’s a good chance you started off with a “Hello World” example that quickly introduces basic language structure and code execution. The immediate sense of accomplishment as the syntax is verified by the compiler/interpreter and the familiar two-word output is displayed becomes a catalyst for the notion that, soon, you shall have the ability to bend this new language to your will. This chapter takes the “Hello World” concept and expands it into a walk-through of a self-contained, introductory security data analysis use case that you will be able to follow along with, execute, and take concepts from as you start to perform your own analyses. There are parallel examples in Python and R to provide a somewhat agnostic view of the similarities, strengths, and differences between the two languages in a real-life data analysis context. If you’re not familiar with one or both of those languages, you should read Chapter 2 and at least skim some of the external resources referenced there. This is a good place to reinforce the recommendation to use IPython Notebooks or RStudio for your analyses and exploration because they provide very robust and forgiving environments, which means you will be much more productive compared to the alternative of writing, saving, and executing scripts within the bare interpreter shells. Remember, all the source code, sample data, and visualizations are on the book’s website (www.wiley.com/go/datadrivensecurity), so there’s no need for transcription. You can just cut/paste and focus on the flow and concepts presented in the examples. Listings 3-0 and 3-1 provide the setup code for this chapter.

LISTING 3-0
# This is for the R code in the chapter
# set working directory to chapter location
# (change for where you set up files in ch 2)
setwd("~/book/ch03")
# make sure the packages for this chapter
# are installed, install if necessary
pkg