The Amnesic Incognito Live System
Tails, or The Amnesic Incognito Live System, is a security-focused Debian-based Linux distribution aimed at preserving privacy and anonymity. All of its incoming and outgoing connections are forced through Tor, and any non-anonymous connections are blocked. The system is designed to be booted as a live DVD or live USB and leaves no digital footprint on the machine unless explicitly told to do so. Tails also supports UEFI Secure Boot.
Tails was first released on 23 June 2009. It is the successor to Incognito, a discontinued Gentoo-based Linux distribution. The Tor Project provided financial support for its development in the project's early days. Tails has also received funding from the Open Technology Fund, Mozilla, and the Freedom of the Press Foundation.
- GNOME desktop
- Tor (anonymity network) with stream isolation and support for regular, obfs3, and obfs4 bridges
- NetworkManager for easy network configuration
- Tor Browser, a web browser based on Mozilla Firefox and modified to protect anonymity with:
- HTTPS Everywhere, which transparently enables SSL-encrypted connections to a large number of major websites
- uBlock Origin to remove advertisements.
Note: Because Tails includes uBlock Origin (unlike the standard Tor Browser Bundle), a website could check whether advertising is being blocked to infer that a visitor is running Tails, whose user base is much smaller than that of the Tor Browser Bundle. This fingerprinting risk can be avoided by disabling uBlock Origin.
- Pidgin preconfigured with OTR for end-to-end encrypted instant messaging
- OnionShare for anonymous file sharing
- Thunderbird email client with Enigmail for OpenPGP support
- Liferea feed aggregator
- Aircrack-ng for auditing Wi-Fi networks
- Electrum, an easy-to-use bitcoin client
Kali Linux ships with around 600 pre-installed penetration-testing tools, including Armitage (a graphical cyber-attack management tool), Nmap (a port scanner), Wireshark (a packet analyzer), Metasploit (a penetration-testing framework, awarded best penetration-testing software), John the Ripper (a password cracker), sqlmap (an automatic SQL injection and database takeover tool), Aircrack-ng (a software suite for penetration-testing wireless LANs), and the Burp Suite and OWASP ZAP web application security scanners.
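To make concrete what a port scanner like Nmap does at its core, here is a minimal sketch in Python (this is an invented illustration of the basic technique, not how Nmap itself is implemented; the host and port list are example values):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        # connect_ex returns 0 on success instead of raising an exception
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Probe a few well-known ports on the local machine
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Real scanners add much more: SYN (half-open) scanning, service and version detection, OS fingerprinting, and timing controls to evade detection.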
It was developed by Mati Aharoni and Devon Kearns of Offensive Security through a rewrite of BackTrack, their previous information security testing Linux distribution based on Knoppix. Originally, it was designed with a focus on kernel auditing, from which it got its name, Kernel Auditing Linux. The name is sometimes incorrectly assumed to come from Kali, the Hindu goddess. The third core developer, Raphaël Hertzog, joined them as a Debian expert.
Kali Linux’s popularity grew when it was featured in multiple episodes of the TV series Mr. Robot. Tools highlighted in the show and provided by Kali Linux include Bluesniff, Bluetooth Scanner (btscanner), John the Ripper, Metasploit Framework, Nmap, Shellshock, and Wget.
Debian (/ˈdɛbiən/), also known as Debian GNU/Linux, is a Linux distribution composed of free and open-source software, developed by the community-supported Debian Project, which was established by Ian Murdock on August 16, 1993. The first version of Debian (0.01) was released on September 15, 1993, and its first stable version (1.1) was released on June 17, 1996. The Debian Stable branch is the most popular edition for personal computers and servers. Debian is also the basis for many other distributions, most notably Ubuntu.
Debian is one of the oldest operating systems based on the Linux kernel. The project is coordinated over the Internet by a team of volunteers guided by the Debian Project Leader and three foundational documents: the Debian Social Contract, the Debian Constitution, and the Debian Free Software Guidelines. New distributions are updated continually, and the next candidate is released after a time-based freeze.
Since its founding, Debian has been developed openly and distributed freely according to the principles of the GNU Project. Because of this, the Free Software Foundation sponsored the project from November 1994 to November 1995. When the sponsorship ended, the Debian Project formed the nonprofit organization Software in the Public Interest to continue financially supporting development.
A Linux distribution (often abbreviated as distro) is an operating system made from a software collection that is based upon the Linux kernel and, often, a package management system. Linux users usually obtain their operating system by downloading one of the Linux distributions, which are available for a wide variety of systems ranging from embedded devices (for example, OpenWrt) and personal computers (for example, Linux Mint) to powerful supercomputers (for example, Rocks Cluster Distribution).
A typical Linux distribution comprises a Linux kernel, GNU tools and libraries, additional software, documentation, a window system (the most common being the X Window System), a window manager, and a desktop environment.
Most of the included software is free and open-source software made available both as compiled binaries and in source code form, allowing modifications to the original software. Usually, Linux distributions optionally include some proprietary software that may not be available in source code form, such as binary blobs required for some device drivers.
A Linux distribution may also be described as a particular assortment of application and utility software (various GNU tools and libraries, for example), packaged together with the Linux kernel in such a way that its capabilities meet the needs of many users. The software is usually adapted to the distribution and then packaged into software packages by the distribution’s maintainers. The software packages are available online in so-called repositories, which are storage locations usually distributed around the world. Beside glue components, such as the distribution installers (for example, Debian-Installer and Anaconda) or the package management systems, there are only very few packages that are originally written from the ground up by the maintainers of a Linux distribution.
Almost one thousand Linux distributions exist. Because of the huge availability of software, distributions have taken a wide variety of forms, including those suitable for use on desktops, servers, laptops, netbooks, mobile phones and tablets, as well as minimal environments typically for use in embedded systems. There are commercially backed distributions, such as Fedora (Red Hat), openSUSE (SUSE) and Ubuntu (Canonical Ltd.), and entirely community-driven distributions, such as Debian, Slackware, Gentoo and Arch Linux. Most distributions come ready to use and pre-compiled for a specific instruction set, while some distributions (such as Gentoo) are distributed mostly in source code form and compiled locally during installation.
According to Cyber Risk Analytics’ “2019 Midyear Quick View Data Breach Report,” the first half of 2019 saw more than 3,800 publicly disclosed breaches with more than 4.1 billion records exposed. This figure represents a 54% increase over reported breaches and a 52% increase in the number of compromised records compared with the same time frame in 2018. More than 60% of the reported breaches were the result of human error, highlighting an ever-increasing need for cybersecurity education, as well as highly skilled and trained cybersecurity professionals.
According to a Cyber Seek report, the number of cybersecurity job openings in the U.S. stands at roughly 313,735, with nearly 716,000 cybersecurity professionals employed in today’s workforce. Projections continue to be robust further out: CSO expects that number to hit 500,000 by 2021, with more than 3 million cybersecurity jobs open worldwide that same year.
When evaluating prospective InfoSec candidates, employers frequently look to certification as an important measure of excellence and commitment to quality. We examined five InfoSec certifications we consider to be leaders in the field of information security today:
- CEH: Certified Ethical Hacker
- CISM: Certified Information Security Manager
- CompTIA Security+ and CompTIA PenTest+
- CISSP: Certified Information Systems Security Professional
- CISA: Certified Information Systems Auditor
This year’s list includes entry-level credentials, such as Security+, as well as more advanced certifications, such as the CEH, CISSP, CISM and CISA. Because the field of information security is wide and varied, we also offer some additional certification options in the last section that cover choices outside our top five. According to Cyber Seek, more employers are seeking CISA, CISM and CISSP certification holders than there are credential holders, which makes these credentials a welcome addition to any certification portfolio.
Absent from our list of the top five is the SANS GIAC Security Essentials (GSEC). The GSEC is still a very worthy credential, but the job board numbers for the CISA were so solid that it merited a spot in the top five.
Security-related job roles cover a lot of ground, such as information security specialist, security analyst, network security administrator, system administrator (with security as a responsibility) and security engineer, as well as specialized roles like malware engineer, intrusion analyst and penetration tester.
Average salaries for information security specialists and security engineers – two of the most common job roles – vary depending on the source. For example, Simply Hired reports $30,263 for specialist positions, whereas Glassdoor’s national average is almost $68,000. For security engineers, Simply Hired reports almost $95,000, while Glassdoor’s average is more than $131,000, with salaries on the high end reported at $144,000.
If you’re serious about advancing your career in the IT field and are interested in specializing in security, certification is a great choice. It’s an effective way to validate your skills and show a current or prospective employer that you’re qualified and properly trained.
Before examining the details of the top five InfoSec certifications, check results from our informal job board survey. It reports the number of job posts nationwide in which our featured certs were mentioned on a given day. This should give you an idea of the relative popularity of each certification.
Job board search results (in alphabetical order, by cybersecurity certification)
| Certification | Simply Hired | Indeed | LinkedIn Jobs | TechCareers | Total |
|---|---|---|---|---|---|
Beyond the top 5: More cybersecurity certifications
In addition to these must-have credentials, there are many other certifications available to fit the career needs of any IT professional interested in information security.
While it didn’t make the top five this year, the SANS GIAC Security Essentials (GSEC) remains an excellent entry-level credential for IT professionals seeking to demonstrate that they not only understand information security terminology and concepts but also possess the skills and technical expertise necessary to occupy “hands-on” security roles.
If you find incident response and investigation intriguing, check out the Logical Operations CyberSec First Responder (CFR) certification. This ANSI-accredited and U.S. DoDD-8570 compliant credential recognizes security professionals who can design secure IT environments, perform threat analysis, and respond appropriately and effectively to cyberattacks. Logical Operations offers other certifications, including the Master Mobile Application Developer (MMAD), Certified Virtualization Professional (CVP), Certified Cyber Secure Coder and CloudMASTER.
There are many other certifications to explore or keep your eye on. The associate-level Cisco CCNA Cyber Ops certification is aimed at those who work as analysts in security operations centers (SOCs) in large companies and organizations. Candidates who qualify through Cisco’s global scholarship program may receive free training, mentoring and testing to help them achieve the CCNA Cyber Ops certification. The CompTIA Cybersecurity Analyst (CySA+), which launched in 2017, is a vendor-neutral certification designed for professionals with three to four years of security and behavioral analytics experience.
The Identity Management Institute (IMI) offers several credentials for identity and access management, data protection, identity protection, identity governance, and more. The IAPP, which focuses on privacy, has a small but growing number of certifications as well.
The SECO-Institute, in cooperation with the Security Academy Netherlands and EXIN, is behind the Cyber Security & Governance Certification Program, an up-and-coming European option that may be headed for the U.S. in the next year or two.
Finally, it may be worth your time to browse the Chartered Institute of Information Security accreditations, which are the U.K. equivalent of the U.S. DoDD 8570 certifications and the corresponding 8140 framework.
By Andrew Hoffman
While many resources for network and IT security are available, detailed knowledge regarding modern web application security has been lacking—until now. This practical guide provides both offensive and defensive security concepts that software engineers can easily learn and apply.
Andrew Hoffman, a senior security engineer at Salesforce, introduces three pillars of web application security: recon, offense, and defense. You’ll learn methods for effectively researching and analyzing modern web applications—including those you don’t have direct access to. You’ll also learn how to break into web applications using the latest hacking techniques. Finally, you’ll learn how to develop mitigations for use in your own web applications to protect against hackers.
- Explore common vulnerabilities plaguing today’s web applications
- Learn essential hacking techniques attackers use to exploit applications
- Map and document web applications for which you don’t have direct access
- Develop and deploy customized exploits that can bypass common defenses
- Develop and deploy mitigations to protect your applications against hackers
- Integrate secure coding best practices into your development lifecycle
- Get practical tips to help you improve the overall security of your web applications
From the Preface
Web Application Security walks you through a number of techniques used by talented hackers and bug bounty hunters to break into applications, then teaches you the techniques and processes you can implement in your own software to protect against such hackers.
This book is designed to be read from cover to cover, but can also be used as an on-demand reference for particular types of recon techniques, attacks, and defenses against attacks. Ultimately, this book is written to aid the reader in becoming better at web application security in a way that is practical, hands-on, and follows a logical progression such that no significant prior security experience is required.
Prerequisite Knowledge and Learning Goals
This is a book that will not only aid you in learning how to defend your web application against hackers, but will also walk you through the steps hackers take in order to investigate and break into a web application. Throughout this book we will discuss many techniques that hackers are using today to break into web applications hosted by corporations, governments, and occasionally even hobbyists. Following sufficient investigation into the previously mentioned techniques, we begin a discussion on how to secure web applications against these hackers.
In doing so you will discover brand new ways of thinking about application architecture. You will also learn how to integrate security best practices into an engineering organization. Finally, we will evaluate a number of techniques for defending against the most common and dangerous types of attacks that occur against web applications today.
After completing Web Application Security you will have the required knowledge to perform recon techniques against applications you do not have code-level access to. You will also be able to identify threat vectors and vulnerabilities in web applications, and craft payloads designed to compromise application data, interrupt execution flow, or interfere with the intended function of a web application. With these skills in hand, and the knowledge gained from the final section on securing web applications, you will be able to identify risky areas of a web application’s codebase and understand how to write code to defend against attacks that would otherwise leave your application and its users at risk.
The potential audience for this book is quite broad, but the style in which the book is written and how the examples are structured should make it ideal for anyone with an intermediary-level background in software engineering.
Minimum Required Skills
In this book, an “intermediary-level background in software engineering” implies the following:
- You can write basic CRUD (create, read, update, delete) programs in at least one programming language.
- You can write code that runs on a server somewhere (such as backend code).
- You know what HTTP is, and can make, or at least read, GET/POST calls over HTTP in some language or framework.
- You can write, or at least read and understand, applications that make use of both server-side and client-side code, and communicate between the two over HTTP.
- You are familiar with at least one popular database (MySQL, MongoDB, etc.).
These skills represent the minimum criteria for successfully following the examples in this book. Any experience you have beyond these bullet points is a plus and will make this book that much easier for you to consume and derive educational value from.
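To make the skills list above concrete, here is a hedged sketch of the kind of client/server HTTP exchange it assumes you can read. It uses only the Python standard library; the `/status` route and the JSON payload are invented for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Server side: answer every GET with a small JSON document."""

    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def fetch_status(port):
    """Client side: issue a GET over HTTP and parse the JSON response."""
    with urlopen(f"http://127.0.0.1:{port}/status") as resp:
        return json.load(resp)

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(fetch_status(server.server_address[1]))  # {'status': 'ok'}
    server.shutdown()
```

If you can follow both halves of this exchange, the request on the wire and the handler that answers it, you meet the bar the author sets.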
About the Author
- Publication date : March 2, 2020
- Print length : 331 pages
- Publisher : O’Reilly Media; 1st edition (March 2, 2020)
- ASIN : B085FW7J86
Who Benefits Most from Reading This Book?
Prerequisite skills aside, I believe it is important to clarify who will benefit from this book the most, so I’d like to explain who my target audience is. To do so I have structured this section in terms of learning goals and professional interests. If you don’t fit into one of the following categories, you can still learn many valuable or at least interesting concepts from this book.
This book was written to stand the test of time, so if you decide later on to pursue one of the occupations in its target audience, all of the knowledge from this book should still be relevant.
Software Engineers and Web Application Developers
I believe it would be fair to say that the primary audience for this book is an early- to mid-career software engineer or web application developer. Ideally, this reader is interested in gaining a deep understanding of either offensive techniques used by hackers, or defensive techniques used by security engineers to defend against hackers.
Often the titles “web application developer” and “software engineer” are interchangeable, which might lead to a bit of confusion considering I use both of them throughout the upcoming chapters. Let’s start off with some clarification.
In my mind, and for the sake of clarity, when I use the term “software engineer,” I am referring to a generalist who is capable of writing software that runs on a variety of platforms. Software engineers will benefit from this book in several ways.
First off, much of the knowledge contained in this book is transferable with minimal effort to software that does not run on the web. It is also transferable to other types of networked applications, with native mobile applications being the first that come to mind.
Furthermore, several exploits discussed in this book take advantage of server-side integrations involving communication with a web application and another software component. As a result, it is safe to consider any software that interfaces with a web application as a potential threat vector (databases, CRM, accounting, logging tools, etc.).
Web application developers
On the other hand, a “web application developer” by my definition is someone who is highly specialized in writing software that runs on the web. They are often further subdivided into frontend, backend, and full stack developers.
Historically, many attacks against web applications have targeted server-side vulnerabilities. As a result I believe this book’s use case for a backend or full stack developer is very transparent and easily understood.
As I explain in the upcoming chapters, many of the ways in which hackers take advantage of today’s web applications originate via malicious code running in the browser. Some hackers are even taking advantage of the browser DOM or CSS stylesheets in order to attack an application’s users.
These points suggest that it is also important for frontend developers who do not write server-side code to be aware of the security risks their code may expose and how to mitigate those risks.
General Learning Goals
This book should be a fantastic resource for anyone in the preceding categories looking to make a career change to a more security-oriented role. It will also be valuable for those looking to learn how to beef up the defenses in their own code or in the code maintained by their organization.
If you want to defend your application against very specific exploits, this book is also for you. This book follows a unique structure, which should enable you to use it as a security reference without ever having to read any of the chapters that involve hacking. That is, of course, if that is your only goal in purchasing this book.
I would suggest reading from cover to cover for the best learning experience, but if you are looking only for a reference on securing against specific types of hacks, just flip the book halfway open and get started reading.
Security Engineers, Pen Testers, and Bug Bounty Hunters
As a result of how this book is structured, it can also be used as a resource for penetration testing, bug bounty hunting, and any other type of application-level security work. If this type of work is relevant or interesting to you, then you may find the first half of the book more to your liking.
This book will take a deep dive into how exploits work at both a code level and an architectural level rather than simply executing well-known open source software (OSS) scripts or making use of paid security automation software. Because of this, there is a second audience for this book: software security engineers, IT security engineers, network security engineers, penetration testers, and bug bounty hunters.
Want to make a little bit of extra money on the side while developing your hacking skills? Read this book and then sign up for one of the bug bounty programs noted in Part III. This is a great way to help other companies improve the security of their products while developing your hacking skills and making some additional cash.
This book will be very beneficial to existing security professionals who understand conceptually how many attacks work but would like a deep dive into the systems and code behind a tool or script.
In today’s security world, it is commonplace for penetration testers to operate using a wide array of prebuilt exploit scripts. This has led to the creation of many paid and open source tools that automate classic attacks, and attacks that can be easily run without deep knowledge regarding the architecture of an application or the logic within a particular block of code.
The exploits and countermeasures contained within this book are presented without the use of any specialized tools. Instead, we will rely on our own scripts, network requests, and the tooling that comes standard in Unix-based operating systems, as well as the standard tooling present in the three major web browsers (Chrome, Firefox, and Edge).
This is not to take away from the value of specialized security tools. In fact, I think that many of them are exceptional and make delivering professional, high-quality penetration tests much easier!
Instead, the reason this book does not contain the use of specialized security tools is so that we can focus on the most important parts of finding a vulnerability, developing an exploit, prioritizing data to compromise, and making sure you can defend against all of the above. As a result, I believe that by the end of this book you will be prepared to go out into the wild and find new types of vulnerabilities, develop exploits against systems that have never been exploited before, and harden the most complex systems against the most persistent attackers.
How Is This Book Organized?
You will soon find that this book is structured quite differently than most other technology books out there. This is intentional. This book is purposefully structured so that there is a nearly 1:1 ratio of chapters regarding hacking (offense) and security (defense).
After beginning our adventure with a bit of a history lesson and some exploration into the technology, tools, and exploits of the past, we will move on to our main topic: exploitation and countermeasures for modern web applications. Hence the subtitle of this book.
The main content in this book is structured into three major parts, with each part containing many individual chapters covering a wide array of topics. Ideally, you will venture through this book in a linear fashion, from page one all the way to the final page. Reading this book in that order will provide the greatest learning possible. As mentioned earlier, this book can also be used as either a hacking reference or a security engineering reference by focusing on the first or second half, respectively.
By now you should understand how to navigate the book, so let’s go over the three main parts of this book so we can grasp the importance of each.
The first part of this book is “Recon,” where we evaluate ways to gain information regarding a web application without necessarily trying to hack it.
In “Recon,” we discuss a number of important technologies and concepts that are essential to master if you wish to become a hacker. These topics will also be important to anyone looking to lock down an existing application, because the information exposed by many of these techniques can be mitigated with appropriate planning.
I have had the opportunity to work with what I believe to be some of the best penetration testers and bug bounty hunters in the world. Through my conversations with them and my analysis of how they do their work, I’ve come to realize this topic is much more important than many other books make it out to be.
Why is recon important?
I would go so far as to say that for many of the top bug bounty hunters in the world, expert-level reconnaissance ability is what differentiates these “great” hackers from simply “good” hackers.
In other words, it’s one thing to have a fast car (in this case, perhaps knowing how to build exploits), but without knowing the most efficient route to the finish line, you may not win the race. A slower car could make it to the finish line in less time than a fast one if a more efficient path is taken.
If fantasy-based analogies hit closer to home, you could think of recon skills as something akin to a rogue in an RPG. In our case, the rogue’s job isn’t to do lots of damage, but instead to scout ahead of the group and circle back with intel. It’s the guy who helps line up the shots and figures out which battles will have the greatest rewards.
The last part in particular is exceedingly valuable, because it’s likely many types of attacks could be logged against well-defended targets. This means you might only get one chance to exploit a certain software hole before it is found and closed.
We can safely conclude that the second use of reconnaissance is figuring out how to prioritize your exploits.
If you are interested in a career as a penetration tester or a bug bounty hunter, this part of the book will be of utmost importance to you. This is largely because in the world of bug bounty hunting, and to a lesser extent penetration testing, tests are performed “black box” style. “Black box” testing is a style of testing where the tester has no knowledge of the structure and code within an app, and hence must build their own understanding of the application through careful analysis and investigation.
The second part of this book is “Offense.” Here the focus of the book moves from recon and data gathering to analyzing code and network requests. Then with this knowledge we will attempt to take advantage of insecurely written or improperly configured web applications.
A number of chapters in this book explain actual hacking techniques used by malicious black hat hackers in the real world. It is imperative that if you are testing techniques found in this book, you do so only against an application that you own or have explicit written permission to test exploits against.
Improper usage of the hacking techniques presented in this book could result in fines, jail time, etc., depending on your country’s laws on hacking activity.
In Part II, we learn how to both build and deploy exploits. These exploits are designed to steal data or forcibly change the behavior of an application.
This part of the book builds on the knowledge from Part I, “Recon.” Using our previously acquired reconnaissance skills in conjunction with newly acquired hacking skills, we will begin taking over and attacking demo web applications.
Part II is organized on an exploit-by-exploit basis. Each chapter explains in detail a different type of exploit.
These chapters start with an explanation of the exploit itself so you can understand how it works mechanically. Then we discuss how to search for vulnerabilities where this exploit can be applied. Finally, we craft a payload specific to the demo application we are exploiting. We then deploy the payload, and observe the results.
Vulnerabilities considered in depth
Cross-Site Scripting (XSS), one of the first exploits we dig into, is a type of attack that works against a wide array of web applications, but can be applied to other applications as well (e.g., mobile apps, flash/ActionScript games, etc.). This particular attack involves writing some malicious code on your own machine, then taking advantage of poor filtration mechanisms in an app that will allow your script to execute on another user’s machine.
When we discuss an exploit like an XSS attack, we will start with a vulnerable app. This demo app will be straightforward and to the point, ideally just a few paragraphs of code. From this foundation, we will write a block of code to be injected as a payload into the demo app, which will then take advantage of a hypothetical user on the other side.
Sounds simple doesn’t it? And it should be. Without any defenses, most software systems are easy to break into. As a result, with an exploit like XSS where there are many defenses, we will progressively dig deeper and deeper into the specifics of writing and deploying an attack.
We will initially attempt to break down routine defenses and eventually move on to bypassing more advanced defense mechanisms. Remember, just because someone built a wall to defend their codebase doesn’t mean you can’t go over it or underneath it. This is where we will get to use some creativity and find some unique and interesting solutions.
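As a minimal sketch of the filtration problem described above, compare naive HTML interpolation with output encoding. The template and payload here are invented for illustration and are not the book's demo application:

```python
import html

def render_comment_unsafe(comment):
    # Vulnerable: user input is interpolated straight into the markup,
    # so a <script> payload will execute in the victim's browser
    return f"<div class='comment'>{comment}</div>"

def render_comment_safe(comment):
    # Defense: encode HTML metacharacters so the browser treats
    # the input as inert text rather than markup
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>alert('xss')</script>"
print(render_comment_unsafe(payload))  # script tag survives intact
print(render_comment_safe(payload))    # rendered as &lt;script&gt;... text
```

Real defenses layer on top of this: context-aware encoding, Content Security Policy headers, and framework templating that escapes by default.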
Part II is important because understanding the mindset of a hacker is often vital for architecting secure codebases. It is exceptionally important for any reader interested in hacking, penetration testing, or bug bounty hunting.
The third and final part of this book, “Defense,” is about securing your own code against hackers. In Part III, we go back and look at every type of exploit we covered in Part II and consider it again from the completely opposite viewpoint. This time, we will not be concentrating on breaking into software systems, but instead on preventing break-ins or reducing the probability that a hacker could compromise our systems.
In Part III you will learn how to protect against specific exploits from Part II, in addition to learning general protections that will secure your codebase against a wide variety of attacks. These general protections range from “secure by default” engineering methodologies, to secure coding best practices that can be enforced easily by an engineering team using tests and other simple automated tooling (such as a linter).
Beyond learning how to write more secure code, you will also learn a number of increasingly valuable tricks for catching hackers in the act and improving your organization’s attitude toward software security.
Most chapters in Part III are structured somewhat like the hacking chapters in Part II. We begin with an overview of the technology and skills required as we begin preparing a defense against a specific type of attack.
Initially we will prepare a basic-level defense, which should help mitigate attacks but may not always fend off the most persistent hackers. Finally, we will improve our defenses to the point where most, if not all, hacking attempts will be stopped.
At this point, the structure of Part III begins to differ from that of Part II as we discuss the trade-offs that result from improving application security. Generally speaking, every measure that improves security carries some type of trade-off outside of security. It may not be your place to decide what level of risk is acceptable for your product, but you should be aware of the trade-offs being made.
Often, these trade-offs come in the form of application performance. The more effort you spend reading and sanitizing data, the more operations are performed outside of your application's standard functionality. Hence a secure feature typically requires more computing resources than an insecure one.
More operations also mean more code, which means more maintenance, tests, and engineering time. Security often adds further development overhead in the form of logging and monitoring as well.
Finally, some security precautions will come at the cost of reduced usability.
A very simple example of this process of weighing security benefits against their costs, in terms of usability and performance, is a login form. If an error message shown on a failed login attempt reveals that the username is invalid, it becomes significantly easier for a hacker to brute force username/password combinations. This is because the hacker no longer has to find a list of active usernames elsewhere: the application itself confirms whether an account exists. The hacker simply needs to brute force a few valid usernames, which can be confirmed and logged for later break-in attempts.
Next, the hacker only needs to brute force passwords rather than username/password combinations, which dramatically reduces the mathematical complexity of the attack and requires far less time and fewer resources.
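The reduction in complexity is easy to quantify. With illustrative numbers (assumptions for this sketch, not figures from the text), confirming usernames first shrinks the per-account search space by a factor equal to the size of the username list:

```python
# Illustrative candidate-list sizes (assumptions, not from the text).
usernames = 10_000       # candidate usernames an attacker might try
passwords = 1_000_000    # candidate passwords per account

# Unknown username AND password: every pairing must be tried.
combined_search = usernames * passwords

# Username already confirmed via the leaky error message:
# only the password dimension remains.
password_only = passwords

print(combined_search // password_only)  # 10000x fewer guesses per account
```

The exact numbers are invented, but the ratio always equals the number of candidate usernames eliminated from the search, which is why username enumeration is such a valuable first step for an attacker.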
Furthermore, if the application uses an email and password scheme for login rather than a username and password scheme, then we have another problem. A hacker can use this login form to find valid email addresses that can be sold for marketing or spam purposes. Even if precautions are taken to prevent brute forcing, carefully crafted inputs (e.g., email@example.com, firstname.lastname@example.org) can allow the hacker to reverse engineer the schema used for company email accounts and pinpoint the valid accounts of executives for sales purposes, or of individuals with privileged access for phishing.
As a result, it is often considered best practice to provide more generic error messages to the user. Of course, this change conflicts with the user experience because more specific error messages are definitely ideal for the usability of your application.
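This best practice can be sketched in a few lines. The in-memory user store and messages below are hypothetical (and a real system would compare salted password hashes, never plaintext); the point is the contrast between a leaky, user-friendly handler and a generic one.

```python
# Hypothetical in-memory user store, for illustration only.
# Real systems store and compare salted password hashes.
USERS = {"alice@example.com": "correct-horse"}

def login_specific(email: str, password: str) -> str:
    # More usable, but leaky: the first branch confirms which
    # accounts exist, enabling enumeration and cheaper brute forcing.
    if email not in USERS:
        return "No account exists for that email address."
    if USERS[email] != password:
        return "Incorrect password."
    return "Welcome!"

def login_generic(email: str, password: str) -> str:
    # The best-practice trade-off: one message for both failure
    # modes, so a failed attempt reveals nothing about valid accounts.
    if USERS.get(email) != password:
        return "Invalid email or password."
    return "Welcome!"

print(login_specific("bob@example.com", "guess"))    # leaks account absence
print(login_generic("bob@example.com", "guess"))     # reveals nothing
print(login_generic("alice@example.com", "guess"))   # identical message
```

With the generic handler, an unknown email and a wrong password produce the exact same response, which is precisely what frustrates enumeration, and, as the text notes, also what frustrates legitimate users who mistyped their address.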
This is a great example of a trade-off that can be made for improved application security, but at the cost of reduced usability. This should give you an idea of the type of trade-offs that are discussed in Part III of this book.
This part of the book is extremely important for any security engineer who wants to beef up their skills, or any software engineer looking to transition into a security engineering role. The information presented here will help in architecting and writing more secure applications.
As in Part II, understanding how an application’s security can be improved is a valuable asset for any type of hacker. This is because while routine defenses can often be easily bypassed, more complex defenses require deeper understanding and knowledge to bypass. This is further evidence as to why I suggest reading the book from start to finish.
Although some parts of this book may give you more valuable learning than others, depending on your goals, I doubt any of it will be wasted. Cross-training of this sort is particularly valuable, as each part of the book is just another perspective on the same puzzle.
“In 2014, data breaches touched individuals on a scale not seen before, in terms of both the amount and the sensitivity of the data that was stolen. These hacks served as a wake-up call to the world about the reality of living a digitally dependent way of life—both for individuals and for corporate data masters.”
“Most news coverage of data breaches focused on losses suffered by corporations and government agencies in North America—not because these systems were especially vulnerable, but because laws required public disclosure. High-profile attacks affected millions of accounts with companies including Target (in late 2013), JPMorgan Chase, and eBay. Midway through the year,” it was revealed that the Obama administration’s United States Office of Personnel Management (OPM) had been hacked via outsourced contractors connected to the Chinese Communist government, and “that highly personal (and sensitive) information belonging to 18 million former, current, and prospective federal and military employees had been stolen. Meanwhile, information associated with at least half a billion user accounts at Yahoo! was being hacked, although this information wouldn’t come out until 2016.”
Data from organizations outside the US was no more immune. The European Central Bank, HSBC Turkey, and others were hit. These hacks represented millions of victims across a spectrum of industries, such as banking, government, entertainment, retail, and health. While some of the industry and government datasets ended up online, available to the highest bidder in the criminal underground, many other datasets did not, fueling speculation and public discourse about why, and about what could be done with such data.
The 2014 breaches also expanded the public’s understanding of the value of certain types of hacked data beyond the traditional categories of credit card numbers, names, and addresses. The November 24, 2014, hack of Sony Pictures, for example, didn’t just temporarily shut down the film studio: the hackers also exposed personal email exchanges, harmed creative intellectual property, and prompted threats against the studio’s freedom of expression, allegedly in retaliation for the studio’s decision to participate in the release of a Hollywood movie critical of a foreign government.
Perhaps most importantly, the 2014 breaches exposed the generally poor state of software security, best practices, and experts’ digital acumen across the world. The seams between the old world and a world of modern, networked technology were not as neatly stitched as many had assumed.
Since 2014, high-profile data breaches have affected billions of people worldwide.