With the rise of DevOps, low-cost cloud computing, and container technologies, the way Java developers approach development today has changed dramatically. This practical guide helps you take advantage of microservices, serverless, and cloud native technologies using the latest DevOps techniques to simplify your build process and create hyperproductive teams.
Stephen Chin, Melissa McKay, Ixchel Ruiz, and Baruch Sadogursky help you evaluate an array of options. The list includes source control with Git, build declaration with Maven and Gradle, CI/CD with CircleCI, package management with Artifactory, containerization with Docker and Kubernetes, and much more. Whether you’re building applications with Jakarta EE, Spring Boot, Dropwizard, MicroProfile, Micronaut, or Quarkus, this comprehensive guide has you covered.
Explore software lifecycle best practices
Use DevSecOps methodologies to facilitate software development and delivery
Understand the business value of DevSecOps best practices
Manage and secure software dependencies
Develop and deploy applications using containers and cloud native technologies
Manage and administrate source control repositories and development processes
Use automation to set up and administer build pipelines
Identify common deployment patterns and antipatterns
Maintain and monitor software after deployment
About the Author
Stephen Chin is Head of Developer Relations at JFrog and author of The Definitive Guide to Modern Client Development, Raspberry Pi with Java, and Pro JavaFX Platform. He has keynoted numerous Java conferences around the world including Devoxx, JNation, JavaOne, Joker, and Open Source India. Stephen is an avid motorcyclist who has done evangelism tours in Europe, Japan, and Brazil, interviewing hackers in their natural habitat. When he is not traveling, he enjoys teaching kids how to do embedded and robot programming together with his teenage daughter. You can follow his hacking adventures at: http://steveonjava.com/.
Melissa McKay is currently a Developer Advocate with the JFrog Developer Relations team. She has been active in the software industry for 20 years, and her background and experience span a slew of technologies and tools used in the development and operation of enterprise products and services. Melissa is a mom, software developer, Java geek, huge promoter of Java UNconferences, and is always on the lookout for ways to grow, learn, and improve development processes. She is active in the developer community, has spoken at CodeOne and Java Dev Day Mexico, and assists with organizing the JCrete and JAlba Unconferences as well as Devoxx4Kids events.
Ixchel Ruiz has developed software applications and tools since 2000. Her research interests include Java, dynamic languages, client-side technologies, and testing. She is a Java Champion, Groundbreaker Ambassador, Hackergarten enthusiast, open source advocate, JUG leader, public speaker, and mentor.
Baruch Sadogursky (a.k.a. JBaruch) is the Chief Sticker Officer (and Head of DevOps Advocacy) at JFrog. His passion is speaking about technology. Well, speaking in general, but doing it about technology makes him look smart, and 19 years of hi-tech experience sure helps. When he’s not on stage (or on a plane to get there), he learns about technology, people, and how they work, or more precisely, don’t work together.
He is a co-author of the Liquid Software book, a CNCF ambassador, and a passionate conference speaker on DevOps, DevSecOps, digital transformation, containers and cloud-native, artifact management, and other topics. He is a regular at the industry’s most prestigious events, including DockerCon, Devoxx, DevOps Days, OSCON, QCon, JavaOne, and many others. You can see some of his talks at jfrog.com/shownotes.
See: Continuous Delivery for Java Apps: Build a CD Pipeline Step by Step Using Kubernetes, Docker, Vagrant, Jenkins, Spring, Maven and Artifactory. Publisher: Leanpub (December 14, 2017).
This book will guide you through the implementation of real-world Continuous Delivery using top-notch technologies. Instead of finishing this book thinking “I know what Continuous Delivery is, but I have no idea how to implement it,” you will end up with your machine set up with a Kubernetes cluster running Jenkins Pipelines in a distributed and scalable fashion (each Pipeline runs on a new Jenkins slave, dynamically allocated as a Kubernetes pod) to test (unit, integration, acceptance, performance, and smoke tests), build (with Maven), release (to Artifactory), distribute (to Docker Hub), and deploy (on Kubernetes) a Spring Boot app to testing, staging, and production environments, implementing the Canary Release deployment pattern.
TABLE OF CONTENTS:
INTRODUCTION
Agile; Scrum; Scrum and Continuous Integration; Deployed vs Released; Scrum and Continuous Delivery; XP and Continuous Delivery; Automated Tests; Continuous Integration; Feature Branch; Continuous Delivery; Continuous Delivery Pipeline; Continuous Delivery vs Continuous Deployment; Canary Release; A/B Tests; Feature Flags
NOTEPAD APP: AUTOMATED TESTS, MAVEN AND FLYWAY
Pre-Requisites; The Notepad Application; Automated Tests; Unit Tests; Integration Tests; Acceptance Tests; Page Object; Distributed Acceptance Tests with Selenium-Grid; Smoke Tests; Performance Tests with Gatling.io; Apache Maven; Maven Snapshot vs Release; The Default Lifecycle and its Phases; Maven Repositories; Repository Manager (Artifactory); Maven Plugins: Surefire and Failsafe; Maven Profile; Running Unit Tests; Running Integration Tests; Running Acceptance Tests; Running Smoke Tests; Running Performance Tests; Publish Artifacts to Artifactory with Maven; Publish a Snapshot to Artifactory; Publish a Release to Artifactory; The release:prepare Goal; The release:perform Goal; Flyway
DOCKER
Introduction to Docker; Difference Between Container and Image; Docker Hub; Create your Account; Official Docker Repositories; Image Tags; Non-Official Docker Images; Create a Repository, an Image and Push it to Docker Hub; Running Containers on Docker; Running Containers as Daemons; Container Clean Up; Naming Containers; Exposing Ports; Persistent Data with Volumes; Environment Variables; Docker Networking; Create a Bridge Network; Container Static IP Address; Linking Containers; Most Used Docker Commands; Images; Containers; Misc; Building Docker Images: Dockerfile
JENKINS: PIPELINE AS CODE AND CHATOPS
Jenkins Overview; Jenkins Concepts; Job (or Project); Build; Artifact; Workspace; Executor; Plugin; Node, Master, and Agent (or Slave); ChatOps; Create a Slack Workspace; Integrate Slack with Jenkins; Slack Notification Plugin; Use Hubot to Interact with Jenkins; Jenkins Pipeline; Declarative Pipeline vs Scripted Pipeline; Scripted Pipeline; Using Docker with Jenkins Pipelines; Running Docker from Within the Jenkins Container; Scaling Jenkins with Slaves
KUBERNETES
Why Kubernetes?; Set up a Kubernetes Cluster using Vagrant; Hands-on Introduction to Kubernetes; Kubernetes Concepts; Namespaces; Pods; Labels; Replica Sets; Services; Service Discovery using DNS; Service Discovery using Namespaces; Volumes; Handling External Configurations; Config Maps; Changing Logback Log Level at Runtime; Secrets; Using Secrets as Environment Variables; Using Secrets as Files from a Pod; Deployments; Readiness Probes; Liveness Probes; Canary Release; Kubernetes Architecture; Kubernetes Master Components; Etcd; API Server; Controller Manager; Scheduler; Kubernetes Node Components; Service Proxy; Kubelet; cAdvisor; Kubernetes Add-ons; Web UI (Dashboard); Monitoring Kubernetes with Heapster, InfluxDB and Grafana; Web UI Overview; DNS
“A DevOps toolchain is a set or combination of tools that aid in the delivery, development, and management of software applications throughout the systems development life cycle, as coordinated by an organization that uses DevOps practices.
“In software, a toolchain is the set of programming tools that is used to perform a complex software development task or to create a software product, which is typically another computer program or a set of related programs. In general, the tools forming a toolchain are executed consecutively so the output or resulting environment state of each tool becomes the input or starting environment for the next one, but the term is also used when referring to a set of related tools that are not necessarily executed consecutively.[3][4][5]
As DevOps is a set of practices that emphasizes the collaboration and communication of both software developers and other information technology (IT) professionals, while automating the process of software delivery and infrastructure changes, its implementation can include the definition of the series of tools used at various stages of the lifecycle; because DevOps is a cultural shift and collaboration between development and operations, there is no one product that can be considered a single DevOps tool. Instead a collection of tools, potentially from a variety of vendors, are used in one or more stages of the lifecycle.[6][7]” (WP)
Plan
Plan is composed of two things: “define” and “plan”.[8] This activity refers to the business value and application requirements. Specifically, “Plan” activities include:
Tools and vendors in this category often overlap with other categories. Because DevOps is about breaking down silos, this is reflected in the activities and product solutions.
Verify
Verify is directly associated with ensuring the quality of the software release: activities designed to ensure that code quality is maintained and that only the highest-quality code is deployed to production.[8] The main activities in this area are:
Package
Packaging refers to the activities involved once the release is ready for deployment, often also referred to as staging or preproduction (“preprod”).[8] This often includes tasks and activities such as:
Approval/preapprovals
Package configuration
Triggered releases
Release staging and holding
Release
Release-related activities include scheduling, orchestrating, provisioning, and deploying software into production and other targeted environments.[9] The specific Release activities include:
Configure
Configure activities fall under the operations side of DevOps. Once software is deployed, there may be additional IT infrastructure provisioning and configuration activities required.[8] Specific activities include:
Infrastructure storage, database and network provisioning and configuring
Monitor
Monitoring is an important link in a DevOps toolchain. It allows IT organizations to identify issues with specific releases and to understand the impact on end users.[8] A summary of Monitor-related activities:
Information from monitoring activities often impacts Plan activities required for changes and for new release cycles.
Version Control
Version Control is an important link in a DevOps toolchain and a component of software configuration management. Version Control is the management of changes to documents, computer programs, large web sites, and other collections of information.[8] A summary of Version Control-related activities:
Non-linear development
Distributed development
Compatibility with existent systems and protocols
Toolkit-based design
Information from Version Control often supports Release activities required for changes and for new release cycles.
Gartner Market Trends: DevOps – Not a Market, but Tool-Centric Philosophy That Supports a Continuous Delivery Value Chain (Report). Gartner. 18 February 2015.
Avoid Failure by Developing a Toolchain that Enables DevOps (Report). Gartner. 16 March 2016.
Best Practices in Change, Configuration and Release Management (Report). Gartner. 14 July 2010.
Roger S. Pressman (2009). Software Engineering: A Practitioner’s Approach (7th International ed.). New York: McGraw-Hill.
Maven addresses two aspects of building software: how software is built, and its dependencies. Unlike earlier tools like Apache Ant, it uses conventions for the build procedure, and only exceptions need to be written down. An XML file describes the software project being built, its dependencies on other external modules and components, the build order, directories, and required plug-ins. It comes with pre-defined targets for performing certain well-defined tasks such as compilation of code and its packaging. Maven dynamically downloads Java libraries and Maven plug-ins from one or more repositories such as the Maven 2 Central Repository, and stores them in a local cache.[2] This local cache of downloaded artifacts can also be updated with artifacts created by local projects. Public repositories can also be updated.
Maven is built using a plugin-based architecture that allows it to make use of any application controllable through standard input. A C/C++ native plugin is maintained for Maven 2.[3]
Alternative build tools such as Gradle and sbt do not rely on XML, but keep the key concepts Maven introduced. With Apache Ivy, a dedicated dependency manager was developed as well that also supports Maven repositories.[4]
The number of artifacts on Maven’s central repository has grown rapidly
Maven, created by Jason van Zyl, began as a sub-project of Apache Turbine in 2002. In 2003, it was voted on and accepted as a top-level Apache Software Foundation project. In July 2004, Maven reached its first critical milestone with release v1.0. Maven 2 was declared v2.0 in October 2005 after about six months in beta cycles. Maven 3.0 was released in October 2010, remaining mostly backward compatible with Maven 2.
Maven 3.0 information began trickling out in 2008. After eight alpha releases, the first beta version of Maven 3.0 was released in April 2010. Maven 3.0 has reworked the core Project Builder infrastructure resulting in the POM’s file-based representation being decoupled from its in-memory object representation. This has expanded the possibility for Maven 3.0 add-ons to leverage non-XML based project definition files. Languages suggested include Ruby (already in private prototype by Jason van Zyl), YAML, and Groovy.
Special attention was given to ensuring backward compatibility of Maven 3 to Maven 2. For most projects, upgrading to Maven 3 will not require any adjustments of their project structure. The first beta of Maven 3 saw the introduction of a parallel build feature which leverages a configurable number of cores on a multi-core machine and is especially suited for large multi-module projects.
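As a hedged illustration of the parallel build feature (the thread counts here are arbitrary), Maven 3 accepts a -T option on the command line:
mvn -T 4 clean install
mvn -T 1C clean install
The first form uses four threads; the second allocates one thread per available CPU core.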
A directory structure for a Java project auto-generated by Maven
Maven projects are configured using a Project Object Model, which is stored in a pom.xml file. An example file looks like:
<project>
  <!-- model version is always 4.0.0 for Maven 2.x POMs -->
  <modelVersion>4.0.0</modelVersion>

  <!-- project coordinates, i.e. a group of values which uniquely identify this project -->
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0</version>

  <!-- library dependencies -->
  <dependencies>
    <dependency>
      <!-- coordinates of the required library -->
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <!-- this dependency is only used for running and compiling tests -->
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
This POM only defines a unique identifier for the project (coordinates) and its dependency on the JUnit framework. However, that is already enough for building the project and running the unit tests associated with the project. Maven accomplishes this by embracing the idea of Convention over Configuration, that is, Maven provides default values for the project’s configuration.
The directory structure of a normal idiomatic Maven project has the following directory entries:
project home: Contains the pom.xml and all subdirectories.
src/main/java: Contains the deliverable Java source code for the project.
src/main/resources: Contains the deliverable resources for the project, such as property files.
src/test/java: Contains the testing Java source code (JUnit or TestNG test cases, for example) for the project.
src/test/resources: Contains resources necessary for testing.
The command mvn package will compile all the Java files, run any tests, and package the deliverable code and resources into target/my-app-1.0.jar (assuming the artifactId is my-app and the version is 1.0.)
Using Maven, the user provides only configuration for the project, while the configurable plug-ins do the actual work of compiling the project, cleaning target directories, running unit tests, generating API documentation and so on. In general, users should not have to write plugins themselves. Contrast this with Ant and make, in which one writes imperative procedures for doing the aforementioned tasks.
A Project Object Model (POM) provides all the configuration for a single project. General configuration covers the project’s name, its owner and its dependencies on other projects. One can also configure individual phases of the build process, which are implemented as plugins. For example, one can configure the compiler-plugin to use Java version 1.5 for compilation, or specify packaging the project even if some unit tests fail.
Larger projects should be divided into several modules, or sub-projects, each with its own POM. One can then write a root POM through which one can compile all the modules with a single command. POMs can also inherit configuration from other POMs. All POMs inherit from the Super POM[7] by default. The Super POM provides default configuration, such as default source directories, default plugins, and so on.
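As a rough sketch (the module names are hypothetical), a root POM uses pom packaging and lists its sub-projects in a <modules> section:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app-parent</artifactId>
  <version>1.0</version>
  <!-- "pom" packaging marks this as an aggregator/parent rather than a JAR -->
  <packaging>pom</packaging>
  <modules>
    <!-- each entry is a subdirectory containing its own pom.xml -->
    <module>my-app-core</module>
    <module>my-app-web</module>
  </modules>
</project>
Running a command such as mvn package from this directory builds every listed module in one pass.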
Most of Maven’s functionality is in plug-ins. A plugin provides a set of goals that can be executed using the command mvn [plugin-name]:[goal-name]. For example, a Java project can be compiled with the compiler-plugin’s compile-goal[8] by running mvn compiler:compile.
There are Maven plugins for building, testing, source control management, running a web server, generating Eclipse project files, and much more.[9] Plugins are introduced and configured in a <plugins>-section of a pom.xml file. Some basic plugins are included in every project by default, and they have sensible default settings.
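For illustration, a hedged sketch of how the compiler plugin might be configured in a POM’s <plugins> section (the plugin version shown is an assumption; a current release should be used in practice):
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <!-- assumed version -->
      <version>3.8.1</version>
      <configuration>
        <!-- compile for Java 8 sources and bytecode -->
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
  </plugins>
</build>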
However, it would be cumbersome if the archetypal build sequence of building, testing and packaging a software project required running each respective goal manually:
mvn compiler:compile
mvn surefire:test
mvn jar:jar
Maven’s lifecycle concept handles this issue.
Plugins are the primary way to extend Maven. Developing a Maven plugin can be done by extending the org.apache.maven.plugin.AbstractMojo class. Example code and explanation for a Maven plugin to create a cloud-based virtual machine running an application server is given in the article Automate development and management of cloud virtual machines.[10]
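A minimal sketch of such a plugin class, with hypothetical class and goal names (the @goal javadoc tag is the classic way to declare the goal name; newer plugins use annotations instead):
import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * Prints a greeting during the build.
 *
 * @goal greet
 */
public class GreetingMojo extends AbstractMojo {
    public void execute() throws MojoExecutionException {
        // getLog() is inherited from AbstractMojo and writes to the build output
        getLog().info("Hello from a custom Maven plugin");
    }
}
Once built and installed, the goal would be invoked following the mvn [plugin-name]:[goal-name] pattern described above.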
The build lifecycle is a list of named phases that can be used to give order to goal execution. One of Maven’s standard lifecycles is the default lifecycle, which includes the following phases, in this order:[11]
validate
generate-sources
process-sources
generate-resources
process-resources
compile
process-test-sources
process-test-resources
test-compile
test
package
install
deploy
Goals provided by plugins can be associated with different phases of the lifecycle. For example, by default, the goal “compiler:compile” is associated with the “compile” phase, while the goal “surefire:test” is associated with the “test” phase. When the mvn test command is executed, Maven runs all goals associated with each of the phases up to and including the “test” phase. In such a case, Maven runs the “resources:resources” goal associated with the “process-resources” phase, then “compiler:compile”, and so on until it finally runs the “surefire:test” goal.
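A goal can also be bound to a phase explicitly in the POM. As a hedged illustration, the maven-source-plugin’s jar-no-fork goal can be attached to the package phase, so that source JARs are produced whenever mvn package runs:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-source-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-sources</id>
      <!-- run this goal whenever the package phase executes -->
      <phase>package</phase>
      <goals>
        <goal>jar-no-fork</goal>
      </goals>
    </execution>
  </executions>
</plugin>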
Maven also has standard phases for cleaning the project and for generating a project site. If cleaning were part of the default lifecycle, the project would be cleaned every time it was built. This is clearly undesirable, so cleaning has been given its own lifecycle.
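For example, the two lifecycles are routinely combined on the command line:
mvn clean install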
Standard lifecycles give users new to a project the ability to accurately build, test and install every Maven project by issuing the single command mvn install. By default, Maven packages the POM file in generated JAR and WAR files. Tools like diet4j[12] can use this information to recursively resolve and run Maven modules at run-time without requiring an “uber”-jar that contains all project code.
A central feature in Maven is dependency management. Maven’s dependency-handling mechanism is organized around a coordinate system identifying individual artifacts such as software libraries or modules. The POM example above references the JUnit coordinates as a direct dependency of the project. A project that needs, say, the Hibernate library simply has to declare Hibernate’s project coordinates in its POM. Maven will automatically download the dependency and the dependencies that Hibernate itself needs (called transitive dependencies) and store them in the user’s local repository. Maven 2 Central Repository[2] is used by default to search for libraries, but one can configure the repositories to be used (e.g., company-private repositories) within the POM.
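As a sketch, the declaration looks just like the JUnit example above; the version shown here is an assumption, and any published release would do:
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>5.4.32.Final</version>
</dependency>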
The fundamental difference between Maven and Ant is that Maven’s design regards all projects as having a certain structure and a set of supported task work-flows (e.g., getting resources from source control, compiling the project, unit testing, etc.). While most software projects in effect support these operations and actually do have a well-defined structure, Maven requires that this structure and the operation implementation details be defined in the POM file. Thus, Maven relies on a convention on how to define projects and on the list of work-flows that are generally supported in all projects.[13]
There are search engines such as The Central Repository Search Engine[14] which can be used to find out coordinates for different open-source libraries and frameworks.
Projects developed on a single machine can depend on each other through the local repository. The local repository is a simple folder structure that acts both as a cache for downloaded dependencies and as a centralized storage place for locally built artifacts. The Maven command mvn install builds a project and places its binaries in the local repository. Then other projects can utilize this project by specifying its coordinates in their POMs.
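Reusing the coordinates from the example POM above: after mvn install has been run in that project, a second project on the same machine can declare it like any other dependency, and Maven will resolve it from the local repository:
<dependency>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0</version>
</dependency>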
Add-ons to several popular integrated development environments targeting the Java programming language exist to provide integration of Maven with the IDE’s build mechanism and source editing tools, allowing Maven to compile projects from within the IDE, and also to set the classpath for code completion, highlighting compiler errors, etc. Examples of popular IDEs supporting development with Maven include:
These add-ons also provide the ability to edit the POM or use the POM to determine a project’s complete set of dependencies directly within the IDE.
Some built-in features of IDEs are forfeited when the IDE no longer performs compilation. For example, Eclipse’s JDT has the ability to recompile a single Java source file after it has been edited. Many IDEs work with a flat set of projects instead of the hierarchy of folders preferred by Maven. This complicates the use of SCM systems in IDEs when using Maven.[15][16][17]
A package manager or package-management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs for a computer’s operating system in a consistent manner.[1]
A package manager deals with packages, distributions of software and data in archive files. Packages contain metadata, such as the software’s name, description of its purpose, version number, vendor, checksum (preferably a cryptographic hash function), and a list of dependencies necessary for the software to run properly. Upon installation, metadata is stored in a local package database. Package managers typically maintain a database of software dependencies and version information to prevent software mismatches and missing prerequisites. They work closely with software repositories, binary repository managers, and app stores.
Package managers are designed to eliminate the need for manual installs and updates. This can be particularly useful for large enterprises whose operating systems typically consist of hundreds or even tens of thousands of distinct software packages.[2]
Functions
Illustration of a package manager being used to download new software. Manual actions can include accepting a license agreement or selecting some package-specific configuration options.
A software package is an archive file containing a computer program as well as necessary metadata for its deployment. The computer program can be in source code that has to be compiled and built first.[3] Package metadata include package description, package version, and dependencies (other packages that need to be installed beforehand).
Package managers are charged with the task of finding, installing, maintaining or uninstalling software packages upon the user’s command. Typical functions of a package management system include:
Grouping packages by function to reduce user confusion
Managing dependencies to ensure a package is installed with all packages it requires, thus avoiding “dependency hell”
Challenges with shared libraries
Computer systems that rely on dynamic library linking, instead of static library linking, share executable libraries of machine instructions across packages and applications. In these systems, complex relationships between different packages requiring different versions of libraries result in a challenge colloquially known as “dependency hell”. On Microsoft Windows systems, this is also called “DLL hell” when working with dynamically linked libraries. Good package management is vital on these systems.[4] The Framework system from OPENSTEP was an attempt at solving this issue, by allowing multiple versions of libraries to be installed simultaneously, and for software packages to specify which version they were linked against.
Front-ends for locally compiled packages
System administrators may install and maintain software using tools other than package management software. For example, a local administrator may download unpackaged source code, compile it, and install it. This may cause the state of the local system to fall out of synchronization with the state of the package manager’s database. The local administrator will be required to take additional measures, such as manually managing some dependencies or integrating the changes into the package manager.
There are tools available to ensure that locally compiled packages are integrated with the package manager’s database. For distributions based on .deb and .rpm files, as well as Slackware Linux, there is CheckInstall; for recipe-based systems such as Gentoo Linux and hybrid systems such as Arch Linux, it is possible to write a recipe first, which then ensures that the package fits into the local package database.
Maintenance of configuration
Particularly troublesome with software upgrades are upgrades of configuration files. Since package managers, at least on Unix systems, originated as extensions of file archiving utilities, they can usually only either overwrite or retain configuration files, rather than applying rules to them. There are exceptions to this that usually apply to kernel configuration (which, if broken, will render the computer unusable after a restart). Problems can be caused if the format of configuration files changes; for instance, if the old configuration file does not explicitly disable new options that should be disabled. Some package managers, such as Debian‘s dpkg, allow configuration during installation. In other situations, it is desirable to install packages with the default configuration and then overwrite this configuration, for instance, in headless installations to a large number of computers. This kind of pre-configured installation is also supported by dpkg.
Repositories
To give users more control over the kinds of software that they are allowing to be installed on their system (and sometimes due to legal or convenience reasons on the distributors’ side), software is often downloaded from a number of software repositories.[5]
Upgrade suppression
When a user interacts with the package management software to bring about an upgrade, it is customary to present the user with the list of actions to be executed (usually the list of packages to be upgraded, and possibly giving the old and new version numbers), and allow the user to either accept the upgrade in bulk, or select individual packages for upgrades. Many package managers can be configured to never upgrade certain packages, or to upgrade them only when critical vulnerabilities or instabilities are found in the previous version, as defined by the packager of the software. This process is sometimes called version pinning.
For instance:
yum supports this with the syntax exclude=openoffice*[6]
pacman with IgnorePkg=openoffice[7] (to suppress upgrading openoffice in both cases)
dpkg and dselect support this partially through the hold flag in package selections
APT extends the hold flag through the complex “pinning” mechanism[8] (Users can also blacklist a package[9])
portage supports this through the package.mask configuration file
Cascading package removal
Some of the more advanced package management features offer “cascading package removal”,[7] in which all packages that depend on the target package, and all packages that only the target package depends on, are also removed.
Comparison of commands
Although the commands are specific for every particular package manager, they are to a large extent translatable, as most package managers offer similar functions.
delete orphans + config:
  zypper rm -u
  pacman -Rsn $(pacman -Qdtq)
  apt autoremove
  dnf erase PKG
  emerge --depclean
show orphans:
  zypper pa --orphaned --unneeded
  pacman -Qdt
  package-cleanup --quiet --leaves --exclude-bin
  emerge -caD or emerge --depclean --pretend
update all:
  zypper up
  pacman -Syu
  apt upgrade
  dnf update
  emerge --update --deep --with-bdeps=y @world (ask/pretend variants: emerge -avtuDN --with-bdeps=y @world or emerge --update --pretend @world)
The Arch Linux Pacman/Rosetta wiki offers an extensive overview.[11]
Prevalence
Package managers like dpkg have existed since as early as 1994.[12]
Linux distributions oriented to binary packages rely heavily on package management systems as their primary means of managing and maintaining software. Mobile operating systems such as Android (Linux-based), iOS (Unix-like), and Windows Phone rely almost exclusively on their respective vendors’ app stores and thus use their own dedicated package management systems.
A package manager is often called an “install manager”, which can lead to confusion between package managers and installers. The differences include:
Installation information: with a package manager, metadata is recorded in the local package database; with an installer, it is entirely at the discretion of the installer. It could be a file within the app’s folder, or among the operating system’s files and folders. At best, installers may register themselves with an uninstallers list without exposing installation information.
Package format: with installers, there could be as many formats as the number of apps.
Package format compatibility: a package can be consumed as long as the package manager supports it; either newer versions of the package manager keep supporting it or the user does not upgrade the package manager. The installer is always compatible with its archive format, if it uses any. However, installers, like all computer programs, may be affected by software rot.
Comparison with build automation utility
Most software configuration management systems treat building software and deploying software as separate, independent steps. A build automation utility typically takes human-readable source code files already on a computer, and automates the process of converting them into a binary executable package on the same computer. Later, a package manager, typically running on some other computer, downloads those pre-built binary executable packages over the internet and installs them.
However, both kinds of tools have many commonalities:
For example, the dependency-graph topological sorting used in a package manager to handle dependencies between binary components is also used in a build manager to handle the dependency between source components (see the sketch after this list).
For example, many makefiles support not only building executables, but also installing them with make install.
For example, every package manager for a source-based distribution – Portage, Sorcery, Homebrew, etc. – supports converting human-readable source code to binary executables and installing it.
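As a sketch of that shared mechanism (component names are hypothetical), a topological sort such as Kahn’s algorithm orders components so that dependencies always come before the things that need them:
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DependencyOrder {

    /** deps maps a component to the components it depends on. */
    public static List<String> order(Map<String, List<String>> deps) {
        Map<String, Integer> unresolved = new HashMap<>();       // remaining dependency count per component
        Map<String, List<String>> dependents = new HashMap<>();  // reverse edges: dependency -> dependents
        for (String pkg : deps.keySet()) unresolved.putIfAbsent(pkg, 0);
        for (Map.Entry<String, List<String>> e : deps.entrySet()) {
            for (String dep : e.getValue()) {
                unresolved.putIfAbsent(dep, 0);
                unresolved.merge(e.getKey(), 1, Integer::sum);
                dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : unresolved.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> sorted = new ArrayList<>();
        while (!ready.isEmpty()) {
            String pkg = ready.remove();
            sorted.add(pkg);
            // releasing pkg may make its dependents installable or buildable
            for (String d : dependents.getOrDefault(pkg, List.of()))
                if (unresolved.merge(d, -1, Integer::sum) == 0) ready.add(d);
        }
        if (sorted.size() != unresolved.size())
            throw new IllegalStateException("dependency cycle detected");
        return sorted;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
                "app", List.of("lib-a", "lib-b"),
                "lib-a", List.of("lib-b"),
                "lib-b", List.of());
        System.out.println(order(deps)); // e.g. [lib-b, lib-a, app]
    }
}
A package manager walks this order when installing, while a build manager walks it when compiling modules; only the node type (binary package vs. source component) differs.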
A few tools, such as Maak and A-A-P, are designed to handle both building and deployment, and can be used as either a build automation utility or as a package manager or both.[13]
Common package managers and formats
Universal package manager
Also known as binary repository manager, it is a software tool designed to optimize the download and storage of binary files, artifacts and packages used and produced in the software development process.[14] These package managers aim to standardize the way enterprises treat all package types. They give users the ability to apply security and compliance metrics across all artifact types. Universal package managers have been referred to as being at the center of a DevOps toolchain.[15]
Each package manager relies on the format and metadata of the packages it can manage. That is, package managers need groups of files to be bundled for the specific package manager along with appropriate metadata, such as dependencies. Often, a core set of utilities manages the basic installation from these packages and multiple package managers use these utilities to provide additional functionality.
For example, yum relies on rpm as a backend. Yum extends the functionality of the backend by adding features such as simple configuration for maintaining a network of systems. As another example, the Synaptic Package Manager provides a graphical user interface by using the Advanced Packaging Tool (apt) library, which, in turn, relies on dpkg for core functionality.
By the nature of free and open source software, packages under similar and compatible licenses are available for use on a number of operating systems. These packages can be combined and distributed using configurable and internally complex packaging systems to handle many permutations of software and manage version-specific dependencies and conflicts. Some packaging systems of free and open source software are also themselves released as free and open source software. One typical difference between package management in proprietary operating systems, such as Mac OS X and Windows, and those in free and open source software, such as Linux, is that free and open source software systems permit third-party packages to also be installed and upgraded through the same mechanism, whereas the package managers of Mac OS X and Windows will only upgrade software provided by Apple and Microsoft, respectively (with the exception of some third party drivers in Windows). The ability to continuously upgrade third party software is typically added by adding the URL of the corresponding repository to the package management’s configuration file.
Besides the system-level package managers, there are some add-on package managers for operating systems with limited capabilities and for programming languages in which developers need the latest libraries.
In contrast to system-level package managers, application-level package managers focus on a small part of the software system. They typically reside within a directory tree that is not maintained by the system-level package manager, such as c:\cygwin or /usr/local/fink. However, this might not be the case for the package managers that deal with programming libraries, leading to a possible conflict as both package managers may claim to “own” a file and might break upgrades.
Impact
Ian Murdock had commented that package management is “the single biggest advancement Linux has brought to the industry”, that it blurs the boundaries between operating system and applications, and that it makes it “easier to push new innovations […] into the marketplace and […] evolve the OS”.[16]