Chrome OS User-Land Boot Design

Introduction

This document gives an overview of the design of user-land boot processes for Chrome OS based systems. It covers what happens after /sbin/init first starts execution, until all services in the system are ready.

Chrome OS Core uses Upstart for its /sbin/init package. Readers of this document should have some basic familiarity with Upstart concepts, such as the syntax of job configuration files, and Upstart events, such as starting and started. Readers should also have some understanding of Linux file system management concepts, like mounting and creating file systems.

If you want to go deeper and read the source code, you’ll need some understanding of shell programming. If you want to understand the initialization of an individual package, you’ll need to be familiar with that package’s basic operations. Neither of these topics is covered in this document.

Summary of Boot Flow

Phases of Boot

Boot est omnis divisa in partes tres. ("All of boot is divided into three parts.")

[Diagram: the Chrome OS Core boot flow, showing the three phases and the Upstart events that separate them]

The diagram above shows an outline of the Chrome OS Core boot flow. Boot divides into three sequential phases with four publicly defined Upstart events:

Upstart Event(s)        | Phase              | Description
startup                 | Basic Services     | Initialization of basic services needed by the rest of the system.
started boot-services   | System Application | Initialization of the system application and other services critical to the product's UX.
started system-services | System Services    | Initialization of all other services.
started failsafe        | Failsafe Service   | The failsafe job is guaranteed to start even if the system application fails.

Basic Services Startup

“Basic services” are the indispensable services required by the system application and the rest of the system. Most services cannot operate until these basic services are running normally.

At the end of basic services startup (that is, when Upstart emits started boot-services), the following services are guaranteed available:

Note that jobs that run during this phase must by their nature work in an environment where the services above may not be available. At the beginning of this phase, there is no guarantee of writable storage; jobs cannot log messages, write any files, or read files outside of the root filesystem. Additionally, device availability is not guaranteed in all cases; devices may not be initialized, and their drivers may not even be loaded.

System Application Startup

Chrome OS Core is predicated on the existence of a single, dedicated application that is central to system startup. For Chrome OS, that application is the Chrome browser. The Chrome OS sources are structured to allow a platform to choose to use a different system application.

The system application starts in parallel with various critical services. Typically, “critical” means that the system application expects the service to be present, or that the user would perceive the system as not functional without it. For example, in Chrome OS these services include:

The system application receives special treatment in the boot flow. The application is responsible for emitting the event which starts the boot-complete job, which allows system services startup to begin. The system application is responsible for this for two reasons:

For Chrome OS, boot-complete starts when the Chrome browser has displayed the initial login screen.
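Concretely, readiness is announced by emitting the Upstart event that boot-complete starts on. For Chrome OS, the effect is roughly equivalent to running the following (a sketch; in practice session_manager emits the event on Chrome's behalf):

initctl emit login-prompt-visible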

System Services Startup

This phase of boot includes anything not required in the earlier phases of boot. Generally, all jobs in this phase run in parallel. Because the system is considered already booted, this phase is not performance critical.

After the started system-services event, services can rely on the following:

By design, system services start only if the system application announces its successful initialization. For most services, this is sufficient: without a system application, the system is in a failed state and many services have no useful purpose. However, some services must start eventually, even if the system as a whole has failed. To support this requirement, a service can depend on the started failsafe event. After started boot-services, the failsafe-delay job starts a 30-second timer. The started failsafe event occurs at the earlier of started system-services or the expiration of the timer.
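The mechanism can be pictured as a pair of jobs roughly like the following (a simplified sketch, not the actual chromeos-init source):

# failsafe-delay: a timer that begins counting once boot-services is up
start on started boot-services
task
exec sleep 30

# failsafe: starts at whichever comes first, system-services or the
# expiration of the timer above
start on started system-services or stopped failsafe-delay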

Failsafe jobs are commonly used in test and development images for services necessary to debug or recover from failures. Some examples:

Filesystem Initialization and Layout

Creating the Stateful File System (the chromeos_startup script)

At startup, there is no writable file system storage available. During basic services startup, the script chromeos_startup is responsible for mounting file systems, and creating the basic directory hierarchy. The script is invoked from the startup job, which starts with the startup event emitted by /sbin/init at the start of the userland boot process.
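In outline, the work looks something like the following shell fragment (illustrative only, not the real chromeos_startup code; STATE_DEV and the directory names are placeholders):

# Mount the stateful file system (STATE_DEV stands in for the actual
# stateful partition device).
mount -n -t ext4 -o nodev,noexec,nosuid "${STATE_DEV}" /mnt/stateful_partition

# Re-create any directories and mount points that may be missing,
# for example after a stateful wipe.
mkdir -p -m 0755 /mnt/stateful_partition/home
mkdir -p -m 0755 /mnt/stateful_partition/var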

The script creates missing directories and mount points as necessary. This is done to support re-creating the filesystem after a stateful wipe, and also to protect the filesystem from inadvertent damage. Below is a list of the principal directories, in the order they're processed.

Encrypted Stateful

As a security measure to protect private data, the portions of the stateful partition under /var and /home/chronos are mounted from an encrypted blob stored in the stateful partition. The encrypted storage is kept in /mnt/stateful_partition/encrypted. Mounting and unmounting the encrypted data is handled by the mount-encrypted command.

Not all Chrome OS Core platforms use this feature: The feature requires a TPM to be effective. Moreover, the feature presupposes some specific privacy requirements. For Chrome OS, these requirements apply; they may not be applicable on all platforms. When the feature is not used, /var and /home/chronos are simple bind mounts.
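When the feature is disabled, the effect is roughly that of plain bind mounts such as these (a sketch; the exact source paths within the stateful partition are assumptions):

mount --bind /mnt/stateful_partition/var /var
mount --bind /mnt/stateful_partition/home/chronos /home/chronos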

Special Boot Flows

Chrome OS Core supports a number of special boot flows, many of which are designed to handle system behaviors across multiple boots. This includes special cases for installation, recovery, and update.

Boot Alert Messages

Some special flows involve presenting simple messages to the user during basic services startup. All such flows are detected and handled from chromeos_startup. The flows involve one of two use cases:

Because these messages are presented during basic services startup, a special script called chromeos-boot-alert is used to accommodate the restricted operating environment:

Stateful Wipe (a.k.a. “Powerwash”)

In certain boot cases, chromeos_startup must erase and re-create the stateful partition. The script triggers stateful wipe for one of three conditions:

Principally, wiping the stateful partition means re-creating an empty filesystem using mkfs. Prior to creating the file system, the wipe procedure will also erase data in the partition in order to prevent leaking any private data that was present prior to the wipe.

If the wipe procedure is performed while in dev mode, the .developer_mode file will be created after the wipe. This marks that the system has successfully wiped stateful after the conversion to dev mode.
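In outline, the wipe amounts to something like the following (an illustrative sketch only, not the real implementation; STATE_DEV and the .developer_mode path are assumptions):

# Destroy the old contents so no private data survives, then build a
# fresh file system. (The real code uses a faster, more selective erase.)
dd if=/dev/zero of="${STATE_DEV}" bs=1M || true
mkfs.ext4 "${STATE_DEV}"

# In dev mode, record that the post-conversion wipe has completed.
mount "${STATE_DEV}" /mnt/stateful_partition
touch /mnt/stateful_partition/.developer_mode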

After the wipe procedure is complete, the system reboots; on reboot the system will rebuild directories and mount points in the stateful file system.

Interactions with Install, Recovery, and Update

After new software is installed by chromeos-install (including device recovery) or by automatic update, a reboot is necessary for the new software to take effect. The first reboot after installing new software triggers special handling, depending on the specific installation use case.

Rollback Protection After Update

If a catastrophic system failure manifests immediately after an automatic update, Chrome OS Core can detect the failure and roll back to the previously installed software. The mechanism relies on a three-way interaction among the update_engine service, the firmware, and the system application.

Each kernel GPT partition on the boot disk contains three special attribute flags, which the firmware uses to select which kernel to boot. The flags are designated “priority”, “successful”, and “tries”. The firmware selects the kernel based on these flags; the full rules are described in the design document for the disk format. The update_engine service updates these flags with specific values after applying an update, and again with different values after the system boots without failure.

After update_engine has finished downloading and installing a new image, the update’s post-install phase sets the new kernel’s GPT flags to the highest priority for boot, with “tries” set to 5 and “successful” set to 0. In this state, the partition will be selected by the firmware as the first kernel to boot.

On each boot, the firmware decrements the “tries” setting for the new kernel in the GPT. If “tries” is decremented to 0 and “successful” is never set to 1, the firmware will roll back to the previous working software. To prevent the rollback, the system must mark the kernel as good by setting the “tries” count to 0 and the “successful” flag to 1 sometime after booting.

Chrome OS Core does not automatically mark the kernel good immediately after boot. Instead, when the update_engine service starts, it sets a 45 second timer. When the timer expires, update_engine invokes a script called chromeos-setgoodkernel. That script marks the kernel as good, after which rollback is no longer possible.
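Marking the kernel good amounts to updating the GPT attribute flags on the currently booted kernel partition, roughly as follows (a sketch using the cgpt tool; the disk and partition index variables are placeholders):

# Set "successful" to 1 and "tries" to 0 for the kernel we booted from.
cgpt add -i "${KERN_INDEX}" -S 1 -T 0 "${BOOT_DISK}"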

The update_engine service depends on started system-services. Consequently, rollback will be triggered if a newly updated image consistently fails in any of the following ways:

Note that the user will typically have to forcibly power cycle a unit multiple times in any failure case other than a kernel panic.

A consequence of this design is that if the system application is crashing at startup, the system cannot receive new updates. This is a deliberate design choice: It is not a bug. The rationale is that if a system is failing in this way, rollback is by far the most preferable way to recover. Waiting for an update is unlikely to be useful in most cases:

Installation and Recovery

The chromeos-install script can be used to install an image from scratch on a device. The end-user recovery procedure is a wrapper around this script. Additionally, developers can run the script manually when they boot a custom image from USB in dev mode. Although the UX in the two cases is quite different, the system state after installation in either case is indistinguishable.

The installation process includes creating a stateful partition file system, similar to the process for stateful wipe. As with stateful wipe, installation is responsible for creating the .developer_mode file when installing in dev mode. Also as with stateful wipe, after rebooting the system will rebuild directories and mount points in the stateful file system.

Stateful Update and Install

For developer and test builds of Chrome OS, substantial portions of the software are installed in /usr/local, a bind mount over /mnt/stateful_partition/dev_image. This content is not delivered via the autoupdate mechanism. Instead, a script called stateful_update is used to update it separately.
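Conceptually, the bind mount is just the following (a sketch; on dev and test images it is set up during startup):

mount --bind /mnt/stateful_partition/dev_image /usr/local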

The stateful update procedure operates in two stages:

Chrome OS - Starting Chrome (the “ui” Job)

The Chrome browser is the system application for Chrome OS. The ui job is responsible for starting, managing, and restarting the Chrome OS session_manager process. The session_manager process in turn is responsible for managing the Chrome browser process.

Job Startup Flow

The ui job is responsible for starting session_manager. That program is responsible for the following:

Once Chrome has successfully displayed the initial startup screen, it kicks off the following sequence of events:

Handling Chrome Shutdown, Crashes, and Restarts

When a user logs out of Chrome, the browser process terminates normally. In turn, session_manager terminates, which ends the ui job. In normal operation, the ui job must respawn when this happens. However, in the event of a Chrome crash, the automatic respawn logic is more complex than Upstart’s standard respawn behavior. Instead, whenever the ui job stops, a separate ui-respawn job determines what to do according to these rules:

Special handling for abnormal termination is subject to rate limitations:

If a failure isn’t handled by respawning or rebooting, the ui job stops, but the system stays up. The user can manually force power-off, if desired.

Upstart Events During Chrome Startup

Below is a list of various Upstart events triggered by Chrome browser startup, and when they happen:

Event | When it happens
starting ui | This first occurs at boot, after started boot-services. Thereafter, the event re-occurs every time a user logs out, or any time Chrome restarts after a crash.
x-started | session_manager emits this event after the X server finishes initializing. This happens after every Chrome restart.
login-prompt-visible | session_manager emits this event after Chrome announces that it has finished displaying its login screen. This happens after every Chrome restart.
started boot-complete | This occurs the first time that login-prompt-visible is emitted after boot.
start-user-session | session_manager emits this event after Chrome announces the start of a user session.
stopping ui | This occurs whenever a user logs out, if the Chrome browser crashes, or when the system is shutting down.
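As an example of consuming these events, a hypothetical job that should run each time Chrome displays the login screen might look like this (the action shown is a placeholder):

start on login-prompt-visible
task
exec logger "login screen is up"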

Performance Considerations

The Boot Critical Path

Each phase of boot is dominated by one long running step. These steps must run sequentially, so together they make up a critical path. For Chrome OS, these are the longest running steps in the critical path:

chromeos_startup
X server startup, until session_manager emits the x-started event
Chrome browser startup, until login-prompt-visible

Adding time directly to the critical path will result in delaying boot by an equivalent time. Whenever possible, initialization should run in parallel with the critical path.

ureadahead

System startup time is heavily influenced by the wait time required to read data that isn’t yet in the file buffer cache. The ureadahead program can improve boot time by requesting data in advance of when it’s needed, so that boot spends less time waiting for data from the boot device. Some key facts about ureadahead:

Impact of Cached Data

During boot, various processes may cache data under /var in order to make the next startup faster. However, from time to time this cached data may need to be invalidated (i.e. removed), resulting in a one-time slower boot. Below is a list of known caches affecting boot time, and a summary of what causes them to be invalidated.

Measuring Performance

There’s a web site for this.

Boot performance is measured by capturing timestamps at specific moments during the boot flow. These are some of the key events reported by the tools:

bootstat name | keyval name                    | Meaning
pre-startup   | seconds_kernel_to_startup      | Start of chromeos_startup
post-startup  | seconds_kernel_to_startup_done | End of chromeos_startup
x-started     | seconds_kernel_to_x_started    | X server startup complete
chrome-exec   | seconds_kernel_to_chrome_exec  | Session manager fork/exec of the Chrome browser process
boot-complete | seconds_kernel_to_login        | Start of the boot-complete Upstart job

All of these events are part of the critical path.
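Timestamps are recorded with the bootstat utility; for example, a job can record its own milestone with a command along these lines (a sketch; see the bootstat package for exact usage):

bootstat boot-complete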

Kernel Performance Accounting Anomalies

For kernel startup, capturing a time stamp at the exact moment of some key transitions is inconvenient. The result is that certain events that ought to be ascribed to one phase (e.g. kernel startup) are ascribed to the adjacent phase:

What Matters, What Doesn’t Matter

Not every change to the boot flow matters to boot performance. Changes may happen to flows that run in parallel with the critical path without slowing it down. Even if a change measurably affects boot time, software engineering considerations such as maintainability or readability may matter more than performance. Below are guidelines for evaluating whether a change might impact boot time, and whether the impact is truly significant.

The Critical Path Matters

As noted, time added in sequence with the critical path is time added directly to boot time. When work executes in parallel with the critical path, the boot time impact will be much smaller. In the best case, work in parallel will consume resources that would otherwise be idle, meaning boot time is unaffected. Even if jobs in parallel compete for resources with the critical path, the impact is typically smaller than the total resource requirement of the extra work.

Evaluating Significance

Performance tuning frequently requires trade-offs between a simpler, more maintainable design and the best possible performance. Moreover, even when tuning doesn’t have a maintenance impact, the cost to make and test a code change may exceed the benefit of the performance improvement. Finally, boot time numbers tend to be very noisy; small changes are likely not to be statistically significant.

Use these rules when evaluating what trade-offs may be necessary for performance:

Note that these guidelines apply equally to improvements and regressions: Just as 50 ms slower isn’t a regression, 50 ms faster isn’t an improvement.

Design Principles

The principles in this section are meant to improve the maintainability of packages that deliver Upstart jobs.

Put Jobs in the Right Package

An Upstart job should ideally be associated with a specific service, live in the source code repository for that service, and be delivered in the same package as the service. This has multiple advantages:

Depend on the Public Events

For Chrome OS Core, there are four public Upstart events with well-defined, stable semantics. In order of preference, from most to least preferred, they are started system-services, started failsafe, started boot-services, and startup. A package that delivers an Upstart job should depend on the most preferred public job that will meet its requirements.

Note that a job should depend on only one of the four events; depending on more than one event is redundant:

Other than the four public events, Upstart jobs and events generated by the chromeos-base/chromeos-init package should be considered private to the package, and subject to change at any time. If your job depends on a private event, changes to chromeos-init could cause your job to start at the wrong time, or not start at all.

If your package depends on chromeos-base/chromeos-login, it can also depend on the standard Chrome startup events.

If your package delivers more than one job, it is safe and reasonable for some jobs to depend on others in the same package instead of depending on the public events.

Where possible, jobs should not depend on jobs delivered by other packages, because the inter-package coupling can create maintenance hazards. However, if two packages are already coupled for sound technical reasons, it’s reasonable for one of the packages to depend on the other’s Upstart jobs. For examples, look at the tree of jobs for starting the network (see init/shill.conf in src/platform/shill) or cryptohome services (see init/cryptohomed.conf in src/platform/cryptohome).

Depend on Jobs, not Events

Generally, start and stop conditions for jobs should be based on job events like started and stopped, rather than starting or stopping jobs with initctl emit, start, or stop. Depending on a job makes it easier to find the sources of events, and trace their flow through the source code.
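For example, prefer a start condition tied to another job's state over a hand-emitted event (the job and event names here are hypothetical):

# Preferred: the dependency on the other job is visible in the job graph.
start on started some-other-job

# Harder to trace: requires a script somewhere to run
# "initctl emit my-custom-event" at the right moment.
start on my-custom-event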

If you must use initctl emit, start, or stop, follow these guidelines to help readability:

Create Your Own Storage

If your service requires writable storage (e.g. a directory in /var/lib), your service is responsible for creating the files and directories. Typically, this can happen in a pre-start script in the Upstart job for the service. Work to create this storage should belong to the package that owns the service. The work should not be handled in chromeos_startup, or an Upstart job not related to your package. The work must happen at boot: In general, there is no way to initialize writable storage at build time.

The objective is that if your package isn’t part of a distribution, it shouldn’t create vestiges in the file systems of distributions that don’t need it.

Note that packages can and should rely on directories specified in the Linux FHS. If a location specified in the standard is missing, code to create it should be added to chromeos_startup.

Runtime Resource Limits

Every service should limit its runtime resource usage whenever possible. Limits are important to help prevent memory or resource leaks in services from freezing the system. Upstart has directives to control these.

Here are the important ones:
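For illustration, a daemon's job might bound its resource usage with stanzas like the following (a sketch using standard Upstart directives; the exact set recommended for Chrome OS may differ):

# Make this process a preferred OOM-kill victim if memory runs short.
oom score 500

# Cap open file descriptors (soft and hard limits).
limit nofile 1024 1024

# Run at reduced CPU scheduling priority.
nice 10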

Source Code and ebuilds

The table below shows the package names and source code repository paths for key behaviors described in this document.

Package                          | Source Repo                 | Description
chromeos-base/chromeos-init      | src/platform2/init          | Basic boot flow
chromeos-base/chromeos-login     | src/platform2/login_manager | Chrome startup
chromeos-base/update_engine      | src/platform/update_engine  | Rollback support
chromeos-base/chromeos-installer | src/platform2/installer     | Install/Recovery
chromeos-base/chromeos-assets    | src/platform/assets         | Boot message texts

Simple Upstart Job Recipes

Simple one-time script

You need to run a short script once after boot. You don’t care about running the command if the system application fails at startup.

Pattern your job after this:

start on started system-services

script
  # ... initialization commands ...
end script

Simple daemon

You need a daemon to start after boot. If the daemon dies it should restart. The daemon stores working data in a directory under /var/lib.

Pattern your job after this:

start on started system-services
stop on stopping system-services
respawn

pre-start script
  mkdir -p /var/lib/my-daemon
end script

exec my-daemon

Many daemons require expect fork or expect daemon in addition to respawn. Consult your friendly neighborhood Upstart guru for advice.
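For instance, a daemon that detaches by forking exactly once might be written like this (a sketch; my-forking-daemon is a placeholder):

start on started system-services
stop on stopping system-services
respawn

# Tell Upstart to track the child created by the daemon's single fork().
expect fork
exec my-forking-daemon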

Failsafe service

You have a service that’s needed for administrative or debug access to your device. Your service can start after the system application, so that it won’t slow down boot. However, if the system application fails, your service is vital for connecting to the failed unit in order to repair or debug it.

Pattern your job after this:

start on started failsafe
stop on stopping failsafe
respawn

exec my-important-administrative-daemon

Service Required by the System Application

When your system application starts, it connects to a service provided by a separate daemon that must be started in its own job. The system application can’t finish initialization without this second service, and will wait until it’s ready.

Pattern your job after this:

start on started boot-services
stop on stopping boot-services
respawn

exec my-important-daemon

Initialization That Blocks Chrome Startup

You have system-wide initialization that must finish before Chrome can start. No other services have a dependency.

Pattern your job after this:

start on starting ui
task

script
  # perform your important initialization here
end script

File System Initialization Required by Multiple Services

You have a service that provides a shared resource in the file system (e.g. a directory, a named FIFO, etc.). You need to make sure that the file system resource is created before any service depending on started boot-services.

Pattern your job after this:

start on starting boot-services
task

script
  create-my-important-resource
end script

FAQ

Creating a New Chrome OS Core Platform

Q: What Upstart jobs must my platform supply?

A: The platform must supply a boot-complete job conforming to certain basic requirements.

Q: What are the requirements for the boot-complete job?

A: The job must have a start condition that fires once the system application is up and providing service. Your platform can define “providing service” any way it wants, but generally the job shouldn’t start until these criteria are met:

For reference, on Chrome OS, after stripping out comments and boilerplate, the boot-complete job consists of just this one line:

start on login-prompt-visible

Q: How do I provide a boot-complete.conf for my board?

A: boot-complete.conf is provided by virtual/chromeos-bootcomplete. You can override this virtual in your board overlay.

Q: How do I disable/enable the encrypted stateful feature?

A: The feature is enabled by default. To disable the feature, set USE="-encrypted_stateful".

Adding a New Service

Q: What repository should hold my Upstart job?

A: Ideally, the job should live in the source repository for the service it starts. If there’s no such repository (e.g. your service is an upstream package), create a new package to install the job. For an example, see the chromeos-base/openssh-server-init package.

Q: When should I use start on started system-services?

A: This is the preferred way to start any job that should run once at boot time. However, if your job is important to starting the system application, you may need to depend on boot-services instead.

Q: When should I use start on started failsafe?

A: Use this for a job that can start after the system application, but that must run eventually even if the system application never starts. This start condition is most useful for services that are needed to debug or recover a failed unit.

Q: When should I use start on started boot-services?

A: Use this when the job must start in parallel with the system application. This is typically required for one of these reasons:

Note that by running in parallel with the system application, the job may slow down system boot.

Q: Can my service just use start on startup?

A: Most services should avoid doing this, because of the restrictions it imposes:

References

Upstart reference: http://upstart.ubuntu.com/cookbook/

The ureadahead man page: http://manpages.ubuntu.com/manpages/precise/man8/ureadahead.8.html