Welcome to the Yocto Project Documentation
Yocto Project Quick Build
Welcome!
This short document steps you through the process for a typical image build using the Yocto Project. The document also introduces how to configure a build for specific hardware. You will use Yocto Project to build a reference embedded OS called Poky.
Note
The examples in this paper assume you are using a native Linux system running a recent Ubuntu Linux distribution. If the machine you want to use Yocto Project on to build an image (Build Host) is not a native Linux system, you can still perform these steps by using CROss PlatformS (CROPS) and setting up a Poky container. See the Setting Up to Use CROss PlatformS (CROPS) section in the Yocto Project Development Tasks Manual for more information.
You may use version 2 of Windows Subsystem For Linux (WSL 2) to set up a build host using Windows 10 or later, or Windows Server 2019 or later. See the Setting Up to Use Windows Subsystem For Linux (WSL 2) section in the Yocto Project Development Tasks Manual for more information.
If you want more conceptual or background information on the Yocto Project, see the Yocto Project Overview and Concepts Manual.
Compatible Linux Distribution
Make sure your Build Host meets the following requirements:
At least 90 Gbytes of free disk space, though much more will help to run multiple builds and increase performance by reusing build artifacts.
At least 8 Gbytes of RAM, though a modern build host with as much RAM and as many CPU cores as possible is strongly recommended to maximize build performance.
Runs a supported Linux distribution (i.e. recent releases of Fedora, openSUSE, CentOS, Debian, or Ubuntu). For a list of Linux distributions that support the Yocto Project, see the Supported Linux Distributions section in the Yocto Project Reference Manual. For detailed information on preparing your build host, see the Preparing the Build Host section in the Yocto Project Development Tasks Manual.
Git 1.8.3.1 or greater
tar 1.28 or greater
Python 3.8.0 or greater
gcc 8.0 or greater
GNU make 4.0 or greater
If your build host does not meet any of these listed version requirements, you can take steps to prepare the system so that you can still use the Yocto Project. See the Required Git, tar, Python, make and gcc Versions section in the Yocto Project Reference Manual for information.
Build Host Packages
You must install essential host packages on your build host. The following command installs the host packages based on an Ubuntu distribution:
$ sudo apt install gawk wget git diffstat unzip texinfo gcc build-essential chrpath socat cpio python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 libegl1-mesa libsdl1.2-dev python3-subunit mesa-common-dev zstd liblz4-tool file locales libacl1
$ sudo locale-gen en_US.UTF-8
Note
For host package requirements on all supported Linux distributions, see the Required Packages for the Build Host section in the Yocto Project Reference Manual.
Use Git to Clone Poky
Once you complete the setup instructions for your machine, you need to get a copy of the Poky repository on your build host. Use the following commands to clone the Poky repository.
$ git clone git://git.yoctoproject.org/poky
Cloning into 'poky'...
remote: Counting objects: 432160, done.
remote: Compressing objects: 100% (102056/102056), done.
remote: Total 432160 (delta 323116), reused 432037 (delta 323000)
Receiving objects: 100% (432160/432160), 153.81 MiB | 8.54 MiB/s, done.
Resolving deltas: 100% (323116/323116), done.
Checking connectivity... done.
Go to the Releases wiki page, and choose a release codename (such as scarthgap), corresponding to either the latest stable release or a Long Term Support release.
Then move to the poky directory and take a look at existing branches:
$ cd poky
$ git branch -a
.
.
.
remotes/origin/HEAD -> origin/master
remotes/origin/dunfell
remotes/origin/dunfell-next
.
.
.
remotes/origin/gatesgarth
remotes/origin/gatesgarth-next
.
.
.
remotes/origin/master
remotes/origin/master-next
.
.
.
For this example, check out the scarthgap branch based on the Scarthgap release:
$ git checkout -t origin/scarthgap -b my-scarthgap
Branch 'my-scarthgap' set up to track remote branch 'scarthgap' from 'origin'.
Switched to a new branch 'my-scarthgap'
The previous Git checkout command creates a local branch named my-scarthgap. The files available to you in that branch exactly match the repository’s files in the scarthgap release branch.
Note that you can regularly type the following command in the same directory to keep your local files in sync with the release branch:
$ git pull
For more options and information about accessing Yocto Project related repositories, see the Locating Yocto Project Source Files section in the Yocto Project Development Tasks Manual.
Building Your Image
Use the following steps to build your image. The build process creates an entire Linux distribution, including the toolchain, from source.
Note
If you are working behind a firewall and your build host is not set up for proxies, you could encounter problems with the build process when fetching source code (e.g. fetcher failures or Git failures).
If you do not know your proxy settings, consult your local network infrastructure resources and get that information. A good starting point could also be to check your web browser settings. Finally, you can find more information on the “Working Behind a Network Proxy” page of the Yocto Project Wiki.
Initialize the Build Environment: From within the poky directory, run the oe-init-build-env environment setup script to define Yocto Project’s build environment on your build host.
$ cd poky
$ source oe-init-build-env
You had no conf/local.conf file. This configuration file has therefore been
created for you with some default values. You may wish to edit it to, for
example, select a different MACHINE (target hardware). See conf/local.conf
for more information as common configuration options are commented.

You had no conf/bblayers.conf file. This configuration file has therefore
been created for you with some default values. To add additional metadata
layers into your configuration please add entries to conf/bblayers.conf.

The Yocto Project has extensive documentation about OE including a reference
manual which can be found at:
    https://docs.yoctoproject.org

For more information about OpenEmbedded see their website:
    https://www.openembedded.org/

### Shell environment set up for builds. ###

You can now run 'bitbake <target>'

Common targets are:
    core-image-minimal
    core-image-full-cmdline
    core-image-sato
    core-image-weston
    meta-toolchain
    meta-ide-support

You can also run generated QEMU images with a command like 'runqemu qemux86-64'

Other commonly useful commands are:
 - 'devtool' and 'recipetool' handle common recipe tasks
 - 'bitbake-layers' handles common layer tasks
 - 'oe-pkgdata-util' handles common target package tasks
Among other things, the script creates the Build Directory, which is build in this case and is located in the Source Directory. After the script runs, your current working directory is set to the Build Directory. Later, when the build completes, the Build Directory contains all the files created during the build.
Examine Your Local Configuration File: When you set up the build environment, a local configuration file named local.conf becomes available in a conf subdirectory of the Build Directory. For this example, the defaults are set to build for a qemux86 target, which is suitable for emulation. The package manager used is set to the RPM package manager.
Tip
You can significantly speed up your build and guard against fetcher failures by using Shared State Cache mirrors and enabling Hash Equivalence. This way, you can use pre-built artifacts rather than building them. This is relevant only when your network and the server that you use can download these artifacts faster than you would be able to build them.
To use such mirrors, uncomment the below lines in your conf/local.conf file in the Build Directory:
BB_HASHSERVE_UPSTREAM = "hashserv.yocto.io:8687"
SSTATE_MIRRORS ?= "file://.* http://cdn.jsdelivr.net/yocto/sstate/all/PATH;downloadfilename=PATH"
BB_HASHSERVE = "auto"
BB_SIGNATURE_HANDLER = "OEEquivHash"
Start the Build: Continue with the following command to build an OS image for the target, which is core-image-sato in this example:
$ bitbake core-image-sato
For information on using the bitbake command, see the BitBake section in the Yocto Project Overview and Concepts Manual, or see The BitBake Command in the BitBake User Manual.
Simulate Your Image Using QEMU: Once this particular image is built, you can start QEMU, which is a Quick EMUlator that ships with the Yocto Project:
$ runqemu qemux86-64
If you want to learn more about running QEMU, see the Using the Quick EMUlator (QEMU) chapter in the Yocto Project Development Tasks Manual.
Exit QEMU: Exit QEMU by either clicking on the shutdown icon or by typing Ctrl-C in the QEMU transcript window from which you invoked QEMU.
Customizing Your Build for Specific Hardware
So far, all you have done is quickly built an image suitable for emulation only. This section shows you how to customize your build for specific hardware by adding a hardware layer into the Yocto Project development environment.
In general, layers are repositories that contain related sets of instructions and configurations that tell the Yocto Project what to do. Isolating related metadata into functionally specific layers facilitates modular development and makes it easier to reuse the layer metadata.
Note
By convention, layer names start with the string “meta-“.
Follow these steps to add a hardware layer:
Find a Layer: Many hardware layers are available; the Yocto Project Source Repositories host a large number of them. This example adds the meta-altera hardware layer.
Clone the Layer: Use Git to make a local copy of the layer on your machine. You can put the copy in the top level of the copy of the Poky repository created earlier:
$ cd poky
$ git clone https://github.com/kraj/meta-altera.git
Cloning into 'meta-altera'...
remote: Counting objects: 25170, done.
remote: Compressing objects: 100% (350/350), done.
remote: Total 25170 (delta 645), reused 719 (delta 538), pack-reused 24219
Receiving objects: 100% (25170/25170), 41.02 MiB | 1.64 MiB/s, done.
Resolving deltas: 100% (13385/13385), done.
Checking connectivity... done.
The hardware layer is now available next to other layers inside the Poky reference repository on your build host as meta-altera and contains all the metadata needed to support hardware from Altera, which is owned by Intel.
Note
It is recommended for layers to have a branch per Yocto Project release. Please make sure to checkout the layer branch supporting the Yocto Project release you’re using.
Change the Configuration to Build for a Specific Machine: The MACHINE variable in the local.conf file specifies the machine for the build. For this example, set the MACHINE variable to cyclone5. These configurations are used: https://github.com/kraj/meta-altera/blob/master/conf/machine/cyclone5.conf.
Note
See the “Examine Your Local Configuration File” step earlier for more information on configuring the build.
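For reference, here is a minimal sketch of that change in conf/local.conf; the rest of your file's default contents will differ:
# Select the machine definition provided by the meta-altera layer
MACHINE = "cyclone5"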
Add Your Layer to the Layer Configuration File: Before you can use a layer during a build, you must add it to your bblayers.conf file, which is found in the conf directory of the Build Directory.
Use the bitbake-layers add-layer command to add the layer to the configuration file:
$ cd poky/build
$ bitbake-layers add-layer ../meta-altera
NOTE: Starting bitbake server...
Parsing recipes: 100% |##################################################################| Time: 0:00:32
Parsing of 918 .bb files complete (0 cached, 918 parsed). 1401 targets, 123 skipped, 0 masked, 0 errors.
You can find more information on adding layers in the Adding a Layer Using the bitbake-layers Script section.
Completing these steps has added the meta-altera layer to your Yocto Project development environment and configured it to build for the cyclone5 machine.
Note
The previous steps are for demonstration purposes only. If you were to attempt to build an image for the cyclone5 machine, you should read the Altera README.
Creating Your Own General Layer
Maybe you have an application or specific set of behaviors you need to isolate. You can create your own general layer using the bitbake-layers create-layer command. The tool automates layer creation by setting up a subdirectory with a layer.conf configuration file, a recipes-example subdirectory that contains an example.bb recipe, a licensing file, and a README.
The following commands run the tool to create a layer named meta-mylayer in the poky directory:
$ cd poky
$ bitbake-layers create-layer meta-mylayer
NOTE: Starting bitbake server...
Add your new layer with 'bitbake-layers add-layer meta-mylayer'
For more information on layers and how to create them, see the Creating a General Layer Using the bitbake-layers Script section in the Yocto Project Development Tasks Manual.
Where To Go Next
Now that you have experienced using the Yocto Project, you might be asking yourself “What now?”. The Yocto Project has many sources of information including the website, wiki pages, and user manuals:
Website: The Yocto Project Website provides background information, the latest builds, breaking news, full development documentation, and access to a rich Yocto Project Development Community into which you can tap.
Video Seminar: The Introduction to the Yocto Project and BitBake, Part 1 and Introduction to the Yocto Project and BitBake, Part 2 videos offer a video seminar introducing you to the most important aspects of developing a custom embedded Linux distribution with the Yocto Project.
Yocto Project Overview and Concepts Manual: The Yocto Project Overview and Concepts Manual is a great place to start to learn about the Yocto Project. This manual introduces you to the Yocto Project and its development environment. The manual also provides conceptual information for various aspects of the Yocto Project.
Yocto Project Wiki: The Yocto Project Wiki provides additional information on where to go next when ramping up with the Yocto Project, release information, project planning, and QA information.
Yocto Project Mailing Lists: Related mailing lists provide a forum for discussion, patch submission and announcements. There are several mailing lists grouped by topic. See the Mailing lists section in the Yocto Project Reference Manual for a complete list of Yocto Project mailing lists.
Comprehensive List of Links and Other Documentation: The Links and Related Documentation section in the Yocto Project Reference Manual provides a comprehensive list of all related links and other user documentation.
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by Creative Commons.
To report any inaccuracies or problems with this (or any other Yocto Project) manual, or to send additions or changes, please send email/patches to the Yocto Project documentation mailing list at docs@lists.yoctoproject.org or log into the Libera Chat #yocto channel.
What I wish I’d known about Yocto Project
Note
Before reading further, make sure you’ve taken a look at the Software Overview page which presents the definitions for many of the terms referenced here. Also, know that some of the information here won’t make sense now, but as you start developing, it is the information you’ll want to keep close at hand. These are best known methods for working with Yocto Project and they are updated regularly.
Using the Yocto Project is fairly easy, until something goes wrong. Without an understanding of how the build process works, you’ll find yourself trying to troubleshoot “a black box”. Here are a few items that new users wished they had known before embarking on their first build with Yocto Project. Feel free to contact us with other suggestions.
Use Git, not the tarball download: If you clone with Git, you can pull in bug fixes and updates simply by updating your local repository. If you download the tarball instead, you are responsible for tracking and applying updates yourself.
Get to know the layer index: All layers can be found in the layer index. Layers which have applied for Yocto Project Compatible status (structure continuity assurance and testing) can be found in the Yocto Project Compatible Layers page. Generally check the Compatible layer index first, and if you don’t find the necessary layer, check the general layer index. The layer index is an original artifact from the OpenEmbedded project. As such, that index doesn’t have the curating and testing that the Yocto Project provides for the Yocto Project Compatible layer list, but the latter has fewer entries. Know that when you start searching in the layer index, not all layers have the same level of maturity, validation, or usability. Nor do searches prioritize displayed results. There is no easy way to help you through the process of choosing the best layer to suit your needs. Consequently, it is often trial and error, checking the mailing lists, or working with other developers through collaboration rooms that can help you make good choices.
Use existing BSP layers from silicon vendors when possible: Intel, TI, NXP and others have information on what BSP layers to use with their silicon. These layers have names such as “meta-intel” or “meta-ti”. Try not to build layers from scratch. If you do have custom silicon, use one of these layers as a guide or template and familiarize yourself with the Yocto Project Board Support Package Developer’s Guide.
Do not put everything into one layer: Use different layers to logically separate information in your build. As an example, you could have a BSP layer, a GUI layer, a distro configuration, middleware, or an application (e.g. “meta-filesystems”, “meta-python”, “meta-intel”, and so forth). Putting your entire build into one layer limits and complicates future customization and reuse. Isolating information into layers, on the other hand, helps simplify future customizations and reuse.
Never modify the POKY layer. Never. Ever. When you update to the next release, you’ll lose all of your work. ALL OF IT.
Don’t be fooled by documentation searching results: Yocto Project documentation is always being updated. Unfortunately, when you use Google to search for Yocto Project concepts or terms, Google consistently searches and retrieves older versions of Yocto Project manuals. For example, searching for a particular topic using Google could result in a “hit” on a Yocto Project manual that is several releases old. To be sure that you are using the most current Yocto Project documentation, use the drop-down menu at the top of any of its pages.
Many developers look through the All-in-one ‘Mega’ Manual for a concept or term by doing a search through the whole page. This manual is a concatenation of the core set of Yocto Project manuals. Thus, a simple string search using Ctrl-F in this manual produces all the “hits” for a desired term or concept. Once you find the area in which you are interested, you can display the actual manual, if desired. It is also possible to use the search bar in the menu or in the left navigation pane.
Understand the basic concepts of how the build system works: the workflow: Understanding the Yocto Project workflow is important as it can help you both pinpoint where trouble is occurring and how the build is breaking. The workflow breaks down into the following steps:
Fetch – get the source code
Extract – unpack the sources
Patch – apply patches for bug fixes and new capability
Configure – set up your environment specifications
Build – compile and link
Install – copy files to target directories
Package – bundle files for installation
During “fetch”, a failure usually means the build system could not find or download the source code. During “extract”, a failure usually points to an invalid or corrupt archive. In other words, the function of a particular part of the workflow gives you an idea of what might be going wrong. Each workflow step also corresponds to a BitBake task that you can run on its own, as sketched below.
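This is only a rough sketch; busybox is just an example recipe name, and any recipe in your configuration behaves the same way:
$ bitbake -c fetch busybox        # Fetch
$ bitbake -c unpack busybox       # Extract
$ bitbake -c patch busybox        # Patch
$ bitbake -c configure busybox    # Configure
$ bitbake -c compile busybox      # Build
$ bitbake -c install busybox      # Install
$ bitbake -c package busybox      # Package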
Know that you can generate a dependency graph and learn how to do it: A dependency graph shows dependencies between recipes, tasks, and targets. You can use the “-g” option with BitBake to generate this graph. When you start a build and the build breaks, you could see packages you have no clue about and no idea why the build system has included them. The dependency graph can clarify that confusion. You can learn more about dependency graphs and how to generate them in the Generating Dependency Graphs section in the BitBake User Manual.
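For example (core-image-minimal is just one possible target), the first command below writes graph files into the current directory; recent BitBake releases typically produce task-depends.dot and pn-buildlist. The second form opens the graphical task explorer, if it is available on your system:
$ bitbake -g core-image-minimal
$ bitbake -g -u taskexp core-image-minimal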
Here’s how you decode “magic” folder names in tmp/work: The build system fetches, unpacks, preprocesses, and builds. If something goes wrong, the build system reports the path to a folder under the temporary build area (build/tmp) where the files and packages resulting from that recipe’s build reside. For a detailed example of this process, see the example. Unfortunately this example is based on an earlier release of Yocto Project.
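As a hypothetical illustration of decoding such a path (busybox and the wildcards are examples; actual directory names depend on your machine, tuning, and recipe versions):
# build/tmp/work/<target-arch-and-tune>/<recipe-name>/<version>/ is the recipe's work area
$ ls tmp/work/*/busybox/*/                       # unpacked sources, image/, packages-split/, temp/, and more
$ less tmp/work/*/busybox/*/temp/log.do_compile  # per-task logs such as log.do_compile and run.do_compile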
When you perform a build, you can use the “-u” BitBake command-line option to specify a user interface viewer into the dependency graph (e.g. knotty, ncurses, or taskexp) that helps you understand the build dependencies better.
You can build more than just images: You can build and run a specific task for a specific package (including devshell) or even a single recipe. When developers first start using the Yocto Project, the instructions found in the Yocto Project Quick Build show how to create an image and then run or flash that image. However, you can actually build just a single recipe. Thus, if some dependency or recipe isn’t working, you can just say “bitbake foo” where “foo” is the name for a specific recipe. As you become more advanced using the Yocto Project, and if builds are failing, it can be useful to make sure the fetch itself works as desired. Here are some valuable links: Using a Development Shell for information on how to build and run a specific task using devshell. Also, the SDK manual shows how to build out a specific recipe.
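For example (dropbear is only an example recipe name here), you can build a single recipe, verify that just its fetch works, or open a development shell inside its build environment:
$ bitbake dropbear
$ bitbake -c fetch dropbear
$ bitbake -c devshell dropbear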
An ambiguous definition: Package vs Recipe: A recipe contains instructions the build system uses to create packages. Recipes and Packages are the difference between the front end and the result of the build process.
As mentioned, the build system takes the recipe and creates packages from the recipe’s instructions. The resulting packages are related to the one thing the recipe is building but are different parts (packages) of the build (i.e. the main package, the doc package, the debug symbols package, the separate utilities package, and so forth). The build system splits out the packages so that you don’t need to install the packages you don’t want or need, which is advantageous because you are building for small devices when developing for embedded and IoT.
You will want to learn about and know what’s packaged in the root filesystem.
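Once a build has populated the package data, the oe-pkgdata-util helper is one way to answer these questions; this is only a sketch, and the recipe, package, path, and machine names are examples:
$ oe-pkgdata-util list-pkgs -p busybox        # packages produced by a recipe
$ oe-pkgdata-util list-pkg-files busybox      # files shipped in a given package
$ oe-pkgdata-util find-path /bin/busybox      # which package provides a given file
$ ls tmp/deploy/images/qemux86-64/*.manifest  # per-image manifests list what went into each image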
Create your own image recipe: There are a number of ways to create your own image recipe. We suggest you create your own image recipe as opposed to appending an existing recipe. It is trivial and easy to write an image recipe. Again, do not try appending to an existing image recipe. Create your own and do it right from the start.
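Here is a minimal sketch of such an image recipe; the layer path, file name, and package list are illustrative only:
# meta-mylayer/recipes-core/images/my-image.bb
SUMMARY = "A small custom image for my product"
LICENSE = "MIT"

# Build on the core image class instead of appending to an existing image recipe.
inherit core-image

# Add the extra packages this product needs.
IMAGE_INSTALL:append = " dropbear htop"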
Finally, here is a list of the basic skills you will need as a systems developer (a configuration sketch after this list illustrates a few of them). You must be able to:
deal with corporate proxies
add a package to an image
understand the difference between a recipe and package
build a package by itself and why that’s useful
find out what packages are created by a recipe
find out what files are in a package
find out what files are in an image
add an ssh server to an image (enable transferring of files to target)
know the anatomy of a recipe
know how to create and use layers
find recipes (with the OpenEmbedded Layer index)
understand the difference between machine and distro settings
find and use the right BSP (machine) for your hardware
find examples of distro features and know where to set them
understand the task pipeline and execute individual tasks
understand devtool and how it simplifies your workflow
improve build speeds with shared downloads and shared state cache
generate and understand a dependency graph
generate and understand the BitBake environment
build an Extensible SDK for applications development
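To make a few of these skills concrete, here is a hedged sketch of conf/local.conf additions covering adding a package to an image, enabling an SSH server, and sharing downloads and shared state between builds; the paths and package name are examples:
# Add a package to every image built from this build directory.
IMAGE_INSTALL:append = " htop"

# Add an SSH server (and development conveniences) through image features.
EXTRA_IMAGE_FEATURES += "ssh-server-dropbear debug-tweaks"

# Share downloads and shared-state cache across build directories to speed up builds.
DL_DIR = "/home/user/yocto/downloads"
SSTATE_DIR = "/home/user/yocto/sstate-cache"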
Depending on what your primary interests are with the Yocto Project, you could consider any of the following reading:
Look Through the Yocto Project Development Tasks Manual: This manual contains procedural information grouped to help you get set up, work with layers, customize images, write new recipes, work with libraries, and use QEMU. The information is task-based and spans the breadth of the Yocto Project. See the Yocto Project Development Tasks Manual.
Look Through the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual: This manual describes how to use both the standard SDK and the extensible SDK, which are used primarily for application development. The Using the Extensible SDK part of that manual also provides example workflows that use devtool. See the section Using devtool in Your SDK Workflow for more information.
Learn About Kernel Development: If you want to see how to work with the kernel and understand Yocto Linux kernels, see the Yocto Project Linux Kernel Development Manual. This manual provides information on how to patch the kernel, modify kernel recipes, and configure the kernel.
Learn About Board Support Packages (BSPs): If you want to learn about BSPs, see the Yocto Project Board Support Package Developer’s Guide. This manual also provides an example BSP creation workflow. See the Board Support Packages (BSP) — Developer’s Guide section.
Learn About Toaster: Toaster is a web interface to the Yocto Project’s OpenEmbedded build system. If you are interested in using this type of interface to create images, see the Toaster User Manual.
Discover the VSCode extension: The Yocto Project BitBake extension for the Visual Studio Code IDE provides language features and commands for working with the Yocto Project. If you are interested in using this extension, visit its marketplace page.
Have Available the Yocto Project Reference Manual: Unlike the rest of the Yocto Project manual set, this manual is comprised of material suited for reference rather than procedures. You can get build details, a closer look at how the pieces of the Yocto Project development environment work together, information on various technical details, guidance on migrating to a newer Yocto Project release, reference material on the directory structure, classes, and tasks. The Yocto Project Reference Manual also contains a fairly comprehensive glossary of variables used within the Yocto Project.
Transitioning to a custom environment for systems development
Note
So you’ve finished the Yocto Project Quick Build and glanced over the document What I wish I’d known about Yocto Project, the latter contains important information learned from other users. You’re well prepared. But now, as you are starting your own project, it isn’t exactly straightforward what to do. And, the documentation is daunting. We’ve put together a few hints to get you started.
Make a list of the processor, target board, technologies, and capabilities that will be part of your project. You will be finding layers with recipes and other metadata that support these things, and adding them to your configuration (see the step on finding a BSP below).
Set up your board support. Even if you’re using custom hardware, it might be easier to start with an existing target board that uses the same processor or at least the same architecture as your custom hardware. Knowing the board already has a functioning Board Support Package (BSP) within the project makes it easier for you to get comfortable with project concepts.
Find and acquire the best BSP for your target. Use the Yocto Project Compatible Layers or even the OpenEmbedded Layer Index to find and acquire the best BSP for your target board. The Yocto Project layer index BSPs are regularly validated. The best place to get your first BSP is from your silicon manufacturer or board vendor – they can point you to their most qualified efforts. In general, for Intel silicon use meta-intel, for Texas Instruments use meta-ti, and so forth. Choose a BSP that has been tested with the same Yocto Project release that you’ve downloaded. Be aware that some BSPs may not be immediately supported on the very latest release, but they will be eventually.
You might want to start with the build specification that Poky provides (which is the reference embedded distribution) and then add your newly chosen layers to that. Here is the information about adding layers.
Based on the layers you’ve chosen, make needed changes in your configuration. For instance, you’ve chosen a machine type and added in the corresponding BSP layer. You’ll then need to change the value of the MACHINE variable in your configuration file (build/local.conf) to point to that same machine type. There could be other layer-specific settings you need to change as well. Each layer has a README document that you can look at for this type of usage information.
Add a new layer for any custom recipes and metadata you create. Use the bitbake-layers create-layer tool for Yocto Project 2.4+ releases. If you are using a Yocto Project release earlier than 2.4, use the yocto-layer create tool. The bitbake-layers tool also provides a number of other useful layer-related commands. See the Creating a General Layer Using the bitbake-layers Script section.
Create your own layer for the BSP you’re going to use. It is not common that you would need to create an entire BSP from scratch unless you have a really special device. Even if you are using an existing BSP, create your own layer for the BSP. For example, given a 64-bit x86-based machine, copy the conf/intel-corei7-64 definition and give the machine a relevant name (think board name, not product name). Make sure the layer configuration is dependent on the meta-intel layer (or at least, meta-intel remains in your bblayers.conf). Now you can put your custom BSP settings into your layer and you can re-use it for different applications.
Write your own recipe to build additional software support that isn’t already available in the form of a recipe. Creating your own recipe is especially important for custom application software that you want to run on your device. Writing new recipes is a process of refinement. Start by getting each step of the build process working beginning with fetching all the way through packaging. Next, run the software on your target and refine further as needed. See Writing a New Recipe in the Yocto Project Development Tasks Manual for more information.
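Here is a minimal sketch of what such a recipe can look like; the layer path, recipe name, source URL, and sha256 checksum are placeholders, and the license checksum must match whatever license file you actually point at:
# meta-mylayer/recipes-apps/myapp/myapp_1.0.bb
SUMMARY = "Example application recipe"
LICENSE = "MIT"
# Checksum of the MIT text shipped with OE-Core; adjust if you point at a file in your sources.
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

# Fetch: where the source comes from (a release tarball in this sketch).
SRC_URI = "https://example.com/releases/myapp-${PV}.tar.gz"
SRC_URI[sha256sum] = "0000000000000000000000000000000000000000000000000000000000000000"

S = "${WORKDIR}/myapp-${PV}"

# Build and install: a hand-written compile and install for a single C source file.
do_compile() {
    ${CC} ${CFLAGS} ${LDFLAGS} myapp.c -o myapp
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 myapp ${D}${bindir}/myapp
}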
Now you’re ready to create an image recipe. There are a number of ways to do this. However, it is strongly recommended that you have your own image recipe — don’t try appending to existing image recipes. Recipes for images are trivial to create and you usually want to fully customize their contents.
Build your image and refine it. Add what’s missing and fix anything that’s broken using your knowledge of the workflow to identify where issues might be occurring.
Consider creating your own distribution. When you get to a certain level of customization, consider creating your own distribution rather than using the default reference distribution.
Distribution settings define the packaging back-end (e.g. rpm or other) as well as the package feed and possibly the update solution. You would create your own distribution in a new layer inheriting from Poky but overriding what needs to change for your distribution. If you find yourself adding a lot of configuration to your local.conf file aside from paths and other typical local settings, it’s time to consider creating your own distribution.
You can add product specifications that can customize the distribution if needed in other layers. You can also add other functionality specific to the product. But to update the distribution, not individual products, you update the distribution feature through that layer.
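A minimal sketch of such a distribution configuration, assuming a hypothetical layer named meta-mydistro and illustrative values:
# meta-mydistro/conf/distro/mydistro.conf
# Inherit Poky's settings and override only what needs to change.
require conf/distro/poky.conf

DISTRO = "mydistro"
DISTRO_NAME = "My Product Distro"
DISTRO_VERSION = "1.0"

# Example policy decisions owned by the distribution.
PACKAGE_CLASSES = "package_ipk"
INIT_MANAGER = "systemd"

You would then select it by setting DISTRO = "mydistro" in your local.conf or site configuration.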
Congratulations! You’re well on your way. Welcome to the Yocto Project community.
Yocto Project Overview and Concepts Manual
1 The Yocto Project Overview and Concepts Manual
1.1 Welcome
Welcome to the Yocto Project Overview and Concepts Manual! This manual introduces the Yocto Project by providing concepts, software overviews, best-known-methods (BKMs), and any other high-level introductory information suitable for a new Yocto Project user.
Here is what you can get from this manual:
Introducing the Yocto Project: This chapter provides an introduction to the Yocto Project. You will learn about features and challenges of the Yocto Project, the layer model, components and tools, development methods, the Poky reference distribution, the OpenEmbedded build system workflow, and some basic Yocto terms.
The Yocto Project Development Environment: This chapter helps you get started understanding the Yocto Project development environment. You will learn about open source, development hosts, Yocto Project source repositories, workflows using Git and the Yocto Project, a Git primer, and information about licensing.
Yocto Project Concepts: This chapter presents various concepts regarding the Yocto Project. You can find conceptual information about components, development, cross-toolchains, and so forth.
This manual does not give you the following:
Step-by-step Instructions for Development Tasks: Instructional procedures reside in other manuals within the Yocto Project documentation set. For example, the Yocto Project Development Tasks Manual provides examples on how to perform various development tasks. As another example, the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual contains detailed instructions on how to install an SDK, which is used to develop applications for target hardware.
Reference Material: This type of material resides in an appropriate reference manual. For example, system variables are documented in the Yocto Project Reference Manual. As another example, the Yocto Project Board Support Package Developer’s Guide contains reference information on BSPs.
Detailed Public Information Not Specific to the Yocto Project: For example, exhaustive information on how to use the Source Control Manager Git is better covered with Internet searches and official Git Documentation than through the Yocto Project documentation.
1.2 Other Information
Because this manual presents information for many different topics, supplemental information is recommended for full comprehension. For additional introductory information on the Yocto Project, see the Yocto Project Website. If you want to build an image with no knowledge of Yocto Project as a way of quickly testing it out, see the Yocto Project Quick Build document. For a comprehensive list of links and other documentation, see the “Links and Related Documentation” section in the Yocto Project Reference Manual.
2 Introducing the Yocto Project
2.1 What is the Yocto Project?
The Yocto Project is an open source collaboration project that helps developers create custom Linux-based systems that are designed for embedded products regardless of the product’s hardware architecture. Yocto Project provides a flexible toolset and a development environment that allows embedded device developers across the world to collaborate through shared technologies, software stacks, configurations, and best practices used to create these tailored Linux images.
Thousands of developers worldwide have discovered that Yocto Project provides advantages in both systems and applications development, archival and management benefits, and customizations used for speed, footprint, and memory utilization. The project is a standard when it comes to delivering embedded software stacks. The project allows software customizations and build interchange for multiple hardware platforms as well as software stacks that can be maintained and scaled.
For further introductory information on the Yocto Project, you might be interested in this article by Drew Moseley and in this short introductory video.
The remainder of this section overviews advantages and challenges tied to the Yocto Project.
2.1.1 Features
Here are features and advantages of the Yocto Project:
Widely Adopted Across the Industry: Many semiconductor, operating system, software, and service vendors adopt and support the Yocto Project in their products and services. For a look at the Yocto Project community and the companies involved with the Yocto Project, see the “COMMUNITY” and “ECOSYSTEM” tabs on the Yocto Project home page.
Architecture Agnostic: Yocto Project supports Intel, ARM, MIPS, AMD, PPC and other architectures. Most ODMs, OSVs, and chip vendors create and supply BSPs that support their hardware. If you have custom silicon, you can create a BSP that supports that architecture.
Aside from broad architecture support, the Yocto Project fully supports a wide range of devices emulated by the Quick EMUlator (QEMU).
Images and Code Transfer Easily: Yocto Project output can easily move between architectures without moving to new development environments. Additionally, if you have used the Yocto Project to create an image or application and you find yourself not able to support it, commercial Linux vendors such as Wind River, Mentor Graphics, Timesys, and ENEA could take it and provide ongoing support. These vendors have offerings that are built using the Yocto Project.
Flexibility: Corporations use the Yocto Project many different ways. One example is to create an internal Linux distribution as a code base the corporation can use across multiple product groups. Through customization and layering, a project group can leverage the base Linux distribution to create a distribution that works for their product needs.
Ideal for Constrained Embedded and IoT devices: Unlike a full Linux distribution, you can use the Yocto Project to create exactly what you need for embedded devices. You only add the feature support or packages that you absolutely need for the device. For devices that have display hardware, you can use available system components such as X11, Wayland, GTK+, Qt, Clutter, and SDL (among others) to create a rich user experience. For devices that do not have a display or where you want to use alternative UI frameworks, you can choose to not build these components.
Comprehensive Toolchain Capabilities: Toolchains for supported architectures satisfy most use cases. However, if your hardware supports features that are not part of a standard toolchain, you can easily customize that toolchain through specification of platform-specific tuning parameters. And, should you need to use a third-party toolchain, mechanisms built into the Yocto Project allow for that.
Mechanism Rules Over Policy: Focusing on mechanism rather than policy ensures that you are free to set policies based on the needs of your design instead of adopting decisions enforced by some system software provider.
Uses a Layer Model: The Yocto Project layer infrastructure groups related functionality into separate bundles. You can incrementally add these grouped functionalities to your project as needed. Using layers to isolate and group functionality reduces project complexity and redundancy, allows you to easily extend the system, make customizations, and keep functionality organized.
Supports Partial Builds: You can build and rebuild individual packages as needed. Yocto Project accomplishes this through its Shared State Cache (sstate) scheme. Being able to build and debug components individually eases project development.
Releases According to a Strict Schedule: Major releases occur on a six-month cycle predictably in October and April. The most recent two releases support point releases to address common vulnerabilities and exposures. This predictability is crucial for projects based on the Yocto Project and allows development teams to plan activities.
Rich Ecosystem of Individuals and Organizations: For open source projects, the value of community is very important. Support forums, expertise, and active developers who continue to push the Yocto Project forward are readily available.
Binary Reproducibility: The Yocto Project allows you to be very specific about dependencies and achieves very high percentages of binary reproducibility (e.g. 99.8% for core-image-minimal). When distributions are not specific about which packages are pulled in and in what order to support dependencies, other build systems can arbitrarily include packages.
License Manifest: The Yocto Project provides a license manifest for review by people who need to track the use of open source licenses (e.g. legal teams).
2.1.2 Challenges
Here are challenges you might encounter when developing using the Yocto Project:
Steep Learning Curve: The Yocto Project has a steep learning curve and has many different ways to accomplish similar tasks. It can be difficult to choose between such ways.
Understanding What Changes You Need to Make For Your Design Requires Some Research: Beyond the simple tutorial stage, understanding what changes need to be made for your particular design can require a significant amount of research and investigation. For information that helps you transition from trying out the Yocto Project to using it for your project, see the “What I wish I’d known about Yocto Project” and “Transitioning to a custom environment for systems development” documents on the Yocto Project website.
Project Workflow Could Be Confusing: The Yocto Project workflow could be confusing if you are used to traditional desktop and server software development. In a desktop development environment, there are mechanisms to easily pull and install new packages, which are typically pre-compiled binaries from servers accessible over the Internet. Using the Yocto Project, you must modify your configuration and rebuild to add additional packages.
Working in a Cross-Build Environment Can Feel Unfamiliar: When developing code to run on a target, compilation, execution, and testing done on the actual target can be faster than running a BitBake build on a development host and then deploying binaries to the target for test. While the Yocto Project does support development tools on the target, the additional step of integrating your changes back into the Yocto Project build environment would be required. Yocto Project supports an intermediate approach that involves making changes on the development system within the BitBake environment and then deploying only the updated packages to the target.
The Yocto Project OpenEmbedded Build System produces packages in standard formats (i.e. RPM, DEB, IPK, and TAR). You can deploy these packages into the running system on the target by using utilities on the target such as rpm or opkg.
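As a rough sketch of that flow, assuming the IPK package backend is enabled, a target reachable at 192.168.7.2 with an SSH server, and illustrative package file names:
$ bitbake dropbear                              # build one recipe; packages land under tmp/deploy
$ ls tmp/deploy/ipk/*/dropbear*.ipk             # the exact subdirectory depends on the package architecture
$ scp tmp/deploy/ipk/*/dropbear*.ipk root@192.168.7.2:/tmp/
$ ssh root@192.168.7.2 opkg install /tmp/dropbear*.ipk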
Initial Build Times Can be Significant: Long initial build times are unfortunately unavoidable due to the large number of packages initially built from scratch for a fully functioning Linux system. Once that initial build is completed, however, the shared-state (sstate) cache mechanism Yocto Project uses keeps the system from rebuilding packages that have not been “touched” since the last build. The sstate mechanism significantly reduces times for successive builds.
2.2 The Yocto Project Layer Model
The Yocto Project’s “Layer Model” is a development model for embedded and IoT Linux creation that distinguishes the Yocto Project from other simple build systems. The Layer Model simultaneously supports collaboration and customization. Layers are repositories that contain related sets of instructions that tell the OpenEmbedded Build System what to do. You can collaborate, share, and reuse layers.
Layers can contain changes to previous instructions or settings at any time. This powerful override capability is what allows you to customize previously supplied collaborative or community layers to suit your product requirements.
You use different layers to logically separate information in your build. As an example, you could have BSP, GUI, distro configuration, middleware, or application layers. Putting your entire build into one layer limits and complicates future customization and reuse. Isolating information into layers, on the other hand, helps simplify future customizations and reuse. You might find it tempting to keep everything in one layer when working on a single project. However, the more modular your Metadata, the easier it is to cope with future changes.
Note
Use Board Support Package (BSP) layers from silicon vendors when possible.
Familiarize yourself with the Yocto Project Compatible Layers or the OpenEmbedded Layer Index. The latter contains more layers but they are less universally validated.
Layers support the inclusion of technologies, hardware components, and software components. The Yocto Project Compatible designation provides a minimum level of standardization that contributes to a strong ecosystem. “YP Compatible” is applied to appropriate products and software components such as BSPs, other OE-compatible layers, and related open-source projects, allowing the producer to use Yocto Project badges and branding assets.
To illustrate how layers are used to keep things modular, consider machine customizations. These types of customizations typically reside in a special layer, rather than a general layer, called a BSP Layer. Furthermore, the machine customizations should be isolated from recipes and Metadata that support a new GUI environment, for example. This situation gives you a couple of layers: one for the machine configurations, and one for the GUI environment. It is important to understand, however, that the BSP layer can still make machine-specific additions to recipes within the GUI environment layer without polluting the GUI layer itself with those machine-specific changes. You can accomplish this through a recipe that is a BitBake append (.bbappend) file, which is described later in this section.
Note
For general information on BSP layer structure, see the Yocto Project Board Support Package Developer’s Guide.
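A minimal sketch of such a machine-specific .bbappend, using hypothetical layer, recipe, and file names purely to illustrate the mechanism:
# meta-myboard/recipes-graphics/gui-settings/gui-settings_%.bbappend

# Let BitBake also search this layer's files/ directory for added files.
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"

# Ship a machine-specific configuration file, but only for the "myboard" machine,
# so other machines building the same GUI recipe are unaffected.
SRC_URI:append:myboard = " file://myboard-display.conf"

do_install:append:myboard() {
    install -d ${D}${sysconfdir}/gui
    install -m 0644 ${WORKDIR}/myboard-display.conf ${D}${sysconfdir}/gui/
}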
The Source Directory contains both general layers and BSP layers right out of the box. You can easily identify layers that ship with a Yocto Project release in the Source Directory by their names. Layers typically have names that begin with the string meta-.
Note
It is not a requirement that a layer name begin with the prefix meta-, but it is a commonly accepted standard in the Yocto Project community.
For example, if you were to examine the tree view of the poky repository, you would see several layers: meta, meta-skeleton, meta-selftest, meta-poky, and meta-yocto-bsp. Each of these repositories represents a distinct layer.
For procedures on how to create layers, see the “Understanding and Creating Layers” section in the Yocto Project Development Tasks Manual.
2.3 Components and Tools
The Yocto Project employs a collection of components and tools used by the project itself, by project developers, and by those using the Yocto Project. These components and tools are open source projects and metadata that are separate from the reference distribution (Poky) and the OpenEmbedded Build System. Most of the components and tools are downloaded separately.
This section provides brief overviews of the components and tools associated with the Yocto Project.
2.3.1 Development Tools
Here are tools that help you develop images and applications using the Yocto Project:
CROPS: CROPS is an open source, cross-platform development framework that leverages Docker Containers. CROPS provides an easily managed, extensible environment that allows you to build binaries for a variety of architectures on Windows, Linux and Mac OS X hosts.
devtool: This command-line tool is available as part of the extensible SDK (eSDK) and is its cornerstone. You can use devtool to help build, test, and package software within the eSDK. You can use the tool to optionally integrate what you build into an image built by the OpenEmbedded build system.
The devtool command employs a number of sub-commands that allow you to add, modify, and upgrade recipes. As with the OpenEmbedded build system, “recipes” represent software packages within devtool. When you use devtool add, a recipe is automatically created. When you use devtool modify, the specified existing recipe is used in order to determine where to get the source code and how to patch it. In both cases, an environment is set up so that when you build the recipe a source tree that is under your control is used in order to allow you to make changes to the source as desired. By default, both new recipes and the source go into a “workspace” directory under the eSDK. The devtool upgrade command updates an existing recipe so that you can build it for an updated set of source files.
You can read about the devtool workflow in the Yocto Project Application Development and Extensible Software Development Kit (eSDK) Manual in the “Using devtool in Your SDK Workflow” section.
Extensible Software Development Kit (eSDK): The eSDK provides a cross-development toolchain and libraries tailored to the contents of a specific image. The eSDK makes it easy to add new applications and libraries to an image, modify the source for an existing component, test changes on the target hardware, and integrate into the rest of the OpenEmbedded build system. The eSDK gives you a toolchain experience supplemented with the powerful set of devtool commands tailored for the Yocto Project environment.
For information on the eSDK, see the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) Manual.
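As a brief sketch of that workflow (the recipe name, source URL, target address, and layer path below are illustrative, not real):
$ devtool add myapp https://example.com/myapp-1.0.tar.gz   # create a workspace recipe from a source archive
$ devtool build myapp                                      # build it with the normal BitBake machinery
$ devtool deploy-target myapp root@192.168.7.2             # copy the built output onto a running target over SSH
$ devtool finish myapp ../meta-mylayer                     # move the finished recipe into one of your layers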
Toaster: Toaster is a web interface to the Yocto Project OpenEmbedded build system. Toaster allows you to configure, run, and view information about builds. For information on Toaster, see the Toaster User Manual.
VSCode IDE Extension: The Yocto Project BitBake extension for Visual Studio Code provides a rich set of features for working with BitBake recipes. The extension provides syntax highlighting, hover tips, and completion for BitBake files as well as embedded Python and Bash languages. Additional views and commands allow you to efficiently browse, build and edit recipes. It also provides SDK integration for cross-compiling and debugging through devtool.
Learn more about the VSCode Extension on the extension’s frontpage.
2.3.2 Production Tools
Here are tools that help with production related activities using the Yocto Project:
Auto Upgrade Helper: This utility when used in conjunction with the OpenEmbedded Build System (BitBake and OE-Core) automatically generates upgrades for recipes that are based on new versions of the recipes published upstream. See Using the Auto Upgrade Helper (AUH) for how to set it up.
Recipe Reporting System: The Recipe Reporting System tracks recipe versions available for Yocto Project. The main purpose of the system is to help you manage the recipes you maintain and to offer a dynamic overview of the project. The Recipe Reporting System is built on top of the OpenEmbedded Layer Index, which is a website that indexes OpenEmbedded-Core layers.
Patchwork: Patchwork is a fork of a project originally started by OzLabs. The project is a web-based tracking system designed to streamline the process of bringing contributions into a project. The Yocto Project uses Patchwork as an organizational tool to handle patches, which number in the thousands for every release.
AutoBuilder: AutoBuilder is a project that automates build tests and quality assurance (QA). By using the public AutoBuilder, anyone can determine the status of the current development branch of Poky.
Note
AutoBuilder is based on buildbot.
A goal of the Yocto Project is to lead the open source industry with a project that automates testing and QA procedures. In doing so, the project encourages a development community that publishes QA and test plans, publicly demonstrates QA and test plans, and encourages development of tools that automate test and QA procedures for the benefit of the development community.
You can learn more about the AutoBuilder used by the Yocto Project Autobuilder here.
Pseudo: Pseudo is the Yocto Project implementation of fakeroot, which is used to run commands in an environment that seemingly has root privileges.
During a build, it can be necessary to perform operations that require system administrator privileges. For example, file ownership or permissions might need to be defined. Pseudo is a tool that you can either use directly or through the environment variable LD_PRELOAD. Either method allows these operations to succeed even without system administrator privileges.
Thanks to Pseudo, the Yocto Project never needs root privileges to build images for your target system.
You can read more about Pseudo in the “Fakeroot and Pseudo” section.
2.3.3 OpenEmbedded Build System Components
Here are components associated with the OpenEmbedded Build System:
BitBake: BitBake is a core component of the Yocto Project and is used by the OpenEmbedded build system to build images. While BitBake is key to the build system, BitBake is maintained separately from the Yocto Project.
BitBake is a generic task execution engine that allows shell and Python tasks to be run efficiently and in parallel while working within complex inter-task dependency constraints. In short, BitBake is a build engine that works through recipes written in a specific format in order to perform sets of tasks.
You can learn more about BitBake in the BitBake User Manual.
OpenEmbedded-Core: OpenEmbedded-Core (OE-Core) is a common layer of metadata (i.e. recipes, classes, and associated files) used by OpenEmbedded-derived systems, which includes the Yocto Project. The Yocto Project and the OpenEmbedded Project both maintain the OpenEmbedded-Core. You can find the OE-Core metadata in the Yocto Project Source Repositories.
Historically, the Yocto Project integrated the OE-Core metadata throughout the Yocto Project source repository reference system (Poky). After Yocto Project Version 1.0, the Yocto Project and OpenEmbedded agreed to work together and share a common core set of metadata (OE-Core), which contained much of the functionality previously found in Poky. This collaboration achieved a long-standing OpenEmbedded objective for having a more tightly controlled and quality-assured core. The results also fit well with the Yocto Project objective of achieving a smaller number of fully featured tools as compared to many different ones.
Sharing a core set of metadata results in Poky as an integration layer on top of OE-Core. You can see that in this figure. The Yocto Project combines various components such as BitBake, OE-Core, script “glue”, and documentation for its build system.
2.3.4 Reference Distribution (Poky)
Poky is the Yocto Project reference distribution. It contains the OpenEmbedded Build System (BitBake and OE-Core) as well as a set of metadata to get you started building your own distribution. See the figure in “What is the Yocto Project?” section for an illustration that shows Poky and its relationship with other parts of the Yocto Project.
To use the Yocto Project tools and components, you can download (clone) Poky and use it to bootstrap your own distribution.
Note
Poky does not contain binary files. It is a working example of how to build your own custom Linux distribution from source.
You can read more about Poky in the “Reference Embedded Distribution (Poky)” section.
2.3.5 Packages for Finished Targets
Here are components associated with packages for finished targets:
Matchbox: Matchbox is an Open Source, base environment for the X Window System running on non-desktop, embedded platforms such as handhelds, set-top boxes, kiosks, and anything else for which screen space, input mechanisms, or system resources are limited.
Matchbox consists of a number of interchangeable and optional applications that you can tailor to a specific, non-desktop platform to enhance usability in constrained environments.
You can find the Matchbox source in the Yocto Project Source Repositories.
Opkg: Open PacKaGe management (opkg) is a lightweight package management system based on the itsy package (ipkg) management system. Opkg is written in C and resembles Advanced Package Tool (APT) and Debian Package (dpkg) in operation.
Opkg is intended for use on embedded Linux devices and is used in this capacity in the OpenEmbedded and OpenWrt projects, as well as the Yocto Project.
Note
As best it can, opkg maintains backwards compatibility with ipkg and conforms to a subset of Debian’s policy manual regarding control files.
You can find the opkg source in the Yocto Project Source Repositories.
2.3.6 Archived Components
The Build Appliance is a virtual machine image that enables you to build and boot a custom embedded Linux image with the Yocto Project using a non-Linux development system.
Historically, the Build Appliance was the second of three methods by which you could use the Yocto Project on a system that was not native to Linux.
Hob: Hob, which is deprecated and has not been available since the 2.1 release of the Yocto Project, provided a rudimentary, GUI-based interface to the Yocto Project. Toaster has fully replaced Hob.
Build Appliance: Post Hob, the Build Appliance became available. It was never recommended that you use the Build Appliance as a day-to-day production development environment with the Yocto Project. Build Appliance was useful as a way to try out development in the Yocto Project environment.
CROPS: The final and best solution available now for developing using the Yocto Project on a system not native to Linux is with CROPS.
2.4 Development Methods
The Yocto Project development environment usually involves a Build Host and target hardware. You use the Build Host to build images and develop applications, while you use the target hardware to execute deployed software.
This section provides an introduction to the choices or development methods you have when setting up your Build Host. Depending on your particular workflow preference and the type of operating system your Build Host runs, you have several choices.
Note
For additional detail about the Yocto Project development environment, see the “The Yocto Project Development Environment” chapter.
Native Linux Host: By far the best option for a Build Host. A system running Linux as its native operating system allows you to develop software by directly using the BitBake tool. You can accomplish all aspects of development from a regular shell in a supported Linux distribution.
For information on how to set up a Build Host on a system running Linux as its native operating system, see the “Setting Up a Native Linux Host” section in the Yocto Project Development Tasks Manual.
CROss PlatformS (CROPS): Typically, you use CROPS, which leverages Docker Containers, to set up a Build Host that is not running Linux (e.g. Microsoft Windows or macOS).
Note
You can, however, use CROPS on a Linux-based system.
CROPS is an open source, cross-platform development framework that provides an easily managed, extensible environment for building binaries targeted for a variety of architectures on Windows, macOS, or Linux hosts. Once the Build Host is set up using CROPS, you can prepare a shell environment to mimic that of a shell being used on a system natively running Linux.
For information on how to set up a Build Host with CROPS, see the “Setting Up to Use CROss PlatformS (CROPS)” section in the Yocto Project Development Tasks Manual.
Windows Subsystem For Linux (WSL 2): You may use Windows Subsystem For Linux version 2 to set up a Build Host using Windows 10 or later, or Windows Server 2019 or later.
The Windows Subsystem For Linux allows Windows to run a real Linux kernel inside of a lightweight virtual machine (VM).
For information on how to set up a Build Host with WSL 2, see the “Setting Up to Use Windows Subsystem For Linux (WSL 2)” section in the Yocto Project Development Tasks Manual.
Toaster: Regardless of what your Build Host is running, you can use Toaster to develop software using the Yocto Project. Toaster is a web interface to the Yocto Project’s OpenEmbedded Build System. The interface allows you to configure and run your builds. Information about builds is collected and stored in a database. You can use Toaster to configure and start builds on multiple remote build servers.
For information about and how to use Toaster, see the Toaster User Manual.
Using the VSCode Extension: You can use the Yocto Project BitBake extension for Visual Studio Code to start your BitBake builds through a graphical user interface.
Learn more about the VSCode Extension on the extension’s marketplace page.
2.5 Reference Embedded Distribution (Poky)
“Poky”, which is pronounced Pock-ee, is the name of the Yocto Project’s reference distribution or Reference OS Kit. Poky contains the OpenEmbedded Build System (BitBake and OpenEmbedded-Core (OE-Core)) as well as a set of Metadata to get you started building your own distro. In other words, Poky is a base specification of the functionality needed for a typical embedded system as well as the components from the Yocto Project that allow you to build a distribution into a usable binary image.
Poky is a combined repository of BitBake, OpenEmbedded-Core (which is found in meta), meta-poky, meta-yocto-bsp, and documentation, all provided together and known to work well together. You can view these items that make up the Poky repository in the Source Repositories.
Note
If you are interested in all the contents of the poky Git repository, see the “Top-Level Core Components” section in the Yocto Project Reference Manual.
The following figure illustrates what generally comprises Poky:
BitBake is a task executor and scheduler that is the heart of the OpenEmbedded build system.
meta-poky, which is Poky-specific metadata.
meta-yocto-bsp, which are Yocto Project-specific Board Support Packages (BSPs).
OpenEmbedded-Core (OE-Core) metadata, which includes shared configurations, global variable definitions, shared classes, packaging, and recipes. Classes define the encapsulation and inheritance of build logic. Recipes are the logical units of software and images to be built.
Documentation, which contains the Yocto Project source files used to make the set of user manuals.
Note
While Poky is a “complete” distribution specification and is tested and put through QA, you cannot use it as a product “out of the box” in its current form.
To use the Yocto Project tools, you can use Git to clone (download) the Poky repository then use your local copy of the reference distribution to bootstrap your own distribution.
Note
Poky does not contain binary files. It is a working example of how to build your own custom Linux distribution from source.
Poky has a regular, well established, six-month release cycle under its own version. Major releases occur at the same time major releases (point releases) occur for the Yocto Project, which are typically in the Spring and Fall. For more information on the Yocto Project release schedule and cadence, see the “Yocto Project Releases and the Stable Release Process” chapter in the Yocto Project Reference Manual.
Much has been said about Poky being a “default configuration”. A default configuration provides a starting image footprint. You can use Poky out of the box to create an image ranging from a shell-accessible minimal image all the way up to a Linux Standard Base-compliant image that uses a GNOME Mobile and Embedded (GMAE) based reference user interface called Sato.
One of the most powerful properties of Poky is that every aspect of a build is controlled by the metadata. You can use metadata to augment these base image types by adding metadata layers that extend functionality. These layers can provide, for example, an additional software stack for an image type, add a board support package (BSP) for additional hardware, or even create a new image type.
Metadata is loosely grouped into configuration files or package recipes. A recipe is a collection of non-executable metadata used by BitBake to set variables or define additional build-time tasks. A recipe contains fields such as the recipe description, the recipe version, the license of the package and the upstream source repository. A recipe might also indicate that the build process uses autotools, make, distutils or any other build process, in which case the basic functionality can be defined by the classes it inherits from the OE-Core layer’s class definitions in ./meta/classes. Within a recipe you can also define additional tasks as well as task prerequisites. Recipe syntax through BitBake also supports both :prepend and :append operators as a method of extending task functionality. These operators inject code into the beginning or end of a task. For information on these BitBake operators, see the “Appending and Prepending (Override Style Syntax)” section in the BitBake User’s Manual.
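For illustration only, here is a minimal recipe sketch; the project name, Git URL, and the hello.conf file are hypothetical, and the :append operator extends the standard do_install task (the MIT checksum shown is the one commonly used from meta/files/common-licenses):

SUMMARY = "Hypothetical example application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "git://git.example.com/hello.git;protocol=https;branch=main \
           file://hello.conf"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git"

# The autotools class from OE-Core supplies the configure/compile/install logic.
inherit autotools

# :append injects extra shell code at the end of the standard do_install task.
do_install:append() {
    install -d ${D}${sysconfdir}
    install -m 0644 ${WORKDIR}/hello.conf ${D}${sysconfdir}/hello.conf
}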
2.6 The OpenEmbedded Build System Workflow
The OpenEmbedded Build System uses a “workflow” to accomplish image and SDK generation. The following figure overviews that workflow:
Here is a brief summary of the “workflow”:
Developers specify architecture, policies, patches and configuration details.
The build system fetches and downloads the source code from the specified location. The build system supports standard methods such as tarballs or source code repository systems such as Git.
Once source code is downloaded, the build system extracts the sources into a local work area where patches are applied and common steps for configuring and compiling the software are run.
The build system then installs the software into a temporary staging area where the binary package format you select (DEB, RPM, or IPK) is used to roll up the software.
Different QA and sanity checks run throughout the entire build process.
After the binaries are created, the build system generates a binary package feed that is used to create the final root filesystem image.
The build system generates the file system image and a customized Extensible SDK (eSDK) for application development in parallel.
For a very detailed look at this workflow, see the “OpenEmbedded Build System Concepts” section.
2.7 Some Basic Terms
It helps to understand some basic fundamental terms when learning the Yocto Project. Although there is a list of terms in the “Yocto Project Terms” section of the Yocto Project Reference Manual, this section provides the definitions of some terms helpful for getting started:
Configuration Files: Files that hold global definitions of variables, user-defined variables, and hardware configuration information. These files tell the OpenEmbedded Build System what to build and what to put into the image to support a particular platform.
Extensible Software Development Kit (eSDK): A custom SDK for application developers. This eSDK allows developers to incorporate their library and programming changes back into the image to make their code available to other application developers. For information on the eSDK, see the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
Layer: A collection of related recipes. Layers allow you to consolidate related metadata to customize your build. Layers also isolate information used when building for multiple architectures. Layers are hierarchical in their ability to override previous specifications. You can include any number of available layers from the Yocto Project and customize the build by adding your own layers after them. You can search the Layer Index for layers used within Yocto Project.
For more detailed information on layers, see the “Understanding and Creating Layers” section in the Yocto Project Development Tasks Manual. For a discussion specifically on BSP Layers, see the “BSP Layers” section in the Yocto Project Board Support Packages (BSP) Developer’s Guide.
Metadata: A key element of the Yocto Project is the Metadata that is used to construct a Linux distribution and is contained in the files that the OpenEmbedded build system parses when building an image. In general, Metadata includes recipes, configuration files, and other information that refers to the build instructions themselves, as well as the data used to control what things get built and the effects of the build. Metadata also includes commands and data used to indicate what versions of software are used, from where they are obtained, and changes or additions to the software itself (patches or auxiliary files) that are used to fix bugs or customize the software for use in a particular situation. OpenEmbedded-Core is an important set of validated metadata.
OpenEmbedded Build System: The terms “BitBake” and “build system” are sometimes used for the OpenEmbedded Build System.
BitBake is a task scheduler and execution engine that parses instructions (i.e. recipes) and configuration data. After a parsing phase, BitBake creates a dependency tree to order the compilation, schedules the compilation of the included code, and finally executes the building of the specified custom Linux image (distribution). BitBake is similar to the make tool.
During a build process, the build system tracks dependencies and performs a native or cross-compilation of each package. As a first step in a cross-build setup, the framework attempts to create a cross-compiler toolchain (i.e. Extensible SDK) suited for the target platform.
OpenEmbedded-Core (OE-Core): OE-Core is metadata comprised of foundation recipes, classes, and associated files that are meant to be common among many different OpenEmbedded-derived systems, including the Yocto Project. OE-Core is a curated subset of an original repository developed by the OpenEmbedded community that has been pared down into a smaller, core set of continuously validated recipes. The result is a tightly controlled and quality-assured core set of recipes.
You can see the Metadata in the meta directory of the Yocto Project Source Repositories.
Packages: In the context of the Yocto Project, this term refers to a recipe’s packaged output produced by BitBake (i.e. a “baked recipe”). A package is generally the compiled binaries produced from the recipe’s sources. You “bake” something by running it through BitBake.
It is worth noting that the term “package” can, in general, have subtle meanings. For example, the packages referred to in the “Required Packages for the Build Host” section in the Yocto Project Reference Manual are compiled binaries that, when installed, add functionality to your host Linux distribution.
Another point worth noting is that historically within the Yocto Project, recipes were referred to as packages; thus, the existence of several BitBake variables that are seemingly mis-named (e.g. PR, PV, and PE).
Poky: Poky is a reference embedded distribution and a reference test configuration. Poky provides the following:
A base-level functional distro used to illustrate how to customize a distribution.
A means by which to test the Yocto Project components (i.e. Poky is used to validate the Yocto Project).
A vehicle through which you can download the Yocto Project.
Poky is not a product level distro. Rather, it is a good starting point for customization.
Note
Poky is an integration layer on top of OE-Core.
Recipe: The most common form of metadata. A recipe contains a list of settings and tasks (i.e. instructions) for building packages that are then used to build the binary image. A recipe describes where you get source code and which patches to apply. Recipes describe dependencies for libraries or for other recipes as well as configuration and compilation options. Related recipes are consolidated into a layer.
3 The Yocto Project Development Environment
This chapter takes a look at the Yocto Project development environment. The chapter provides Yocto Project Development environment concepts that help you understand how work is accomplished in an open source environment, which is very different as compared to work accomplished in a closed, proprietary environment.
Specifically, this chapter addresses open source philosophy, source repositories, workflows, Git, and licensing.
3.1 Open Source Philosophy
Open source philosophy is characterized by software development directed by peer production and collaboration through an active community of developers. Contrast this to the more standard centralized development models used by commercial software companies where a finite set of developers produces a product for sale using a defined set of procedures that ultimately result in an end product whose architecture and source material are closed to the public.
Open source projects conceptually have differing concurrent agendas, approaches, and production. These facets of the development process can come from anyone in the public (community) who has a stake in the software project. The open source environment contains new copyright, licensing, domain, and consumer issues that differ from the more traditional development environment. In an open source environment, the end product, source material, and documentation are all available to the public at no cost.
A benchmark example of an open source project is the Linux kernel, which was initially conceived and created by Finnish computer science student Linus Torvalds in 1991. Conversely, a good example of a non-open source project is the Windows family of operating systems developed by Microsoft Corporation.
Wikipedia has a good historical description of the Open Source Philosophy. You can also find helpful information on how to participate in the Linux Community here.
3.2 The Development Host
A development host or Build Host is key to using the Yocto Project. Because the goal of the Yocto Project is to develop images or applications that run on embedded hardware, development of those images and applications generally takes place on a system not intended to run the software — the development host.
You need to set up a development host in order to use it with the Yocto Project. Most find that it is best to have a native Linux machine function as the development host. However, it is possible to use a system that does not run Linux as its operating system as your development host. When you have a Mac or Windows-based system, you can set it up as the development host by using CROPS, which leverages Docker Containers. Once you take the steps to set up a CROPS machine, you effectively have access to a shell environment that is similar to what you see when using a Linux-based development host. For the steps needed to set up a system using CROPS, see the “Setting Up to Use CROss PlatformS (CROPS)” section in the Yocto Project Development Tasks Manual.
If your development host is going to be a system that runs a Linux distribution, you must still take steps to prepare the system for use with the Yocto Project. You need to be sure that the Linux distribution on the system is one that supports the Yocto Project. You also need to be sure that the correct set of host packages are installed that allow development using the Yocto Project. For the steps needed to set up a development host that runs Linux, see the “Setting Up a Native Linux Host” section in the Yocto Project Development Tasks Manual.
Once your development host is set up to use the Yocto Project, there are several ways of working in the Yocto Project environment:
Command Lines, BitBake, and Shells: Traditional development in the Yocto Project involves using the OpenEmbedded Build System, which uses BitBake, in a command-line environment from a shell on your development host. You can accomplish this from a host that is a native Linux machine or from a host that has been set up with CROPS. Either way, you create, modify, and build images and applications all within a shell-based environment using components and tools available through your Linux distribution and the Yocto Project.
For a general flow of the build procedures, see the “Building a Simple Image” section in the Yocto Project Development Tasks Manual.
Board Support Package (BSP) Development: Development of BSPs involves using the Yocto Project to create and test layers that allow easy development of images and applications targeted for specific hardware. To develop BSPs, you need to take some additional steps beyond what was described in setting up a development host.
The Yocto Project Board Support Package Developer’s Guide provides BSP-related development information. For specifics on development host preparation, see the “Preparing Your Build Host to Work With BSP Layers” section in the Yocto Project Board Support Package (BSP) Developer’s Guide.
Kernel Development: If you are going to be developing kernels using the Yocto Project, you likely will be using devtool. A workflow using devtool makes kernel development quicker by reducing iteration cycle times.
The Yocto Project Linux Kernel Development Manual provides kernel-related development information. For specifics on development host preparation, see the “Preparing the Build Host to Work on the Kernel” section in the Yocto Project Linux Kernel Development Manual.
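For instance, here is a hedged sketch of that workflow; the kernel recipe name (linux-yocto) may differ for your BSP:

$ source oe-init-build-env
$ devtool modify linux-yocto
$ # edit the sources devtool checked out under workspace/sources/linux-yocto
$ devtool build linux-yocto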
Using Toaster: The other Yocto Project development method that involves an interface that effectively puts the Yocto Project into the background is Toaster. Toaster provides an interface to the OpenEmbedded build system. The interface enables you to configure and run your builds. Information about builds is collected and stored in a database. You can use Toaster to configure and start builds on multiple remote build servers.
For steps that show you how to set up your development host to use Toaster and on how to use Toaster in general, see the Toaster User Manual.
Using the VSCode Extension: You can use the Yocto Project BitBake extension for Visual Studio Code to start your BitBake builds through a graphical user interface.
Learn more about the VSCode Extension on the extension’s marketplace page.
3.3 Yocto Project Source Repositories
The Yocto Project team maintains complete source repositories for all Yocto Project files at https://git.yoctoproject.org/. This web-based source code browser is organized into categories by function such as IDE Plugins, Matchbox, Poky, Yocto Linux Kernel, and so forth. From the interface, you can click on any particular item in the “Name” column and see the URL at the bottom of the page that you need to clone a Git repository for that particular item. Having a local Git repository of the Source Directory, which is usually named “poky”, allows you to make changes, contribute to the history, and ultimately enhance the Yocto Project’s tools, Board Support Packages, and so forth.
For any supported release of Yocto Project, you can also go to the
Yocto Project Website and select the “DOWNLOADS”
item from the “SOFTWARE” menu and get a released tarball of the poky
repository, any supported BSP tarball, or Yocto Project tools. Unpacking
these tarballs gives you a snapshot of the released files.
Note
The recommended method for setting up the Yocto Project Source Directory and the files for supported BSPs (e.g., meta-intel) is to use Git to create a local copy of the upstream repositories.
Be sure to always work in matching branches for both the selected BSP repository and the Source Directory (i.e. poky) repository. For example, if you have checked out the “scarthgap” branch of poky and you are going to use meta-intel, be sure to check out the “scarthgap” branch of meta-intel.
In summary, here is where you can get the project files needed for development:
Source Repositories: This area contains Poky, Yocto documentation, metadata layers, and the Linux kernel. You can create local copies of Git repositories for each of these areas.
For steps on how to view and access these upstream Git repositories, see the “Accessing Source Repositories” Section in the Yocto Project Development Tasks Manual.
Yocto release archives: This is where you can download tarballs corresponding to each Yocto Project release. Downloading and extracting these files does not produce a local copy of a Git repository but rather a snapshot corresponding to a particular release.
DOWNLOADS page: The Yocto Project website includes a “DOWNLOADS” page accessible through the “SOFTWARE” menu that allows you to download any Yocto Project release, tool, and Board Support Package (BSP) in tarball form. The hyperlinks point to the tarballs under https://downloads.yoctoproject.org/releases/yocto/.
For steps on how to use the “DOWNLOADS” page, see the “Using the Downloads Page” section in the Yocto Project Development Tasks Manual.
3.4 Git Workflows and the Yocto Project
Developing using the Yocto Project likely requires the use of Git. Git is a free, open source distributed version control system used as part of many collaborative design environments. This section provides workflow concepts using the Yocto Project and Git. In particular, the information covers basic practices that describe roles and actions in a collaborative development environment.
Note
If you are familiar with this type of development environment, you might not want to read this section.
The Yocto Project files are maintained using Git in “branches” whose Git histories track every change and whose structures provide branches for all diverging functionality. Although there is no need to use Git, many open source projects do so.
For the Yocto Project, a key individual called the “maintainer” is responsible for the integrity of the development branch of a given Git repository. The development branch is the “upstream” repository from which final or most recent builds of a project occur. The maintainer is responsible for accepting changes from other developers and for organizing the underlying branch structure to reflect release strategies and so forth.
Note
For information on finding out who is responsible for (maintains) a particular area of code in the Yocto Project, see the “Identify the component” section of the Yocto Project and OpenEmbedded Contributor Guide.
The Yocto Project poky Git repository also has an upstream contribution Git repository named poky-contrib. You can see all the branches in this repository using the web interface of the Source Repositories organized within the “Poky Support” area. These branches hold changes (commits) to the project that have been submitted or committed by the Yocto Project development team and by community members who contribute to the project. The maintainer determines if the changes are qualified to be moved from the “contrib” branches into the “master” branch of the Git repository.
Developers (including contributing community members) create and maintain cloned repositories of upstream branches. The cloned repositories are local to their development platforms and are used to develop changes. When a developer is satisfied with a particular feature or change, they “push” the change to the appropriate “contrib” repository.
Developers are responsible for keeping their local repository up-to-date with whatever upstream branch they are working against. They are also responsible for straightening out any conflicts that might arise within files that are being worked on simultaneously by more than one person. All this work is done locally on the development host before anything is pushed to a “contrib” area and examined at the maintainer’s level.
There is a somewhat formal method by which developers commit changes and push them into the “contrib” area and subsequently request that the maintainer include them into an upstream branch. This process is called “submitting a patch” or “submitting a change.” For information on submitting patches and changes, see the “Contributing Changes to a Component” section in the Yocto Project and OpenEmbedded Contributor Guide.
In summary, there is a single point of entry for changes into the development branch of the Git repository, which is controlled by the project’s maintainer. A set of developers independently develop, test, and submit changes to “contrib” areas for the maintainer to examine. The maintainer then chooses which changes are going to become a permanent part of the project.
While each development environment is unique, there are some best practices or methods that help development run smoothly. The following list describes some of these practices. For more information about Git workflows, see the workflow topics in the Git Community Book.
Make Small Changes: It is best to keep the changes you commit small as compared to bundling many disparate changes into a single commit. This practice not only keeps things manageable but also allows the maintainer to more easily include or refuse changes.
Make Complete Changes: It is also good practice to leave the repository in a state that allows you to still successfully build your project. In other words, do not commit half of a feature, then add the other half as a separate, later commit. Each commit should take you from one buildable project state to another buildable state.
Use Branches Liberally: It is very easy to create, use, and delete local branches in your working Git repository on the development host. You can name these branches anything you like. It is helpful to give them names associated with the particular feature or change on which you are working. Once you are done with a feature or change and have merged it into your local development branch, simply discard the temporary branch.
Merge Changes: The git merge command allows you to take the changes from one branch and fold them into another branch. This process is especially helpful when more than a single developer might be working on different parts of the same feature. Merging changes also automatically identifies any collisions or “conflicts” that might happen as a result of the same lines of code being altered by two different developers.
Manage Branches: Because branches are easy to use, you should use a system where branches indicate varying levels of code readiness. For example, you can have a “work” branch to develop in, a “test” branch where the code or change is tested, a “stage” branch where changes are ready to be committed, and so forth. As your project develops, you can merge code across the branches to reflect ever-increasing stable states of the development.
Use Push and Pull: The push-pull workflow is based on the concept of developers “pushing” local commits to a remote repository, which is usually a contribution repository. This workflow is also based on developers “pulling” known states of the project down into their local development repositories. The workflow easily allows you to pull changes submitted by other developers from the upstream repository into your work area, ensuring that you have the most recent software on which to develop. The Yocto Project has two scripts named create-pull-request and send-pull-request that ship with the release to facilitate this workflow. You can find these scripts in the scripts folder of the Source Directory. For information on how to use these scripts, see the “Using Scripts to Push a Change Upstream and Request a Pull” section in the Yocto Project and OpenEmbedded Contributor Guide.
Patch Workflow: This workflow allows you to notify the maintainer through an email that you have a change (or patch) you would like considered for the development branch of the Git repository. To send this type of change, you format the patch and then send the email using the Git commands git format-patch and git send-email. For information on how to use this workflow, see the “Contributing Changes to a Component” section in the Yocto Project and OpenEmbedded Contributor Guide.
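Here is a hedged sketch of that patch workflow; the file path and the mailing list address are placeholders you would replace with the real component and list:

$ git add meta/recipes-example/hello/hello_1.0.bb   # stage the change (hypothetical file)
$ git commit -s -m "hello: fix build failure with newer toolchains"
$ git format-patch -1                               # writes a 0001-hello-fix-build-failure...patch file
$ git send-email --to="<mailing list address>" 0001-*.patch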
3.5 Git
The Yocto Project makes extensive use of Git, which is a free, open source distributed version control system. Git supports distributed development, non-linear development, and can handle large projects. It is best that you have some fundamental understanding of how Git tracks projects and how to work with Git if you are going to use the Yocto Project for development. This section provides a quick overview of how Git works and provides you with a summary of some essential Git commands.
Note
For more information on Git, see https://git-scm.com/documentation.
If you need to download Git, it is recommended that you add Git to your system through your distribution’s “software store” (e.g. for Ubuntu, use the Ubuntu Software feature). For the Git download page, see https://git-scm.com/download.
For information beyond the introductory nature in this section, see the “Locating Yocto Project Source Files” section in the Yocto Project Development Tasks Manual.
3.5.2 Basic Commands
Git has an extensive set of commands that lets you manage changes and perform collaboration over the life of a project. Conveniently though, you can manage with a small set of basic operations and workflows once you understand the basic philosophy behind Git. You do not have to be an expert in Git to be functional. A good place to look for instruction on a minimal set of Git commands is here.
The following list of Git commands briefly describes some basic Git operations as a way to get started. As with any set of commands, this list (in most cases) simply shows the base command and omits the many arguments it supports. See the Git documentation for complete descriptions and strategies on how to use these commands:
git init: Initializes an empty Git repository. You cannot use Git commands unless you have a .git repository.
git clone: Creates a local clone of a Git repository that is on equal footing with a fellow developer’s Git repository or an upstream repository.
git add: Locally stages updated file contents to the index that Git uses to track changes. You must stage all files that have changed before you can commit them.
git commit: Creates a local “commit” that documents the changes you made. Only changes that have been staged can be committed. Commits are used for historical purposes, for determining if a maintainer of a project will allow the change, and for ultimately pushing the change from your local Git repository into the project’s upstream repository.
git status: Reports any modified files that possibly need to be staged and gives you a status of where you stand regarding local commits as compared to the upstream repository.
git checkout branch-name: Changes your local working branch and in this form assumes the local branch already exists. This command is analogous to “cd”.
git checkout -b working-branch upstream-branch: Creates and checks out a working branch on your local machine. The local branch tracks the upstream branch. You can use your local branch to isolate your work. It is a good idea to use local branches when adding specific features or changes. Using isolated branches facilitates easy removal of changes if they do not work out.
git branch: Displays the existing local branches associated with your local repository. The branch that you have currently checked out is noted with an asterisk character.
git branch -D branch-name: Deletes an existing local branch. You need to be in a local branch other than the one you are deleting in order to delete branch-name.
git pull --rebase: Retrieves information from an upstream Git repository and places it in your local Git repository. You use this command to make sure you are synchronized with the repository from which you are basing changes (e.g. the “scarthgap” branch). The --rebase option ensures that any local commits you have in your branch are preserved at the top of your local branch.
git push repo-name local-branch:upstream-branch: Sends all your committed local changes to the upstream Git repository that your local repository is tracking (e.g. a contribution repository). The maintainer of the project draws from these repositories to merge changes (commits) into the appropriate branch of the project’s upstream repository.
git merge: Combines or adds changes from one local branch of your repository with another branch. When you create a local Git repository, the default branch may be named “main”. A typical workflow is to create a temporary branch that is based off “main” that you would use for isolated work. You would make your changes in that isolated branch, stage and commit them locally, switch to the “main” branch, and then use the git merge command to apply the changes from your isolated branch into the currently checked out branch (e.g. “main”). After the merge is complete and if you are done with working in that isolated branch, you can safely delete the isolated branch.
git cherry-pick commits: Choose and apply specific commits from one branch into another branch. There are times when you might not be able to merge all the changes in one branch with another but need to pick out certain ones.
gitk: Provides a GUI view of the branches and changes in your local Git repository. This command is a good way to graphically see where things have diverged in your local repository.
Note
You need to install the gitk package on your development system to use this command.
git log: Reports a history of your commits to the repository. This report lists all commits regardless of whether you have pushed them upstream or not.
git diff: Displays line-by-line differences between a local working file and the same file as understood by Git. This command is useful to see what you have changed in any given file.
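Putting a few of these commands together, here is a short, hedged example sequence that clones Poky, creates an isolated working branch based on a release branch, and records a local commit:

$ git clone git://git.yoctoproject.org/poky
$ cd poky
$ git checkout -b my-feature origin/scarthgap   # local branch tracking the release branch
$ git status                                    # see which files you have modified
$ git add <modified file>
$ git commit -s -m "Short description of the change"
$ git log --oneline -3                          # confirm the new commit is on top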
3.6 Licensing
Because open source projects are open to the public, they have different licensing structures in place. License evolution for both Open Source and Free Software has an interesting history. If you are interested in this history, you can find basic information online.
In general, the Yocto Project is broadly licensed under the Massachusetts Institute of Technology (MIT) License. MIT licensing permits the reuse of software within proprietary software as long as the license is distributed with that software. Patches to the Yocto Project follow the upstream licensing scheme. You can find information on the MIT license here.
When you build an image using the Yocto Project, the build process uses a known list of licenses to ensure compliance. You can find this list in the Source Directory at meta/files/common-licenses. Once the build completes, the list of all licenses found and used during that build is kept in the Build Directory at tmp/deploy/licenses.
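As a quick, hedged illustration, you can inspect both locations (the first from the Source Directory, the second from the Build Directory after a build completes):

$ ls meta/files/common-licenses | wc -l   # base list of known licenses
$ ls tmp/deploy/licenses/                 # licenses found and used during the build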
If a module requires a license that is not in the base list, the build process generates a warning during the build. These tools make it easier for a developer to be certain of the licenses with which their shipped products must comply. However, even with these tools it is still up to the developer to resolve potential licensing issues.
The base list of licenses used by the build process is a combination of the Software Package Data Exchange (SPDX) list and the Open Source Initiative (OSI) projects. SPDX Group is a working group of the Linux Foundation that maintains a specification for a standard format for communicating the components, licenses, and copyrights associated with a software package. OSI is a corporation dedicated to the Open Source Definition and the effort for reviewing and approving licenses that conform to the Open Source Definition (OSD).
You can find a list of the combined SPDX and OSI licenses that the Yocto Project uses in the meta/files/common-licenses directory in your Source Directory.
For information that can help you maintain compliance with various open source licensing during the lifecycle of a product created using the Yocto Project, see the “Maintaining Open Source License Compliance During Your Product’s Lifecycle” section in the Yocto Project Development Tasks Manual.
4 Yocto Project Concepts
This chapter provides explanations for Yocto Project concepts that go beyond the surface of “how-to” information and reference (or look-up) material. Concepts such as components, the OpenEmbedded Build System workflow, cross-development toolchains, shared state cache, and so forth are explained.
4.1 Yocto Project Components
The BitBake task executor together with various types of configuration files form the OpenEmbedded-Core (OE-Core). This section overviews these components by describing their use and how they interact.
BitBake handles the parsing and execution of the data files. The data itself is of various types:
Recipes: Provides details about particular pieces of software.
Class Data: Abstracts common build information (e.g. how to build a Linux kernel).
Configuration Data: Defines machine-specific settings, policy decisions, and so forth. Configuration data acts as the glue to bind everything together.
BitBake knows how to combine multiple data sources together and refers to each data source as a layer. For information on layers, see the “Understanding and Creating Layers” section of the Yocto Project Development Tasks Manual.
Here are some brief details on these core components. For additional information on how these components interact during a build, see the “OpenEmbedded Build System Concepts” section.
4.1.1 BitBake
BitBake is the tool at the heart of the OpenEmbedded Build System and is responsible for parsing the Metadata, generating a list of tasks from it, and then executing those tasks.
This section briefly introduces BitBake. If you want more information on BitBake, see the BitBake User Manual.
To see a list of the options BitBake supports, use either of the following commands:
$ bitbake -h
$ bitbake --help
The most common usage for BitBake is bitbake recipename, where recipename is the name of the recipe you want to build (referred to as the “target”). The target often equates to the first part of a recipe’s filename (e.g. “foo” for a recipe named foo_1.3.0-r0.bb). So, to process the matchbox-desktop_1.2.3.bb recipe file, you might type the following:
$ bitbake matchbox-desktop
Several different versions of matchbox-desktop might exist. BitBake chooses the one selected by the distribution configuration. You can get more details about how BitBake chooses between different target versions and providers in the “Preferences” section of the BitBake User Manual.
BitBake also tries to execute any dependent tasks first. So for example, before building matchbox-desktop, BitBake would build a cross compiler and glibc if they had not already been built.
A useful BitBake option to consider is the -k or --continue option. This option instructs BitBake to try and continue processing the job as long as possible even after encountering an error. When an error occurs, the target that failed and those that depend on it cannot be remade. However, when you use this option other dependencies can still be processed.
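For example, assuming the core-image-minimal image recipe provided by OE-Core, the following command keeps the build going past individual failures where possible:

$ bitbake -k core-image-minimal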
4.1.2 Recipes
Files that have the .bb suffix are “recipe” files. In general, a recipe contains information about a single piece of software. This information includes the location from which to download the unaltered source, any source patches to be applied to that source (if needed), which special configuration options to apply, how to compile the source files, and how to package the compiled output.
The term “package” is sometimes used to refer to recipes. However, since the word “package” is used for the packaged output from the OpenEmbedded build system (i.e. .ipk or .deb files), this document avoids using the term “package” when referring to recipes.
4.1.3 Classes
Class files (.bbclass) contain information that is useful to share between recipe files. An example is the autotools class, which contains common settings for any application that is built with the GNU Autotools.
The “Classes” chapter in the Yocto Project Reference Manual provides details about classes and how to use them.
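As a hedged sketch of how a class shares functionality, here is a small hypothetical class file; the class name, variable, and task are invented for illustration:

# hello-defaults.bbclass -- hypothetical shared settings plus an extra task
HELLO_GREETING ?= "hello"

do_display_greeting() {
    bbnote "Greeting is ${HELLO_GREETING}"
}
addtask display_greeting after do_configure before do_compile

A recipe would then pick up these settings with a single inherit hello-defaults line.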
4.1.4 Configurations
The configuration files (.conf) define various configuration variables that govern the OpenEmbedded build process. These files fall into several areas that define machine configuration options, distribution configuration options, compiler tuning options, general common configuration options, and user configuration options in conf/local.conf, which is found in the Build Directory.
4.2 Layers
Layers are repositories that contain related metadata (i.e. sets of instructions) that tell the OpenEmbedded build system how to build a target. The Yocto Project Layer Model facilitates collaboration, sharing, customization, and reuse within the Yocto Project development environment. Layers logically separate information for your project. For example, you can use a layer to hold all the configurations for a particular piece of hardware. Isolating hardware-specific configurations allows you to share other metadata by using a different layer where that metadata might be common across several pieces of hardware.
There are many layers working in the Yocto Project development environment. The Yocto Project Compatible Layer Index and the OpenEmbedded Layer Index both contain layers you can use or leverage.
By convention, layers in the Yocto Project follow a specific form. Conforming to a known structure allows BitBake to make assumptions during builds on where to find types of metadata. You can find procedures and learn about tools (i.e. bitbake-layers) for creating layers suitable for the Yocto Project in the “Understanding and Creating Layers” section of the Yocto Project Development Tasks Manual.
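As a brief, hedged illustration (run after sourcing the build environment script; meta-mylayer is a hypothetical layer name), bitbake-layers can create a skeleton layer, add it to your build, and list the layers in use:

$ bitbake-layers create-layer ../meta-mylayer
$ bitbake-layers add-layer ../meta-mylayer
$ bitbake-layers show-layers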
4.3 OpenEmbedded Build System Concepts
This section takes a more detailed look inside the build process used by the OpenEmbedded Build System, which is the build system specific to the Yocto Project. At the heart of the build system is BitBake, the task executor.
The following diagram represents the high-level workflow of a build. The remainder of this section expands on the fundamental input, output, process, and metadata logical blocks that make up the workflow.
In general, the build’s workflow consists of several functional areas:
User Configuration: metadata you can use to control the build process.
Metadata Layers: Various layers that provide software, machine, and distro metadata.
Source Files: Upstream releases, local projects, and SCMs.
Build System: Processes under the control of BitBake. This block expands on how BitBake fetches source, applies patches, completes compilation, analyzes output for package generation, creates and tests packages, generates images, and generates cross-development tools.
Package Feeds: Directories containing output packages (RPM, DEB or IPK), which are subsequently used in the construction of an image or Software Development Kit (SDK), produced by the build system. These feeds can also be copied and shared using a web server or other means to facilitate extending or updating existing images on devices at runtime if runtime package management is enabled.
Images: Images produced by the workflow.
Application Development SDK: Cross-development tools that are produced along with an image or separately with BitBake.
4.3.1 User Configuration
User configuration helps define the build. Through user configuration, you can tell BitBake the target architecture for which you are building the image, where to store downloaded source, and other build properties.
The following figure shows an expanded representation of the “User Configuration” box of the general workflow figure:
BitBake needs some basic configuration files in order to complete a build. These files are *.conf files. The minimally necessary ones reside as example files in the build/conf directory of the Source Directory. For simplicity, this section refers to the Source Directory as the “Poky Directory.”
When you clone the Poky Git repository or you download and unpack a Yocto Project release, you can set up the Source Directory to be named anything you want. For this discussion, the cloned repository uses the default name poky.
Note
The Poky repository is primarily an aggregation of existing repositories. It is not a canonical upstream source.
The meta-poky layer inside Poky contains a conf directory that has example configuration files. These example files are used as a basis for creating actual configuration files when you source oe-init-build-env, which is the build environment script.
Sourcing the build environment script creates a Build Directory if one does not already exist. BitBake uses the Build Directory for all its work during builds. The Build Directory has a conf directory that contains default versions of your local.conf and bblayers.conf configuration files. These default configuration files are created only if versions do not already exist in the Build Directory at the time you source the build environment setup script.
Because the Poky repository is fundamentally an aggregation of existing repositories, some users might be familiar with running the oe-init-build-env script in the context of separate OpenEmbedded-Core (OE-Core) and BitBake repositories rather than a single Poky repository. This discussion assumes the script is executed from within a cloned or unpacked version of Poky.
Depending on where the script is sourced, different sub-scripts are called to set up the Build Directory (Yocto or OpenEmbedded). Specifically, the script scripts/oe-setup-builddir inside the poky directory sets up the Build Directory and seeds the directory (if necessary) with configuration files appropriate for the Yocto Project development environment.
Note
The scripts/oe-setup-builddir script uses the $TEMPLATECONF variable to determine which sample configuration files to locate.
The local.conf file provides many basic variables that define a build environment. Here is a list of a few, with a brief illustrative local.conf fragment following the list. To see the default configurations in a local.conf file created by the build environment script, see the local.conf.sample file in the meta-poky layer:
Target Machine Selection: Controlled by the MACHINE variable.
Download Directory: Controlled by the DL_DIR variable.
Shared State Directory: Controlled by the SSTATE_DIR variable.
Build Output: Controlled by the TMPDIR variable.
Distribution Policy: Controlled by the DISTRO variable.
Packaging Format: Controlled by the PACKAGE_CLASSES variable.
SDK Target Architecture: Controlled by the SDKMACHINE variable.
Extra Image Packages: Controlled by the EXTRA_IMAGE_FEATURES variable.
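Here is the promised local.conf fragment showing how those variables might be set; the values are illustrative defaults, not recommendations:

MACHINE ?= "qemux86-64"
DL_DIR ?= "${TOPDIR}/downloads"
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
TMPDIR = "${TOPDIR}/tmp"
DISTRO ?= "poky"
PACKAGE_CLASSES ?= "package_rpm"
SDKMACHINE ?= "x86_64"
EXTRA_IMAGE_FEATURES ?= "debug-tweaks"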
Note
Configurations set in the conf/local.conf file can also be set in the conf/site.conf and conf/auto.conf configuration files.
The bblayers.conf file tells BitBake what layers you want considered during the build. By default, the layers listed in this file include layers minimally needed by the build system. However, you must manually add any custom layers you have created. You can find more information on working with the bblayers.conf file in the “Enabling Your Layer” section in the Yocto Project Development Tasks Manual.
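For illustration (the absolute paths and the custom meta-mylayer entry are hypothetical), a bblayers.conf file lists the layers to consider through the BBLAYERS variable:

BBLAYERS ?= " \
  /home/user/poky/meta \
  /home/user/poky/meta-poky \
  /home/user/poky/meta-yocto-bsp \
  /home/user/poky/meta-mylayer \
  "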
The files site.conf and auto.conf are not created by the environment initialization script. If you want the site.conf file, you need to create it yourself. The auto.conf file is typically created by an autobuilder:
site.conf: You can use the conf/site.conf configuration file to configure multiple build directories. For example, suppose you had several build environments and they shared some common features. You can set these default build properties here. A good example is perhaps the packaging format to use through the PACKAGE_CLASSES variable.
auto.conf: The file is usually created and written to by an autobuilder. The settings put into the file are typically the same as you would find in the conf/local.conf or the conf/site.conf files.
You can edit all configuration files to further define any particular build environment. This process is represented by the “User Configuration Edits” box in the figure.
When you launch your build with the bitbake target command, BitBake sorts out the configurations to ultimately define your build environment. It is important to understand that the OpenEmbedded Build System reads the configuration files in a specific order: site.conf, auto.conf, and local.conf. And, the build system applies the normal assignment statement rules as described in the “Syntax and Operators” chapter of the BitBake User Manual. Because the files are parsed in a specific order, variable assignments for the same variable could be affected. For example, if the auto.conf file and the local.conf file set variable1 to different values, because the build system parses local.conf after auto.conf, variable1 is assigned the value from the local.conf file.
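As a hedged illustration of that parse order, suppose both files assign the same variable; the local.conf assignment takes effect because that file is parsed last:

# conf/auto.conf (typically written by an autobuilder)
IMAGE_FSTYPES = "tar.bz2"

# conf/local.conf (parsed after auto.conf, so this value wins)
IMAGE_FSTYPES = "ext4 wic"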
4.3.2 Metadata, Machine Configuration, and Policy Configuration
The previous section described the user configurations that define BitBake’s global behavior. This section takes a closer look at the layers the build system uses to further control the build. These layers provide Metadata for the software, machine, and policies.
In general, there are three types of layer input. You can see them below the “User Configuration” box in the general workflow figure:
Metadata (.bb + Patches): Software layers containing user-supplied recipe files, patches, and append files. A good example of a software layer might be the meta-qt5 layer from the OpenEmbedded Layer Index. This layer is for version 5.0 of the popular Qt cross-platform application development framework for desktop, embedded and mobile.
Machine BSP Configuration: Board Support Package (BSP) layers (i.e. “BSP Layer” in the following figure) providing machine-specific configurations. This type of information is specific to a particular target architecture. A good example of a BSP layer from the Reference Distribution (Poky) is the meta-yocto-bsp layer.
Policy Configuration: Distribution Layers (i.e. “Distro Layer” in the following figure) providing top-level or general policies for the images or SDKs being built for a particular distribution. For example, in the Poky Reference Distribution the distro layer is the meta-poky layer. Within the distro layer is a conf/distro directory that contains distro configuration files (e.g. poky.conf) that contain many policy configurations for the Poky distribution.
The following figure shows an expanded representation of these three layers from the general workflow figure:
In general, all layers have a similar structure. They all contain a licensing file (e.g. COPYING.MIT) if the layer is to be distributed, a README file as good practice and especially if the layer is to be distributed, a configuration directory, and recipe directories. You can learn about the general structure for layers used with the Yocto Project in the “Creating Your Own Layer” section in the Yocto Project Development Tasks Manual. For a general discussion on layers and the many layers from which you can draw, see the “Layers” and “The Yocto Project Layer Model” sections both earlier in this manual.
If you explored the previous links, you discovered some areas where many layers that work with the Yocto Project exist. The Source Repositories also shows layers categorized under “Yocto Metadata Layers.”
Note
There are layers in the Yocto Project Source Repositories that cannot be found in the OpenEmbedded Layer Index. Such layers are either deprecated or experimental in nature.
BitBake uses the conf/bblayers.conf file, which is part of the user configuration, to find what layers it should be using as part of the build.
4.3.2.1 Distro Layer
The distribution layer provides policy configurations for your distribution. Best practices dictate that you isolate these types of configurations into their own layer. Settings you provide in conf/distro/distro.conf override similar settings that BitBake finds in your conf/local.conf file in the Build Directory.
The following list provides some explanation and references for what you typically find in the distribution layer:
classes: Class files (.bbclass) hold common functionality that can be shared among recipes in the distribution. When your recipes inherit a class, they take on the settings and functions for that class. You can read more about class files in the “Classes” chapter of the Yocto Reference Manual.
conf: This area holds configuration files for the layer (conf/layer.conf), the distribution (conf/distro/distro.conf), and any distribution-wide include files.
recipes-*: Recipes and append files that affect common functionality across the distribution. This area could include recipes and append files to add distribution-specific configuration, initialization scripts, custom image recipes, and so forth. Examples of recipes-* directories are recipes-core and recipes-extra. Hierarchy and contents within a recipes-* directory can vary. Generally, these directories contain recipe files (*.bb), recipe append files (*.bbappend), directories that are distro-specific for configuration files, and so forth.
4.3.2.2 BSP Layer
The BSP Layer provides machine configurations that target specific hardware. Everything in this layer is specific to the machine for which you are building the image or the SDK. A common structure or form is defined for BSP layers. You can learn more about this structure in the Yocto Project Board Support Package Developer’s Guide.
Note
In order for a BSP layer to be considered compliant with the Yocto Project, it must meet some structural requirements.
The BSP Layer’s configuration directory contains configuration files for the machine (conf/machine/machine.conf) and, of course, the layer (conf/layer.conf).
The remainder of the layer is dedicated to specific recipes by function: recipes-bsp, recipes-core, recipes-graphics, recipes-kernel, and so forth. There can be metadata for multiple form factors, graphics support systems, and so forth.
Note
While the figure shows several recipes-* directories, not all these directories appear in all BSP layers.
4.3.2.3 Software Layer
The software layer provides the Metadata for additional software packages used during the build. This layer does not include Metadata that is specific to the distribution or the machine, which are found in their respective layers.
This layer contains any recipes, append files, and patches that your project needs.
4.3.3 Sources
In order for the OpenEmbedded build system to create an image or any target, it must be able to access source files. The general workflow figure represents source files using the “Upstream Project Releases”, “Local Projects”, and “SCMs (optional)” boxes. The figure represents mirrors, which also play a role in locating source files, with the “Source Materials” box.
The method by which source files are ultimately organized is a function of the project. For example, for released software, projects tend to use tarballs or other archived files that can capture the state of a release guaranteeing that it is statically represented. On the other hand, for a project that is more dynamic or experimental in nature, a project might keep source files in a repository controlled by a Source Control Manager (SCM) such as Git. Pulling source from a repository allows you to control the point in the repository (the revision) from which you want to build software. A combination of the two is also possible.
BitBake uses the SRC_URI variable to point to source files regardless of their location. Each recipe must have a SRC_URI variable that points to the source.
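For illustration, here are two hedged SRC_URI examples, one for a release tarball and one for a Git repository (the URLs are placeholders, not real projects):
# released tarball
SRC_URI = "https://example.com/releases/hello-1.0.tar.gz"
# or, alternatively, a Git repository
SRC_URI = "git://example.com/hello.git;protocol=https;branch=main"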
Another area that plays a significant role in where source files come from is pointed to by the DL_DIR variable. This area is a cache that can hold previously downloaded source. You can also instruct the OpenEmbedded build system to create tarballs from Git repositories, which is not the default behavior, and store them in the DL_DIR by using the BB_GENERATE_MIRROR_TARBALLS variable.
Judicious use of a DL_DIR directory can save the build system a trip across the Internet when looking for files. A good method for using a download directory is to have DL_DIR point to an area outside of your Build Directory. Doing so allows you to safely delete the Build Directory if needed without fear of removing any downloaded source file.
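For example, a local.conf fragment along the following lines (the path is only an example) keeps downloads outside the Build Directory and asks BitBake to archive Git clones as tarballs in that same area:
DL_DIR = "/home/user/yocto-downloads"
BB_GENERATE_MIRROR_TARBALLS = "1"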
The remainder of this section provides a deeper look into the source files and the mirrors. Here is a more detailed look at the source file area of the general workflow figure:
4.3.3.1 Upstream Project Releases
Upstream project releases exist anywhere in the form of an archived file (e.g. tarball or zip file). These files correspond to individual recipes. For example, the figure uses specific releases each for BusyBox, Qt, and Dbus. An archive file can be for any released product that can be built using a recipe.
4.3.3.2 Local Projects
Local projects are custom bits of software the user provides. These bits reside somewhere local to a project — perhaps a directory into which the user checks in items (e.g. a local directory containing a development source tree used by the group).
The canonical method through which to include a local project is to use the externalsrc class to include that local project. You use either the local.conf or a recipe’s append file to override or set the recipe to point to the local directory on your disk to pull in the whole source tree.
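As a minimal sketch, assuming a recipe named myrecipe and a local source tree under /home/user/src/myproject (both hypothetical), the following local.conf fragment points the recipe at that tree:
INHERIT += "externalsrc"
EXTERNALSRC:pn-myrecipe = "/home/user/src/myproject"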
4.3.3.3 Source Control Managers (Optional)
Another place from which the build system can get source files is with Fetchers employing various Source Control Managers (SCMs) such as Git or Subversion. In such cases, a repository is cloned or checked out. The do_fetch task inside BitBake uses the SRC_URI variable and the argument’s prefix to determine the correct fetcher module.
Note
For information on how to have the OpenEmbedded build system generate tarballs for Git repositories and place them in the DL_DIR directory, see the BB_GENERATE_MIRROR_TARBALLS variable in the Yocto Project Reference Manual.
When fetching a repository, BitBake uses the SRCREV variable to determine the specific revision from which to build.
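For illustration, a recipe fetching from a hypothetical Git repository might pin the revision like this:
SRC_URI = "git://example.com/myproject.git;protocol=https;branch=main"
SRCREV = "0123456789abcdef0123456789abcdef01234567"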
4.3.3.4 Source Mirror(s)
There are two kinds of mirrors: pre-mirrors and regular mirrors. The PREMIRRORS and MIRRORS variables point to these, respectively. BitBake checks pre-mirrors before looking upstream for any source files. Pre-mirrors are appropriate when you have a shared directory that is not a directory defined by the DL_DIR variable. A Pre-mirror typically points to a shared directory that is local to your organization.
Regular mirrors can be any site across the Internet that is used as an alternative location for source code should the primary site not be functioning for some reason or another.
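As a sketch, an organization could prepend its own internal mirror (the URL is a placeholder) so it is consulted before the upstream locations:
PREMIRRORS:prepend = "\
    git://.*/.* http://downloads.example.com/mirror/sources/ \
    ftp://.*/.* http://downloads.example.com/mirror/sources/ \
    http://.*/.* http://downloads.example.com/mirror/sources/ \
    https://.*/.* http://downloads.example.com/mirror/sources/"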
4.3.4 Package Feeds
When the OpenEmbedded build system generates an image or an SDK, it gets the packages from a package feed area located in the Build Directory. The general workflow figure shows this package feeds area in the upper-right corner.
This section looks a little closer into the package feeds area used by the build system. Here is a more detailed look at the area:
Package feeds are an intermediary step in the build process. The OpenEmbedded build system provides classes to generate different package types, and you specify which classes to enable through the PACKAGE_CLASSES variable. Before placing the packages into package feeds, the build process validates them with generated output quality assurance checks through the insane class.
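For example, selecting the IPK backend in conf/local.conf looks like this (package_rpm is the default in Poky's sample configuration):
PACKAGE_CLASSES ?= "package_ipk"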
The package feed area resides in the Build Directory. The directory the build system uses to temporarily store packages is determined by a combination of variables and the particular package manager in use. See the “Package Feeds” box in the illustration and note the information to the right of that area. In particular, the following defines where package files are kept:
DEPLOY_DIR: Defined as tmp/deploy in the Build Directory.
DEPLOY_DIR_*: Depending on the package manager used, the package type sub-folder. Given RPM, IPK, or DEB packaging and tarball creation, the DEPLOY_DIR_RPM, DEPLOY_DIR_IPK, or DEPLOY_DIR_DEB variables are used, respectively.
PACKAGE_ARCH: Defines architecture-specific sub-folders. For example, packages could be available for the i586 or qemux86 architectures.
BitBake uses the do_package_write_* tasks to generate packages and place them into the package holding area (e.g. do_package_write_ipk for IPK packages). See the “do_package_write_deb”, “do_package_write_ipk”, and “do_package_write_rpm” sections in the Yocto Project Reference Manual for additional information. As an example, consider a scenario where an IPK packaging manager is being used and there is package architecture support for both i586 and qemux86. Packages for the i586 architecture are placed in build/tmp/deploy/ipk/i586, while packages for the qemux86 architecture are placed in build/tmp/deploy/ipk/qemux86.
4.3.5 BitBake Tool
The OpenEmbedded build system uses BitBake to produce images and Software Development Kits (SDKs). As you can see from the general workflow figure, the BitBake area consists of several functional areas. This section takes a closer look at each of those areas.
Note
Documentation for the BitBake tool is available separately. See the BitBake User Manual for reference material on BitBake.
4.3.5.1 Source Fetching
The first stages of building a recipe are to fetch and unpack the source code:
The do_fetch and do_unpack tasks fetch the source files and unpack them into the Build Directory.
Note
For every local file (e.g. file://) that is part of a recipe’s SRC_URI statement, the OpenEmbedded build system takes a checksum of the file for the recipe and inserts the checksum into the signature for the do_fetch task. If any local file has been modified, the do_fetch task and all tasks that depend on it are re-executed.
By default, everything is accomplished in the Build Directory, which has a defined structure. For additional general information on the Build Directory, see the “build/” section in the Yocto Project Reference Manual.
Each recipe has an area in the Build Directory where the unpacked source code resides. The S variable points to this area for a recipe’s unpacked source code. The name of that directory for any given recipe is defined from several different variables. The preceding figure and the following list describe the Build Directory’s hierarchy:
TMPDIR: The base directory where the OpenEmbedded build system performs all its work during the build. The default base directory is the tmp directory.
PACKAGE_ARCH: The architecture of the built package or packages. Depending on the eventual destination of the package or packages (i.e. machine architecture, Build Host, SDK, or specific machine), PACKAGE_ARCH varies. See the variable’s description for details.
TARGET_OS: The operating system of the target device. A typical value would be “linux” (e.g. “qemux86-poky-linux”).
PN: The name of the recipe used to build the package. This variable can have multiple meanings. However, when used in the context of input files, PN represents the name of the recipe.
WORKDIR: The location where the OpenEmbedded build system builds a recipe (i.e. does the work to create the package).
S: Contains the unpacked source files for a given recipe.
Note
In the previous figure, notice that there are two sample hierarchies: one based on package architecture (i.e. PACKAGE_ARCH) and one based on a machine (i.e. MACHINE). The underlying structures are identical. The differentiator is what the OpenEmbedded build system uses as a build target (e.g. general architecture, a build host, an SDK, or a specific machine).
4.3.5.2 Patching
Once source code is fetched and unpacked, BitBake locates patch files and applies them to the source files:
The do_patch task uses a recipe’s SRC_URI statements and the FILESPATH variable to locate applicable patch files.
Default processing for patch files assumes the files have either *.patch or *.diff file types. You can use SRC_URI parameters to change the way the build system recognizes patch files. See the do_patch task for more information.
BitBake finds and applies multiple patches for a single recipe in the order in which it locates the patches. The FILESPATH variable defines the default set of directories that the build system uses to search for patch files. Once found, patches are applied to the recipe’s source files, which are located in the S directory.
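For illustration, a recipe carrying a local patch simply lists it in SRC_URI, and do_patch applies it after do_unpack; the URL and patch file name below are hypothetical:
SRC_URI = "https://example.com/releases/hello-1.0.tar.gz \
           file://0001-fix-build-failure.patch \
           "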
For more information on how the source directories are created, see the “Source Fetching” section. For more information on how to create patches and how the build system processes patches, see the “Patching Code” section in the Yocto Project Development Tasks Manual. You can also see the “Use devtool modify to Modify the Source of an Existing Component” section in the Yocto Project Application Development and the Extensible Software Development Kit (SDK) manual and the “Using Traditional Kernel Development to Patch the Kernel” section in the Yocto Project Linux Kernel Development Manual.
4.3.5.3 Configuration, Compilation, and Staging
After source code is patched, BitBake executes tasks that configure and compile the source code. Once compilation occurs, the files are copied to a holding area (staged) in preparation for packaging:
This step in the build process consists of the following tasks:
do_prepare_recipe_sysroot: This task sets up the two sysroots in ${WORKDIR} (i.e. recipe-sysroot and recipe-sysroot-native) so that during the packaging phase the sysroots can contain the contents of the do_populate_sysroot tasks of the recipes on which the recipe containing the tasks depends. A sysroot exists for both the target and for the native binaries, which run on the host system.
do_configure: This task configures the source by enabling and disabling any build-time and configuration options for the software being built. Configurations can come from the recipe itself as well as from an inherited class. Additionally, the software itself might configure itself depending on the target for which it is being built.
The configurations handled by the do_configure task are specific to configurations for the source code being built by the recipe.
If you are using the autotools* class, you can add additional configuration options by using the EXTRA_OECONF or PACKAGECONFIG_CONFARGS variables. For information on how these variables work within that class, see the autotools* class reference; a brief sketch follows this list.
do_compile: Once a configuration task has been satisfied, BitBake compiles the source using the do_compile task. Compilation occurs in the directory pointed to by the B variable. Realize that the B directory is, by default, the same as the S directory.
do_install: After compilation completes, BitBake executes the do_install task. This task copies files from the B directory and places them in a holding area pointed to by the D variable. Packaging occurs later using files from this holding directory.
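As the brief sketch promised above, an autotools-based recipe could pass extra configure options like the following; the options themselves are hypothetical and depend entirely on the software's configure script:
EXTRA_OECONF += "--disable-static --without-docs"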
4.3.5.4 Package Splitting
After source code is configured, compiled, and staged, the build system analyzes the results and splits the output into packages:
The do_package and do_packagedata tasks combine to analyze the files found in the D directory and split them into subsets based on available packages and files. Analysis involves the following as well as other items: splitting out debugging symbols, looking at shared library dependencies between packages, and looking at package relationships.
The do_packagedata task creates package metadata based on the analysis such that the build system can generate the final packages. The do_populate_sysroot task stages (copies) a subset of the files installed by the do_install task into the appropriate sysroot. Working, staged, and intermediate results of the analysis and package splitting process use several areas:
PKGD: The destination directory (i.e. package) for packages before they are split into individual packages.
PKGDESTWORK: A temporary work area (i.e. pkgdata) used by the do_package task to save package metadata.
PKGDEST: The parent directory (i.e. packages-split) for packages after they have been split.
PKGDATA_DIR: A shared, global-state directory that holds packaging metadata generated during the packaging process. The packaging process copies metadata from PKGDESTWORK to the PKGDATA_DIR area where it becomes globally available.
STAGING_DIR_HOST: The path for the sysroot for the system on which a component is built to run (i.e. recipe-sysroot).
STAGING_DIR_NATIVE: The path for the sysroot used when building components for the build host (i.e. recipe-sysroot-native).
STAGING_DIR_TARGET: The path for the sysroot used when building a component that executes on one system while generating code for yet another machine (e.g. cross-canadian recipes).
The FILES variable defines the files that go into each package in PACKAGES. If you want details on how this is accomplished, you can look at package.bbclass.
Depending on the type of packages being created (RPM, DEB, or IPK), the do_package_write_* task creates the actual packages and places them in the Package Feed area, which is ${TMPDIR}/deploy. You can see the “Package Feeds” section for more detail on that part of the build process.
Note
Support for creating feeds directly from the deploy/* directories does not exist. Creating such feeds usually requires some kind of feed maintenance mechanism that would upload the new packages into an official package feed (e.g. the Ångström distribution). This functionality is highly distribution-specific and thus is not provided out of the box.
4.3.5.5 Image Generation
Once packages are split and stored in the Package Feeds area, the build system uses BitBake to generate the root filesystem image:
The image generation process consists of several stages and depends on several tasks and variables. The do_rootfs task creates the root filesystem (file and directory structure) for an image. This task uses several key variables to help create the list of packages to actually install:
IMAGE_INSTALL: Lists out the base set of packages to install from the Package Feeds area.
PACKAGE_EXCLUDE: Specifies packages that should not be installed into the image.
IMAGE_FEATURES: Specifies features to include in the image. Most of these features map to additional packages for installation.
PACKAGE_CLASSES: Specifies the package backend (e.g. RPM, DEB, or IPK) to use and consequently helps determine where to locate packages within the Package Feeds area.
IMAGE_LINGUAS: Determines the language(s) for which additional language support packages are installed.
PACKAGE_INSTALL: The final list of packages passed to the package manager for installation into the image.
With IMAGE_ROOTFS pointing to the location of the filesystem under construction and the PACKAGE_INSTALL variable providing the final list of packages to install, the root file system is created.
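As a hedged example, a conf/local.conf fragment such as the following influences several of these variables at once; it assumes the named package and image types are available in your configured layers:
IMAGE_INSTALL:append = " dropbear"
IMAGE_FSTYPES = "ext4 wic.gz"
IMAGE_LINGUAS = "en-us"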
Package installation is under control of the package manager (e.g. dnf/rpm, opkg, or apt/dpkg) regardless of whether or not package management is enabled for the target. At the end of the process, if package management is not enabled for the target, the package manager’s data files are deleted from the root filesystem. As part of the final stage of package installation, post installation scripts that are part of the packages are run. Any scripts that fail to run on the build host are run on the target when the target system is first booted. If you are using a read-only root filesystem, all the post installation scripts must succeed on the build host during the package installation phase since the root filesystem on the target is read-only.
The final stages of the do_rootfs task handle post processing. Post processing includes creation of a manifest file and optimizations.
The manifest file (.manifest) resides in the same directory as the root filesystem image. This file lists out, line-by-line, the installed packages. The manifest file is useful for the testimage class, for example, to determine whether or not to run specific tests. See the IMAGE_MANIFEST variable for additional information.
Optimizing processes that are run across the image include mklibs and any other post-processing commands as defined by the ROOTFS_POSTPROCESS_COMMAND variable. The mklibs process optimizes the size of the libraries.
After the root filesystem is built, processing begins on the image through the do_image task. The build system runs any pre-processing commands as defined by the IMAGE_PREPROCESS_COMMAND variable. This variable specifies a list of functions to call before the build system creates the final image output files.
The build system dynamically creates do_image_* tasks as needed, based on the image types specified in the IMAGE_FSTYPES variable. The process turns everything into an image file or a set of image files and can compress the root filesystem image to reduce the overall size of the image. The formats used for the root filesystem depend on the IMAGE_FSTYPES variable. Compression depends on whether the formats support compression.
As an example, a dynamically created task when creating a particular image type would take the following form:
do_image_type
So, if the type as specified by IMAGE_FSTYPES were ext4, the dynamically generated task would be as follows:
do_image_ext4
The final task involved in image creation is the do_image_complete task. This task completes the image by applying any image post processing as defined through the IMAGE_POSTPROCESS_COMMAND variable. The variable specifies a list of functions to call once the build system has created the final image output files.
Note
The entire image generation process is run under Pseudo. Running under Pseudo ensures that the files in the root filesystem have correct ownership.
4.3.5.6 SDK Generation
The OpenEmbedded build system uses BitBake to generate the Software Development Kit (SDK) installer scripts for both the standard SDK and the extensible SDK (eSDK):
Note
For more information on the cross-development toolchain generation, see the “Cross-Development Toolchain Generation” section. For information on advantages gained when building a cross-development toolchain using the do_populate_sdk task, see the “Building an SDK Installer” section in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
Like image generation, the SDK script process consists of several stages and depends on many variables. The do_populate_sdk and do_populate_sdk_ext tasks use these key variables to help create the list of packages to actually install. For information on the variables listed in the figure, see the “Application Development SDK” section.
The do_populate_sdk task helps create the standard SDK and handles two parts: a target part and a host part. The target part is the part built for the target hardware and includes libraries and headers. The host part is the part of the SDK that runs on the SDKMACHINE.
The do_populate_sdk_ext task helps create the extensible SDK and handles host and target parts differently than its counterpart does for the standard SDK. For the extensible SDK, the task encapsulates the build system, which includes everything needed (host and target) for the SDK.
Regardless of the type of SDK being constructed, the tasks perform some cleanup after which a cross-development environment setup script and any needed configuration files are created. The final output is the cross-development toolchain installation script (.sh file), which includes the environment setup script.
4.3.5.7 Stamp Files and the Rerunning of Tasks
For each task that completes successfully, BitBake writes a stamp file into the STAMPS_DIR directory. The beginning of the stamp file’s filename is determined by the STAMP variable, and the end of the name consists of the task’s name and current input checksum.
Note
This naming scheme assumes that BB_SIGNATURE_HANDLER is “OEBasicHash”, which is almost always the case in current OpenEmbedded.
To determine if a task needs to be rerun, BitBake checks if a stamp file with a matching input checksum exists for the task. In this case, the task’s output is assumed to exist and still be valid. Otherwise, the task is rerun.
Note
The stamp mechanism is more general than the shared state (sstate) cache mechanism described in the “Setscene Tasks and Shared State” section. BitBake avoids rerunning any task that has a valid stamp file, not just tasks that can be accelerated through the sstate cache.
However, you should realize that stamp files only serve as a marker that some work has been done and that these files do not record task output. The actual task output would usually be somewhere in TMPDIR (e.g. in some recipe’s WORKDIR). What the sstate cache mechanism adds is a way to cache task output that can then be shared between build machines.
Since STAMPS_DIR is usually a subdirectory of TMPDIR, removing TMPDIR will also remove STAMPS_DIR, which means tasks will properly be rerun to repopulate TMPDIR.
If you want some task to always be considered “out of date”, you can mark it with the nostamp varflag. If some other task depends on such a task, then that task will also always be considered out of date, which might not be what you want.
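For example, marking a hypothetical task named do_mytask as always out of date looks like this in the recipe:
do_mytask[nostamp] = "1"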
For details on how to view information about a task’s signature, see the “Viewing Task Variable Dependencies” section in the Yocto Project Development Tasks Manual.
4.3.6 Images
The images produced by the build system are compressed forms of the root filesystem and are ready to boot on a target device. You can see from the general workflow figure that BitBake output, in part, consists of images. This section takes a closer look at this output:
Note
For a list of example images that the Yocto Project provides, see the “Images” chapter in the Yocto Project Reference Manual.
The build process writes images out to the Build Directory inside the tmp/deploy/images/machine/ folder as shown in the figure. This folder contains any files expected to be loaded on the target device. The DEPLOY_DIR variable points to the deploy directory, while the DEPLOY_DIR_IMAGE variable points to the appropriate directory containing images for the current configuration.
kernel-image: A kernel binary file. The KERNEL_IMAGETYPE variable determines the naming scheme for the kernel image file. Depending on this variable, the file could begin with a variety of naming strings. The deploy/images/machine directory can contain multiple image files for the machine.
root-filesystem-image: Root filesystems for the target device (e.g. *.ext3 or *.bz2 files). The IMAGE_FSTYPES variable determines the root filesystem image type. The deploy/images/machine directory can contain multiple root filesystems for the machine.
kernel-modules: Tarballs that contain all the modules built for the kernel. Kernel module tarballs exist for legacy purposes and can be suppressed by setting the MODULE_TARBALL_DEPLOY variable to “0”. The deploy/images/machine directory can contain multiple kernel module tarballs for the machine.
bootloaders: If applicable to the target machine, bootloaders supporting the image. The deploy/images/machine directory can contain multiple bootloaders for the machine.
symlinks: The deploy/images/machine folder contains a symbolic link that points to the most recently built file for each machine. These links might be useful for external scripts that need to obtain the latest version of each file.
4.3.7 Application Development SDK
In the general workflow figure, the output labeled “Application Development SDK” represents an SDK. The SDK generation process differs depending on whether you build an extensible SDK (e.g. bitbake -c populate_sdk_ext imagename) or a standard SDK (e.g. bitbake -c populate_sdk imagename). This section takes a closer look at this output:
The specific form of this output is a set of files that includes a self-extracting SDK installer (*.sh), host and target manifest files, and files used for SDK testing. When the SDK installer file is run, it installs the SDK. The SDK consists of a cross-development toolchain, a set of libraries and headers, and an SDK environment setup script. Running this installer essentially sets up your cross-development environment. You can think of the cross-toolchain as the “host” part because it runs on the SDK machine. You can think of the libraries and headers as the “target” part because they are built for the target hardware. The environment setup script is added so that you can initialize the environment before using the tools.
Note
The Yocto Project supports several methods by which you can set up this cross-development environment. These methods include downloading pre-built SDK installers or building and installing your own SDK installer.
For background information on cross-development toolchains in the Yocto Project development environment, see the “Cross-Development Toolchain Generation” section.
For information on setting up a cross-development environment, see the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
All the output files for an SDK are written to the deploy/sdk folder inside the Build Directory as shown in the previous figure. Depending on the type of SDK, there are several variables to configure these files.
The variables associated with an extensible SDK are:
DEPLOY_DIR: Points to the deploy directory.
SDK_EXT_TYPE: Controls whether or not shared state artifacts are copied into the extensible SDK. By default, all required shared state artifacts are copied into the SDK.
SDK_INCLUDE_PKGDATA: Specifies whether or not packagedata is included in the extensible SDK for all recipes in the “world” target.
SDK_INCLUDE_TOOLCHAIN: Specifies whether or not the toolchain is included when building the extensible SDK.
ESDK_LOCALCONF_ALLOW: A list of variables allowed through from the build system configuration into the extensible SDK configuration.
ESDK_LOCALCONF_REMOVE: A list of variables not allowed through from the build system configuration into the extensible SDK configuration.
ESDK_CLASS_INHERIT_DISABLE: A list of classes to remove from the INHERIT value globally within the extensible SDK configuration.
This next list shows the variables associated with a standard SDK:
DEPLOY_DIR: Points to the deploy directory.
SDKMACHINE: Specifies the architecture of the machine on which the cross-development tools are run to create packages for the target hardware.
SDKIMAGE_FEATURES: Lists the features to include in the “target” part of the SDK.
TOOLCHAIN_HOST_TASK: Lists packages that make up the host part of the SDK (i.e. the part that runs on the SDKMACHINE). When you use bitbake -c populate_sdk imagename to create the SDK, a set of default packages apply. This variable allows you to add more packages.
TOOLCHAIN_TARGET_TASK: Lists packages that make up the target part of the SDK (i.e. the part built for the target hardware).
SDKPATHINSTALL: Defines the default SDK installation path offered by the installation script.
SDK_HOST_MANIFEST: Lists all the installed packages that make up the host part of the SDK. This variable also plays a minor role in extensible SDK development. However, it is mainly used for the standard SDK.
SDK_TARGET_MANIFEST: Lists all the installed packages that make up the target part of the SDK. This variable also plays a minor role in extensible SDK development. However, it is mainly used for the standard SDK.
4.4 Cross-Development Toolchain Generation
The Yocto Project does most of the work for you when it comes to creating The Cross-Development Toolchain. This section provides some technical background on how cross-development toolchains are created and used. For more information on toolchains, you can also see the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
In the Yocto Project development environment, cross-development toolchains are used to build images and applications that run on the target hardware. With just a few commands, the OpenEmbedded build system creates these necessary toolchains for you.
The following figure shows a high-level build environment regarding toolchain construction and use.
Most of the work occurs on the Build Host. This is the machine used to build images and generally work within the Yocto Project environment. When you run BitBake to create an image, the OpenEmbedded build system uses the host gcc compiler to bootstrap a cross-compiler named gcc-cross. The gcc-cross compiler is what BitBake uses to compile source files when creating the target image. You can think of gcc-cross simply as an automatically generated cross-compiler that is used internally within BitBake only.
Note
The extensible SDK does not use gcc-cross-canadian since this SDK ships a copy of the OpenEmbedded build system and the sysroot within it contains gcc-cross.
The chain of events that occurs when the standard toolchain is bootstrapped:
binutils-cross -> linux-libc-headers -> gcc-cross -> libgcc-initial -> glibc -> libgcc -> gcc-runtime
gcc: The compiler, GNU Compiler Collection (GCC).
binutils-cross: The binary utilities needed in order to run the gcc-cross phase of the bootstrap operation and build the headers for the C library.
linux-libc-headers: Headers needed for the cross-compiler and C library build.
libgcc-initial: An initial version of the gcc support library needed to bootstrap glibc.
libgcc: The final version of the gcc support library which can only be built once there is a C library to link against.
glibc: The GNU C Library.
gcc-cross: The final stage of the bootstrap process for the cross-compiler. This stage results in the actual cross-compiler that BitBake uses when it builds an image for a targeted device. This tool is a “native” tool (i.e. it is designed to run on the build host).
gcc-runtime: Runtime libraries resulting from the toolchain bootstrapping process. This tool produces a binary that consists of the runtime libraries needed for the targeted device.
You can use the OpenEmbedded build system to build an installer for the relocatable SDK used to develop applications. When you run the installer, it installs the toolchain, which contains the development tools (e.g., gcc-cross-canadian, binutils-cross-canadian, and other nativesdk-* tools). These are the tools, native to the SDK (i.e. native to SDK_ARCH), that you need to cross-compile and test your software. The figure shows the commands you use to easily build out this toolchain. This cross-development toolchain is built to execute on the SDKMACHINE, which might or might not be the same machine as the Build Host.
Note
If your target architecture is supported by the Yocto Project, you can take advantage of pre-built images that ship with the Yocto Project and already contain cross-development toolchain installers.
Here is the bootstrap process for the relocatable toolchain:
gcc -> binutils-crosssdk -> gcc-crosssdk-initial -> linux-libc-headers -> glibc-initial -> nativesdk-glibc -> gcc-crosssdk -> gcc-cross-canadian
gcc: The build host’s GNU Compiler Collection (GCC).
binutils-crosssdk: The bare minimum binary utilities needed in order to run the gcc-crosssdk-initial phase of the bootstrap operation.
gcc-crosssdk-initial: An early stage of the bootstrap process for creating the cross-compiler. This stage builds enough of the gcc-crosssdk and supporting pieces so that the final stage of the bootstrap process can produce the finished cross-compiler. This tool is a “native” binary that runs on the build host.
linux-libc-headers: Headers needed for the cross-compiler.
glibc-initial: An initial version of the Embedded GLIBC needed to bootstrap nativesdk-glibc.
nativesdk-glibc: The Embedded GLIBC needed to bootstrap the gcc-crosssdk.
gcc-crosssdk: The final stage of the bootstrap process for the relocatable cross-compiler. The gcc-crosssdk is a transitory compiler and never leaves the build host. Its purpose is to help in the bootstrap process to create the eventual gcc-cross-canadian compiler, which is relocatable. This tool is also a “native” package (i.e. it is designed to run on the build host).
gcc-cross-canadian: The final relocatable cross-compiler. When run on the SDKMACHINE, this tool produces executable code that runs on the target device. Only one cross-canadian compiler is produced per architecture since they can be targeted at different processor optimizations using configurations passed to the compiler through the compile commands. This circumvents the need for multiple compilers and thus reduces the size of the toolchains.
Note
For information on advantages gained when building a cross-development toolchain installer, see the “Building an SDK Installer” appendix in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
4.6 Automatically Added Runtime Dependencies
The OpenEmbedded build system automatically adds common types of runtime dependencies between packages, which means that you do not need to explicitly declare the packages using RDEPENDS. There are three automatic mechanisms (shlibdeps, pcdeps, and depchains) that handle shared libraries, package configuration (pkg-config) modules, and -dev and -dbg packages, respectively. For other types of runtime dependencies, you must manually declare the dependencies.
shlibdeps: During the do_package task of each recipe, all shared libraries installed by the recipe are located. For each shared library, the package that contains the shared library is registered as providing the shared library. More specifically, the package is registered as providing the soname of the library. The resulting shared-library-to-package mapping is saved globally in PKGDATA_DIR by the do_packagedata task.
Simultaneously, all executables and shared libraries installed by the recipe are inspected to see what shared libraries they link against. For each shared library dependency that is found, PKGDATA_DIR is queried to see if some package (likely from a different recipe) contains the shared library. If such a package is found, a runtime dependency is added from the package that depends on the shared library to the package that contains the library.
The automatically added runtime dependency also includes a version restriction. This version restriction specifies that at least the current version of the package that provides the shared library must be used, as if “package (>= version)” had been added to RDEPENDS. This forces an upgrade of the package containing the shared library when installing the package that depends on the library, if needed.
If you want to avoid a package being registered as providing a particular shared library (e.g. because the library is for internal use only), then add the library to PRIVATE_LIBS inside the package’s recipe.
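For instance, a recipe that ships a purely internal library (the library name here is hypothetical) could hide it from the mapping like this:
PRIVATE_LIBS = "libinternal.so.1"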
pcdeps: During the do_package task of each recipe, all pkg-config modules (*.pc files) installed by the recipe are located. For each module, the package that contains the module is registered as providing the module. The resulting module-to-package mapping is saved globally in PKGDATA_DIR by the do_packagedata task.
Simultaneously, all pkg-config modules installed by the recipe are inspected to see what other pkg-config modules they depend on. A module is seen as depending on another module if it contains a “Requires:” line that specifies the other module. For each module dependency, PKGDATA_DIR is queried to see if some package contains the module. If such a package is found, a runtime dependency is added from the package that depends on the module to the package that contains the module.
Note
The pcdeps mechanism most often infers dependencies between -dev packages.
depchains: If a package foo depends on a package bar, then foo-dev and foo-dbg are also made to depend on bar-dev and bar-dbg, respectively. Taking the -dev packages as an example, the bar-dev package might provide headers and shared library symlinks needed by foo-dev, which shows the need for a dependency between the packages.
The dependencies added by depchains are in the form of RRECOMMENDS.
Note
By default, foo-dev also has an RDEPENDS-style dependency on foo, because the default value of RDEPENDS:${PN}-dev (set in bitbake.conf) includes “${PN}”.
To ensure that the dependency chain is never broken, -dev and -dbg packages are always generated by default, even if the packages turn out to be empty. See the ALLOW_EMPTY variable for more information.
The do_package task depends on the do_packagedata task of each recipe in DEPENDS through use of a [deptask] declaration, which guarantees that the required shared-library / module-to-package mapping information will be available when needed as long as DEPENDS has been correctly set.
4.7 Fakeroot and Pseudo
Some tasks are easier to implement when allowed to perform certain operations that are normally reserved for the root user (e.g. do_install, do_package_write_*, do_rootfs, and do_image_*). For example, the do_install task benefits from being able to set the UID and GID of installed files to arbitrary values.
One approach to allowing tasks to perform root-only operations would be to require BitBake to run as root. However, this method is cumbersome and has security issues. The approach that is actually used is to run tasks that benefit from root privileges in a “fake” root environment. Within this environment, the task and its child processes believe that they are running as the root user, and see an internally consistent view of the filesystem. As long as generating the final output (e.g. a package or an image) does not require root privileges, the fact that some earlier steps ran in a fake root environment does not cause problems.
The capability to run tasks in a fake root environment is known as “fakeroot”, which is derived from the BitBake keyword/variable flag that requests a fake root environment for a task.
In the OpenEmbedded Build System, the program that implements fakeroot is known as Pseudo. Pseudo overrides system calls by using the environment variable LD_PRELOAD, which results in the illusion of running as root. To keep track of “fake” file ownership and permissions resulting from operations that require root permissions, Pseudo uses an SQLite 3 database. This database is stored in ${WORKDIR}/pseudo/files.db for individual recipes. Storing the database in a file as opposed to in memory gives persistence between tasks and builds, which is not accomplished using fakeroot.
Note
If you add your own task that manipulates the same files or directories as a fakeroot task, then that task also needs to run under fakeroot. Otherwise, the task cannot run root-only operations, and cannot see the fake file ownership and permissions set by the other task. You need to also add a dependency on virtual/fakeroot-native:do_populate_sysroot, giving the following:
fakeroot do_mytask () {
...
}
do_mytask[depends] += "virtual/fakeroot-native:do_populate_sysroot"
For more information, see the FAKEROOT* variables in the BitBake User Manual. You can also reference the “Why Not Fakeroot?” article for background information on Fakeroot and Pseudo.
4.8 BitBake Tasks Map
To understand how BitBake operates in the Build Directory and environment, consider the following recipes and diagram, which give a full picture of the tasks that BitBake runs to generate the final package file for a recipe.
We will have two recipes as an example:
libhello: A recipe that provides a shared library
sayhello: A recipe that uses the libhello library to do its job
Note
sayhello depends on libhello at compile time as it needs the shared library to do the dynamic linking process. It also depends on it at runtime as the shared library loader needs to find the library.
For more details about dependencies check Dependencies.
libhello sources are as follows:
LICENSE: This is the license associated with this library
Makefile: The file used by make to build the library
hellolib.c: The implementation of the library
hellolib.h: The C header of the library
sayhello sources are as follows:
LICENSE: This is the license associated with this project
Makefile: The file used by make to build the project
sayhello.c: The source file of the project
Before presenting the contents of each file, here are the steps that we need to follow to accomplish what we want in the first place, which is integrating sayhello in our root file system:
Create a Git repository for each project with the corresponding files
Create a recipe for each project
Make sure that the sayhello recipe DEPENDS on libhello
Make sure that the sayhello recipe RDEPENDS on libhello
Add sayhello to IMAGE_INSTALL to integrate it into the root file system (a local.conf sketch follows this list)
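For the last step, a conf/local.conf fragment such as the following is one sketch of how to pull sayhello into the image:
IMAGE_INSTALL:append = " sayhello"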
The contents of libhello/Makefile are:
LIB=libhello.so
all: $(LIB)
$(LIB): hellolib.o
$(CC) $< -Wl,-soname,$(LIB).1 -fPIC $(LDFLAGS) -shared -o $(LIB).1.0
%.o: %.c
$(CC) -c $<
clean:
rm -rf *.o *.so*
Note
When creating shared libraries, it is strongly recommended to follow the Linux conventions and guidelines (see this article for some background).
Note
When creating Makefile files, it is strongly recommended to use CC, LDFLAGS and CFLAGS as BitBake will set them as environment variables according to your build configuration.
The contents of libhello/hellolib.h are:
#ifndef HELLOLIB_H
#define HELLOLIB_H
void Hello();
#endif
The contents of libhello/hellolib.c are:
#include <stdio.h>
void Hello(){
puts("Hello from a Yocto demo \n");
}
The contents of sayhello/Makefile are:
EXEC=sayhello
LDFLAGS += -lhello
all: $(EXEC)
$(EXEC): sayhello.c
$(CC) $< $(LDFLAGS) $(CFLAGS) -o $(EXEC)
clean:
rm -rf $(EXEC) *.o
The contents of sayhello/sayhello.c are:
#include <hellolib.h>
int main(){
Hello();
return 0;
}
The contents of libhello_0.1.bb are:
SUMMARY = "Hello demo library"
DESCRIPTION = "Hello shared library used in Yocto demo"
# NOTE: Set the License according to the LICENSE file of your project
# and then add LIC_FILES_CHKSUM accordingly
LICENSE = "CLOSED"
# Assuming the branch is main
# Change <username> accordingly
SRC_URI = "git://github.com/<username>/libhello;branch=main;protocol=https"
S = "${WORKDIR}/git"
do_install(){
install -d ${D}${includedir}
install -d ${D}${libdir}
install hellolib.h ${D}${includedir}
oe_soinstall ${PN}.so.${PV} ${D}${libdir}
}
The contents of sayhello_0.1.bb are:
SUMMARY = "SayHello demo"
DESCRIPTION = "SayHello project used in Yocto demo"
# NOTE: Set the License according to the LICENSE file of your project
# and then add LIC_FILES_CHKSUM accordingly
LICENSE = "CLOSED"
# Assuming the branch is main
# Change <username> accordingly
SRC_URI = "git://github.com/<username>/sayhello;branch=main;protocol=https"
DEPENDS += "libhello"
RDEPENDS:${PN} += "libhello"
S = "${WORKDIR}/git"
do_install(){
install -d ${D}/usr/bin
install -m 0700 sayhello ${D}/usr/bin
}
After placing the recipes in a custom layer, we can run bitbake sayhello to build the recipe.
The following diagram shows the sequences of tasks that BitBake executes to accomplish that.
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by Creative Commons.
To report any inaccuracies or problems with this (or any other Yocto Project) manual, or to send additions or changes, please send email/patches to the Yocto Project documentation mailing list at docs@lists.yoctoproject.org or log into the Libera Chat #yocto channel.
Yocto Project and OpenEmbedded Contributor Guide
The Yocto Project and OpenEmbedded are open-source, community-based projects, so contributions are very welcome; that is how the code evolves and how everyone can effect change. Contributions take different forms: if you have a fix for an issue you’ve run into, a patch is the most appropriate way to contribute it. If you run into an issue but don’t have a solution, opening a defect in Bugzilla or asking questions on the mailing lists might be more appropriate. This guide intends to point you in the right direction.
1 Identify the component
The Yocto Project and OpenEmbedded ecosystem is built of layers, so the first step is to identify the component where the issue likely lies. For example, if you have a hardware issue, it is likely related to the BSP you are using and the best place to seek advice would be from the BSP provider or layer. If the issue is a build/configuration one and a distro is in use, the distro maintainers would likely be the first place to ask questions. If the issue is a generic one and/or in the core classes or metadata, the core layer or BitBake might be the appropriate component.
Each metadata layer being used should contain a README file that explains where to report issues, where to send changes, and how to contact the maintainers.
If the issue is in the core metadata layer (OpenEmbedded-Core) or in BitBake, issues can be reported in the Yocto Project Bugzilla. The yocto mailing list is a general “catch-all” location where questions can be sent if you can’t work out where something should go.
Poky is a commonly used “combination” repository where multiple components have been combined (bitbake, openembedded-core, meta-yocto and yocto-docs). Patches should be submitted against the appropriate individual component rather than Poky itself, as detailed in the appropriate README file.
2 Reporting a Defect Against the Yocto Project and OpenEmbedded
You can use the Yocto Project instance of Bugzilla to submit a defect (bug) against BitBake, OpenEmbedded-Core, against any other Yocto Project component or for tool issues. For additional information on this implementation of Bugzilla see the “Yocto Project Bugzilla” section in the Yocto Project Reference Manual. For more detail on any of the following steps, see the Yocto Project Bugzilla wiki page.
Use the following general steps to submit a bug:
Open the Yocto Project implementation of Bugzilla.
Click “File a Bug” to enter a new bug.
Choose the appropriate “Classification”, “Product”, and “Component” for which the bug was found. Bugs for the Yocto Project fall into one of several classifications, which in turn break down into several products and components. For example, for a bug against the meta-intel layer, you would choose “Build System, Metadata & Runtime”, “BSPs”, and “bsps-meta-intel”, respectively.
Choose the “Version” of the Yocto Project for which you found the bug (e.g. 5.0.999).
Determine and select the “Severity” of the bug. The severity indicates how the bug impacted your work.
Choose the “Hardware” that the bug impacts.
Choose the “Architecture” that the bug impacts.
Choose a “Documentation change” item for the bug. Fixing a bug might or might not affect the Yocto Project documentation. If you are unsure of the impact to the documentation, select “Don’t Know”.
Provide a brief “Summary” of the bug. Try to limit your summary to just a line or two and be sure to capture the essence of the bug.
Provide a detailed “Description” of the bug. You should provide as much detail as you can about the context, behavior, output, and so forth that surrounds the bug. You can even attach supporting files for output from logs by using the “Add an attachment” button.
Click the “Submit Bug” button to submit the bug. A new Bugzilla number is assigned to the bug and the defect is logged in the bug tracking system.
Once you file a bug, the bug is processed by the Yocto Project Bug Triage Team and further details concerning the bug are assigned (e.g. priority and owner). You are the “Submitter” of the bug and any further categorization, progress, or comments on the bug result in Bugzilla sending you an automated email concerning the particular change or progress to the bug.
There are no guarantees about if or when a bug might be worked on since an open-source project has no dedicated engineering resources. However, the project does have a good track record of resolving common issues over the medium and long term. We do encourage people to file bugs so issues are at least known about. It helps other users when they find somebody having the same issue as they do, and an issue that is unknown is much less likely to ever be fixed!
3 Recipe Style Guide
3.1 Recipe Naming Conventions
In general, most recipes should follow the naming convention recipes-category/recipename/recipename_version.bb. Recipes for related projects may share the same recipe directory. recipename and category may contain hyphens, but hyphens are not allowed in version.
If the recipe is tracking a Git revision that does not correspond to a released version of the software, version may be git (e.g. recipename_git.bb) and the recipe would set PV.
3.2 Version Policy
Our versions follow the form <epoch>:<version>-<revision>, or in BitBake variable terms ${PE}:${PV}-${PR}. We generally follow the Debian version policy, which defines these terms.
In most cases the version PV will be set automatically from the recipe file name. It is recommended to use released versions of software as these are revisions that upstream are expecting people to use.
Recipe versions should always compare and sort correctly so that upgrades work as expected. With conventional versions such as 1.4 upgrading to 1.5 this happens naturally, but some versions don’t sort. For example, 1.5 Release Candidate 2 could be written as 1.5rc2, but this sorts after 1.5, so upgrades from feeds won’t happen correctly.
Instead the tilde (~) operator can be used, which sorts before the empty string, so 1.5~rc2 comes before 1.5. There is a historical syntax which may be found where PV is set as a combination of the prior version + the pre-release version, for example PV=1.4+1.5rc2. This is a valid syntax but the tilde form is preferred.
For version comparisons, the opkg-compare-versions program from opkg-utils can be useful when attempting to determine how two version numbers compare to each other. Our definitive version comparison algorithm is the one within BitBake, which aims to match those of the package managers and Debian policy closely.
When a recipe references a git revision that does not correspond to a released version of software (e.g. is not a tagged version), the PV variable should include the Git revision using the following to make the version clear:
PV = "<version>+git${SRCPV}"
In this case, <version> should be the most recently released version of the software from the current source revision (git describe can be useful for determining this). Whilst not recommended for published layers, this format is also useful when using AUTOREV to set the recipe to increment source control revisions automatically, which can be useful during local development.
3.3 Version Number Changes
The PR variable is used to indicate different revisions of a recipe that reference the same upstream source version. It can be used to force a new version of a recipe to be installed onto a device from a package feed. These once had to be set manually but in most cases these can now be set and incremented automatically by a PR Server connected with a package feed.
When PV increases, any existing PR value can and should be removed.
If PV changes in such a way that it does not increase with respect to the previous value, you need to increase PE to ensure package managers will upgrade it correctly. If unset you should set PE to “1” since the default of empty is easily confused with “0” depending on the package manager. PE can only have an integer value.
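As a hedged illustration, suppose a hypothetical recipe moves from a date-based version such as 20230101 to a semantic version such as 1.0; the new PV compares lower than the old one, so the recipe would set:
PE = "1"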
3.4 Recipe formatting
3.4.1 Variable Formatting
Variable assignment should have a space around each side of the operator, e.g. FOO = "bar", not FOO="bar".
Double quotes should be used on the right-hand side of the assignment, e.g. FOO = "bar", not FOO = 'bar'.
Spaces should be used for indenting variables, with 4 spaces per tab.
Long variables should be split over multiple lines when possible by using the continuation character (\).
When splitting a long variable over multiple lines, all continuation lines should be indented (with spaces) to align with the start of the quote on the first line:
FOO = "this line is \ long \ "
Instead of:
FOO = "this line is \ long \ "
3.4.2 Python Function formatting
Spaces must be used for indenting Python code, with 4 spaces per tab
3.4.3 Shell Function formatting
The formatting of shell functions should be consistent within layers. Some use tabs, some use spaces.
3.5 Recipe metadata
3.5.1 Required Variables
The following variables should be included in all recipes:
SUMMARY: a one line description of the upstream project
DESCRIPTION: an extended description of the upstream project, possibly with multiple lines. If no reasonable description can be written, this may be omitted as it defaults to SUMMARY.
HOMEPAGE: the URL to the upstream project's homepage.
BUGTRACKER: the URL of the upstream project's bug tracking website, if applicable.
3.5.2 Recipe Ordering
When defining variables in recipes and classes, they should follow this general order when possible:
inherit ...
Build class specific variables such as EXTRA_QMAKEVARS_POST and EXTRA_OECONF
Tasks such as do_configure
There are some cases where ordering is important and these cases would override this default order. Examples include:
PACKAGE_ARCH needing to be set before inherit packagegroup
Tasks should be ordered based on the order in which they generally execute. For commonly used tasks this would be: do_fetch, do_unpack, do_patch, do_configure, do_compile, do_install, do_populate_sysroot, do_package.
Custom tasks should be sorted similarly.
Package specific variables are typically grouped together, e.g.:
RDEPENDS:${PN} = "foo"
RDEPENDS:${PN}-libs = "bar"
RRECOMMENDS:${PN} = "one"
RRECOMMENDS:${PN}-libs = "two"
3.5.3 Recipe License Fields
Recipes need to define both the LICENSE and LIC_FILES_CHKSUM variables:
LICENSE: This variable specifies the license for the software. If you do not know the license under which the software you are building is distributed, you should go to the source code and look for that information. Typical files containing this information include COPYING, LICENSE, and README files. You could also find the information near the top of a source file. For example, given a piece of software licensed under the GNU General Public License version 2, you would set LICENSE as follows:
LICENSE = "GPL-2.0-only"
The licenses you specify within LICENSE can have any name as long as you do not use spaces, since spaces are used as separators between license names. For standard licenses, use the names of the files in meta/files/common-licenses/ or the SPDXLICENSEMAP flag names defined in meta/conf/licenses.conf.
LIC_FILES_CHKSUM: The OpenEmbedded build system uses this variable to make sure the license text has not changed. If it has, the build produces an error and it affords you the chance to figure it out and correct the problem.
You need to specify all applicable licensing files for the software. At the end of the configuration step, the build process will compare the checksums of the files to be sure the text has not changed. Any differences result in an error with the message containing the current checksum. For more explanation and examples of how to set the LIC_FILES_CHKSUM variable, see the “Tracking License Changes” section.
To determine the correct checksum string, you can list the appropriate files in the LIC_FILES_CHKSUM variable with incorrect md5 strings, attempt to build the software, and then note the resulting error messages that will report the correct md5 strings. See the “Fetching Code” section for additional information.
Here is an example that assumes the software has a COPYING file:
LIC_FILES_CHKSUM = "file://COPYING;md5=xxx"
When you try to build the software, the build system will produce an error and give you the correct string that you can substitute into the recipe file for a subsequent build.
3.5.3.1 License Updates
When you change the LICENSE or LIC_FILES_CHKSUM in the recipe
you need to briefly explain the reason for the change via a License-Update:
tag. Often it’s quite trivial, such as:
License-Update: copyright years refreshed
Less often, the actual licensing terms themselves will have changed. If so, do try to link to the upstream change or discussion making/justifying that decision.
3.5.4 Tips and Guidelines for Writing Recipes
Use BBCLASSEXTEND instead of creating separate recipes such as -native and -nativesdk ones, whenever possible. This avoids having to maintain multiple recipe files at the same time.
Recipes should have tasks which are idempotent, i.e. executing a given task multiple times shouldn't change the end result. The build environment is built upon this assumption and breaking it can cause obscure build failures.
For idempotence when modifying files in tasks, it is usually best to:
copy a file X to X.orig (only if it doesn't exist already),
then copy X.orig back to X,
and, finally, modify X.
This ensures that, if rerun, the task always has the same end result and the original file is preserved for reuse. It also guards against an interrupted build corrupting the file.
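A minimal sketch of this pattern, assuming a hypothetical recipe that needs to edit a config.in file during configuration (the file name and the sed expression are invented for the example):
do_configure:append() {
    # keep a pristine copy the first time the task runs
    [ -e ${S}/config.in.orig ] || cp ${S}/config.in ${S}/config.in.orig
    # always start from the pristine copy so reruns give the same result
    cp ${S}/config.in.orig ${S}/config.in
    # now apply the modification
    sed -i -e 's/ENABLE_FOO=0/ENABLE_FOO=1/' ${S}/config.in
}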
3.6 Patch Upstream Status
In order to keep track of patches applied by recipes and ultimately reduce the number of patches that need maintaining, the OpenEmbedded build system requires information about the upstream status of each patch.
In its description, each patch should provide detailed information about the bug that it addresses, such as the URL in a bug tracking system and links to relevant mailing list archives.
Then, you should also add an Upstream-Status:
tag containing one of the
following status strings:
Pending
No determination has been made yet, or patch has not yet been submitted to upstream.
Keep in mind that every patch submitted upstream reduces the maintenance burden on OpenEmbedded and the Yocto Project in the long run, so this patch status should only be used in exceptional cases if there are genuine obstacles to submitting a patch upstream; the reason for that should be included in the patch.
Submitted [where]
Submitted to upstream, waiting for approval. Optionally include where it was submitted, such as the author, mailing list, etc.
Backport [version]
Accepted upstream and included in the next release, or backported from newer upstream version, because we are at a fixed version. Include upstream version info (e.g. commit ID or next expected version).
Denied
Not accepted by upstream, include reason in patch.
Inactive-Upstream [lastcommit: when (and/or) lastrelease: when]
The upstream is no longer available. This typically means a defunct project where no activity has happened for a long time — measured in years. To make that judgement, it is recommended to look at not only when the last release happened, but also when the last commit happened, and whether newly made bug reports and merge requests since that time receive no reaction. It is also recommended to add to the patch description any relevant links where the inactivity can be clearly seen.
Inappropriate [reason]
The patch is not appropriate for upstream, include a brief reason on the same line enclosed with []. In the past, there were several different reasons not to submit patches upstream, but we have to consider that every non-upstreamed patch means a maintenance burden for recipe maintainers. Currently, the only reasons to mark patches as inappropriate for upstream submission are:
oe specific: the issue is specific to how OpenEmbedded performs builds or sets things up at runtime, and can be resolved only with a patch that is not however relevant or appropriate for general upstream submission.
upstream ticket <link>: the issue is not specific to OpenEmbedded and should be fixed upstream, but the patch in its current form is not suitable for merging upstream, and the author lacks sufficient expertise to develop a proper patch. Instead the issue is handled via a bug report (include link).
Of course, if another person later takes care of submitting this patch upstream,
the status should be changed to Submitted [where]
, and an additional
Signed-off-by:
line should be added to the patch by the person claiming
responsibility for upstreaming.
3.6.1 Examples
Here’s an example of a patch that has been submitted upstream:
rpm: Adjusted the foo setting in bar
[RPM Ticket #65] -- http://rpm5.org/cvs/tktview?tn=65,5
The foo setting in bar was decreased from X to X-50% in order to
ensure we don't exhaust all system memory with foobar threads.
Upstream-Status: Submitted [rpm5-devel@rpm5.org]
Signed-off-by: Joe Developer <joe.developer@example.com>
A future update can change the value to Backport
or Denied
as
appropriate.
Another example of a patch that is specific to OpenEmbedded:
Do not treat warnings as errors
There are additional warnings found with musl which are
treated as errors and fails the build, we have more combinations
than upstream supports to handle.
Upstream-Status: Inappropriate [oe specific]
Here’s a patch that has been backported from an upstream commit:
include missing sys/file.h for LOCK_EX
Upstream-Status: Backport [https://github.com/systemd/systemd/commit/ac8db36cbc26694ee94beecc8dca208ec4b5fd45]
3.7 CVE patches
In order to have better control of vulnerabilities, patches that fix CVEs must contain a CVE: tag. This tag lists all CVEs fixed by the patch. If more than one CVE is fixed, separate them using spaces.
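For example, a patch fixing two vulnerabilities at once would carry a tag such as the following (the CVE identifiers here are placeholders, not references to real advisories):
CVE: CVE-2023-11111 CVE-2023-22222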
3.7.1 CVE Examples
This should be the header of a patch that fixes CVE-2015-8370 in GRUB2:
grub2: Fix CVE-2015-8370
[No upstream tracking] -- https://bugzilla.redhat.com/show_bug.cgi?id=1286966
Back to 28; Grub2 Authentication
Two functions suffer from integer underflow fault; the grub_username_get() and grub_password_get() located in
grub-core/normal/auth.c and lib/crypto.c respectively. This can be exploited to obtain a Grub rescue shell.
Upstream-Status: Backport [http://git.savannah.gnu.org/cgit/grub.git/commit/?id=451d80e52d851432e109771bb8febafca7a5f1f2]
CVE: CVE-2015-8370
Signed-off-by: Joe Developer <joe.developer@example.com>
4 Contributing Changes to a Component
Contributions to the Yocto Project and OpenEmbedded are very welcome. Because the system is extremely configurable and flexible, we recognize that developers will want to extend, configure or optimize it for their specific uses.
4.1 Contributing through mailing lists — Why not use web-based workflows?
Both Yocto Project and OpenEmbedded have many key components that are maintained by patches being submitted on mailing lists. We appreciate this approach does look a little old fashioned when other workflows are available through web technology such as GitHub, GitLab and others. Since we are often asked this question, we’ve decided to document the reasons for using mailing lists.
One significant factor is that we value peer review. When a change is proposed to many of the core pieces of the project, it helps to have many eyes of review go over them. Whilst there is ultimately one maintainer who needs to make the final call on accepting or rejecting a patch, the review is made by many eyes and the exact people reviewing it are likely unknown to the maintainer. It is often the surprise reviewer that catches the most interesting issues!
This is in contrast to the “GitHub” style workflow where either just a maintainer makes that review, or review is specifically requested from nominated people. We believe there is significant value added to the codebase by this peer review and that moving away from mailing lists would be to the detriment of our code.
We also need to acknowledge that many of our developers are used to this mailing list workflow and have worked with it for years, with tools and processes built around it. Changing away from this would result in a loss of key people from the project, which would again be to its detriment.
The projects are acutely aware that potential new contributors find the
mailing list approach off-putting and would prefer a web-based GUI.
Since we don’t believe that can work for us, the project is aiming to ensure
patchwork is available to help track
patch status and also looking at how tooling can provide more feedback to users
about patch status. We are looking at improving tools such as patchtest
to
test user contributions before they hit the mailing lists and also at better
documenting how to use such workflows since we recognise that whilst this was
common knowledge a decade ago, it might not be as familiar now.
4.2 Preparing Changes for Submission
4.2.1 Set up Git
The first thing to do is to install Git packages. Here is an example on Debian and Ubuntu:
sudo apt install git-core git-email
Then, you need to set a name and e-mail address that Git will use to identify your commits:
git config --global user.name "Ada Lovelace"
git config --global user.email "ada.lovelace@gmail.com"
4.2.2 Clone the Git repository for the component to modify
After identifying the component to modify as described in the “Identify the component” section, clone the corresponding Git repository. Here is an example for OpenEmbedded-Core:
git clone https://git.openembedded.org/openembedded-core
cd openembedded-core
4.2.3 Create a new branch
Then, create a new branch in your local Git repository
for your changes, starting from the reference branch in the upstream
repository (often called master
):
$ git checkout <ref-branch>
$ git checkout -b my-changes
If you have completely unrelated sets of changes to submit, you should even create one branch for each set.
4.2.4 Implement and commit changes
In each branch, you should group your changes into small, controlled and isolated ones. Keeping changes small and isolated aids review, makes merging/rebasing easier and keeps the change history clean should anyone need to refer to it in future.
For this purpose, you should create one Git commit per change, corresponding to each of the patches you will eventually submit. See further guidance in the Linux kernel documentation if needed.
For example, when you intend to add multiple new recipes, each recipe should be added in a separate commit. For upgrades to existing recipes, the previous version should usually be deleted as part of the same commit to add the upgraded version.
Stage Your Changes: Stage your changes by using the git add command on each file you modified. If you want to stage all the files you modified, you can even use the git add -A command.
Commit Your Changes: This is when you can create separate commits. For each commit to create, use the git commit -s command with the files or directories you want to include in the commit:
$ git commit -s file1 file2 dir1 dir2 ...
To include all staged files:
$ git commit -sa
The -s option of git commit adds a "Signed-off-by:" line to your commit message. There is the same requirement for contributing to the Linux kernel. Adding such a line signifies that you, the submitter, have agreed to the Developer's Certificate of Origin 1.1 as follows:
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
(b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or
(c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.
(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
Provide a single-line summary of the change and, if more explanation is needed, provide more detail in the body of the commit. This summary is typically viewable in the “shortlist” of changes. Thus, providing something short and descriptive that gives the reader a summary of the change is useful when viewing a list of many commits. You should prefix this short description with the recipe name (if changing a recipe), or else with the short form path to the file being changed.
Note
To find a suitable prefix for the commit summary, a good idea is to look for prefixes used in previous commits touching the same files or directories:
git log --oneline <paths>
For the body of the commit message, provide detailed information that describes what you changed, why you made the change, and the approach you used. It might also be helpful if you mention how you tested the change. Provide as much detail as you can in the body of the commit message.
Note
If the single line summary is enough to describe a simple change, the body of the commit message can be left empty.
If the change addresses a specific bug or issue that is associated with a bug-tracking ID, include a reference to that ID in your detailed description. For example, the Yocto Project uses a specific convention for bug references — any commit that addresses a specific bug should use the following form for the detailed description. Be sure to use the actual bug-tracking ID from Bugzilla for bug-id:
Fixes [YOCTO #bug-id]

detailed description of change
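Putting these conventions together, a commit message for a hypothetical recipe fix (the recipe name, bug number and details are invented for the example) might read:
example-recipe: fix completion script install path

Fixes [YOCTO #12345]

The completion script was installed under /etc instead of
${datadir}/bash-completion, so it was never picked up at runtime.
Install it to the standard location instead.

Signed-off-by: Ada Lovelace <ada.lovelace@gmail.com>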
Crediting contributors: By using the git commit --amend command, you can add some tags to the commit description to credit other contributors to the change:
Reported-by: name and email of a person reporting a bug that your commit is trying to fix. This is a good practice to encourage people to go on reporting bugs and let them know that their reports are taken into account.
Suggested-by: name and email of a person to credit for the idea of making the change.
Tested-by, Reviewed-by: name and email for people having tested your changes or reviewed their code. These fields are usually added by the maintainer accepting a patch, or by yourself if you submitted your patches to early reviewers, or are submitting an unmodified patch again as part of a new iteration of your patch series.
CC: name and email of people you want to send a copy of your changes to. This field will be used by git send-email.
See more guidance about using such tags in the Linux kernel documentation.
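As a small illustration, the tail of a commit message carrying such tags (all names and addresses are fictional) could read:
Reported-by: Joe Developer <joe.developer@example.com>
Suggested-by: William Shakespeare <bill@yoctoproject.org>
CC: Ada Lovelace <ada.lovelace@gmail.com>
Signed-off-by: Your Name <your_email@example.com>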
4.2.5 Test your changes
For each contribution you make, you should test your changes as well. For this, the Yocto Project offers several types of tests. Those tests cover different areas; which of them are feasible depends on your changes. For example run:
For changes that affect the build environment:
bitbake-selftest: for changes within BitBake
oe-selftest: to test combinations of BitBake runs
oe-build-perf-test: to test the performance of common build scenarios
For changes in a recipe:
ptest: run package specific tests, if they exist
testimage: build an image, boot it and run testcases on it
If applicable, ensure the native and nativesdk variants also build
For changes relating to the SDK:
testsdk: to build, install and run tests against an SDK
testsdk_ext: to build, install and run tests against an extended SDK
Note that this list just gives suggestions and is not exhaustive. More details can be found here: Yocto Project Tests — Types of Testing Overview.
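As an indicative sketch of how some of these can be invoked from an initialized build environment (the selected test module and image are arbitrary examples, and running testimage additionally requires enabling the testimage class, e.g. IMAGE_CLASSES += "testimage" in local.conf):
$ source oe-init-build-env
$ bitbake-selftest
$ oe-selftest -r bbtests
$ bitbake core-image-minimal -c testimage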
4.3 Creating Patches
Here is the general procedure on how to create patches to be sent through email:
Describe the Changes in your Branch: If you have more than one commit in your branch, it’s recommended to provide a cover letter describing the series of patches you are about to send.
For this purpose, a good solution is to store the cover letter contents in the branch itself:
git branch --edit-description
This will open a text editor to fill in the description for your changes. This description can be updated when necessary and will be used by Git to create the cover letter together with the patches.
It is recommended to start this description with a title line, which will serve as the subject line for the cover letter.
Generate Patches for your Branch: The git format-patch command will generate patch files for each of the commits in your branch. You need to pass the reference branch your branch starts from.
If your branch didn't need a description in the previous step:
$ git format-patch <ref-branch>
If you filled a description for your branch, you will want to generate a cover letter too:
$ git format-patch --cover-letter --cover-from-description=auto <ref-branch>
After the command is run, the current directory contains numbered .patch files for the commits in your branch. If you have a cover letter, it will be in the 0000-cover-letter.patch file.
Note
The --cover-from-description=auto option makes git format-patch use the first paragraph of the branch description as the cover letter title. Another possibility, which is easier to remember, is to pass only the --cover-letter option, but you will have to edit the subject line manually every time you generate the patches.
See the git format-patch manual page for details.
Review each of the Patch Files: This final review of the patches before sending them often allows you to view your changes from a different perspective and discover defects such as typos, spacing issues, or lines or even files that you didn't intend to modify. This review should include the cover letter patch too.
If necessary, rework your commits as described in “Taking Patch Review into Account”.
4.4 Validating Patches with Patchtest
patchtest
is available in openembedded-core
as a tool for making
sure that your patches are well-formatted and contain important info for
maintenance purposes, such as Signed-off-by
and Upstream-Status
tags. Note that no functional testing of the changes will be performed by patchtest
.
Currently, it only supports testing patches for openembedded-core
branches.
To set it up, perform the following:
pip install -r meta/lib/patchtest/requirements.txt
source oe-init-build-env
bitbake-layers add-layer ../meta-selftest
Once these steps are complete and you have generated your patch files,
you can run patchtest
like so:
patchtest --patch <patch_name>
Alternatively, if you want patchtest
to iterate over and test
multiple patches stored in a directory, you can use:
patchtest --directory <directory_name>
By default, patchtest
uses its own modules’ file paths to determine what
repository and test suite to check patches against. If you wish to test
patches against a repository other than openembedded-core
and/or use
a different set of tests, you can use the --repodir
and --testdir
flags:
patchtest --patch <patch_name> --repodir <path/to/repo> --testdir <path/to/testdir>
Finally, note that patchtest
is designed to test patches in a standalone
way, so if your patches are meant to apply on top of changes made by
previous patches in a series, it is possible that patchtest
will report
false failures regarding the “merge on head” test.
Using patchtest
in this manner provides a final check for the overall
quality of your changes before they are submitted for review by the
maintainers.
4.5 Sending the Patches via Email
4.5.1 Using Git to Send Patches
To submit patches through email, it is very important that you send them
without any whitespace or HTML formatting that either you or your mailer
introduces. The maintainer that receives your patches needs to be able
to save and apply them directly from your emails, using the git am
command.
Using the git send-email
command is the only error-proof way of sending
your patches using email since there is no risk of compromising whitespace
in the body of the message, which can occur when you use your own mail
client. It will also properly include your patches as inline attachments,
which is not easy to do with standard e-mail clients without breaking lines.
If you used your regular e-mail client and shared your patches as regular
attachments, reviewers wouldn’t be able to quote specific sections of your
changes and make comments about them.
4.5.2 Setting up Git to Send Email
The git send-email
command can send email by using a local or remote
Mail Transport Agent (MTA) such as msmtp
, sendmail
, or
through a direct SMTP configuration in your Git ~/.gitconfig
file.
Here are the settings for letting git send-email
send e-mail through your
regular SMTP server, using a Google Mail account as an example:
git config --global sendemail.smtpserver smtp.gmail.com
git config --global sendemail.smtpserverport 587
git config --global sendemail.smtpencryption tls
git config --global sendemail.smtpuser ada.lovelace@gmail.com
git config --global sendemail.smtppass XXXXXXXX
These settings will appear in the .gitconfig
file in your home directory.
If you can use neither a local MTA nor SMTP, make sure you use an email client that does not touch the message (turning spaces into tabs, wrapping lines, etc.). A good mail client to do so is Pine (or Alpine) or Mutt. For more information about suitable clients, see Email clients info for Linux in the Linux kernel sources.
If you use such clients, just include the patch in the body of your email.
4.5.3 Finding a Suitable Mailing List
You should send patches to the appropriate mailing list so that they can be reviewed by the right contributors and merged by the appropriate maintainer. The specific mailing list you need to use depends on the location of the code you are changing.
If people have concerns with any of the patches, they will usually voice their concern over the mailing list. If patches do not receive any negative reviews, the maintainer of the affected layer typically takes them, tests them, and then based on successful testing, merges them.
In general, each component (e.g. layer) should have a README
file
that indicates where to send the changes and which process to follow.
The “poky” repository, which is the Yocto Project’s reference build environment, is a hybrid repository that contains several individual pieces (e.g. BitBake, Metadata, documentation, and so forth) built using the combo-layer tool. The upstream location used for submitting changes varies by component:
Core Metadata: Send your patches to the openembedded-core mailing list. For example, a change to anything under the meta or scripts directories should be sent to this mailing list.
BitBake: For changes to BitBake (i.e. anything under the bitbake directory), send your patches to the bitbake-devel mailing list.
meta-poky and meta-yocto-bsp trees: These trees contain Metadata. Use the poky mailing list.
Documentation: For changes to the Yocto Project documentation, use the docs mailing list.
For changes to other layers and tools hosted in the Yocto Project source repositories (i.e. git.yoctoproject.org), use the yocto-patches general mailing list.
For changes to other layers hosted in the OpenEmbedded source
repositories (i.e. git.openembedded.org), use
the openembedded-devel
mailing list, unless specified otherwise in the layer’s README
file.
If you intend to submit a new recipe that neither fits into the core Metadata, nor into meta-openembedded, you should look for a suitable layer in https://layers.openembedded.org. If similar recipes can be expected, you may consider Creating Your Own Layer.
If in doubt, please ask on the yocto general mailing list or on the openembedded-devel mailing list.
4.5.4 Subscribing to the Mailing List
After identifying the right mailing list to use, you will have to subscribe to it if you haven’t done it yet.
If you attempt to send patches to a list you haven’t subscribed to, your email will be returned as undelivered.
However, if you don’t want to receive all the messages sent to a mailing list, you can set your subscription to “no email”. You will still be a subscriber able to send messages, but you won’t receive any e-mail. If people reply to your message, their e-mail clients will default to including your email address in the conversation anyway.
Anyway, you’ll also be able to access the new messages on mailing list archives, either through a web browser, or for the lists archived on https://lore.kernel.org, through an individual newsgroup feed or a git repository.
4.5.5 Sending Patches via Email
At this stage, you are ready to send your patches via email. Here’s the
typical usage of git send-email
:
git send-email --to <mailing-list-address> *.patch
Then, review each subject line and list of recipients carefully, and allow the command to send each message.
You will see that git send-email
will automatically copy the people listed
in any commit tags such as Signed-off-by
or Reported-by
.
In case you are sending patches for meta-openembedded or any layer other than openembedded-core, please add the appropriate prefix so that it is clear which layer the patch is intended to be applied to:
git format-patch --subject-prefix="meta-oe][PATCH" ...
Note
It is actually possible to send patches without generating them
first. However, make sure you have reviewed your changes carefully
because git send-email
will just show you the title lines of
each patch.
Here’s a command you can use if you just have one patch in your branch:
git send-email --to <mailing-list-address> -1
If you have multiple patches and a cover letter, you can send patches for all the commits between the reference branch and the tip of your branch:
git send-email --cover-letter --cover-from-description=auto --to <mailing-list-address> -M <ref-branch>
See the git send-email manual page for details.
4.5.6 Troubleshooting Email Issues
4.5.6.1 Fixing your From identity
We have a frequent issue with contributors whose patches are received through
a From
field which doesn’t match the Signed-off-by
information. Here is
a typical example for people sending from a domain name with DMARC (https://en.wikipedia.org/wiki/DMARC):
From: "Linus Torvalds via lists.openembedded.org <linus.torvalds=kernel.org@lists.openembedded.org>"
This From
field is used by git am
to recreate commits with the right
author name. The following will ensure that your e-mails have an additional
From
field at the beginning of the Email body, and therefore that
maintainers accepting your patches don’t have to fix commit author information
manually:
git config --global sendemail.from "linus.torvalds@kernel.org"
The sendemail.from
should match your user.email
setting,
which appears in the Signed-off-by
line of your commits.
4.5.7 Streamlining git send-email usage
If you want to save time and not be forced to remember the right options to use
with git send-email
, you can use Git configuration settings.
To set the right mailing list address for a given repository:
git config --local sendemail.to openembedded-devel@lists.openembedded.org
If the mailing list requires a subject prefix for the layer (this only works when the repository only contains one layer):
git config --local format.subjectprefix "meta-something][PATCH"
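With such per-repository settings in place, a later submission round can be as short as the following sketch, since git send-email picks up the configured recipient and git format-patch the configured subject prefix:
$ git format-patch <ref-branch>
$ git send-email *.patch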
4.6 Using Scripts to Push a Change Upstream and Request a Pull
For larger patch series it is preferable to send a pull request which not
only includes the patch but also a pointer to a branch that can be pulled
from. This involves making a local branch for your changes, pushing this
branch to an accessible repository and then using the create-pull-request
and send-pull-request
scripts from openembedded-core to create and send a
patch series with a link to the branch for review.
Follow this procedure to push a change to an upstream “contrib” Git repository once the steps in “Preparing Changes for Submission” have been followed:
Note
You can find general Git information on how to push a change upstream in the Git Community Book.
Request Push Access to an “Upstream” Contrib Repository: Send an email to helpdesk@yoctoproject.org:
Attach your SSH public key, which is usually named id_rsa.pub. If you don’t have one, generate it by running ssh-keygen -t rsa -b 4096 -C "your_email@example.com".
List the repositories you’re planning to contribute to.
Include your preferred branch prefix for -contrib repositories.
Push Your Commits to the “Contrib” Upstream: Push your changes to that repository:
$ git push upstream_remote_repo local_branch_name
For example, suppose you have permissions to push into the upstream meta-intel-contrib repository and you are working in a local branch named your_name/README. The following command pushes your local commits to the meta-intel-contrib upstream repository and puts the commit in a branch named your_name/README:
$ git push meta-intel-contrib your_name/README
Determine Who to Notify: Determine the maintainer or the mailing list that you need to notify for the change.
Before submitting any change, you need to be sure who the maintainer is or which mailing list you need to notify. Use either of these methods to find out:
Maintenance File: Examine the maintainers.inc file, which is located in the Source Directory at meta/conf/distro/include, to see who is responsible for code.
Search by File: Using Git, you can enter the following command to bring up a short list of all commits against a specific file:
git shortlog -- filename
Just provide the name of the file for which you are interested. The information returned is not ordered by history but does include a list of everyone who has committed grouped by name. From the list, you can see who is responsible for the bulk of the changes against the file.
Find the Mailing List to Use: See the “Finding a Suitable Mailing List” section above.
Make a Pull Request: Notify the maintainer or the mailing list that you have pushed a change by making a pull request.
The Yocto Project provides two scripts that conveniently let you generate and send pull requests to the Yocto Project. These scripts are create-pull-request and send-pull-request. You can find these scripts in the scripts directory within the Source Directory (e.g. poky/scripts).
Using these scripts correctly formats the requests without introducing any whitespace or HTML formatting. The maintainer that receives your patches either directly or through the mailing list needs to be able to save and apply them directly from your emails. Using these scripts is the preferred method for sending patches.
First, create the pull request. For example, the following command runs the script, specifies the upstream repository in the contrib directory into which you pushed the change, and provides a subject line in the created patch files:
$ poky/scripts/create-pull-request -u meta-intel-contrib -s "Updated Manual Section Reference in README"
Running this script forms *.patch files in a folder named pull-PID in the current directory. One of the patch files is a cover letter.
Before running the send-pull-request script, you must edit the cover letter patch to insert information about your change. After editing the cover letter, send the pull request. For example, the following command runs the script and specifies the patch directory and email address. In this example, the email address is a mailing list:
$ poky/scripts/send-pull-request -p ~/meta-intel/pull-10565 -t meta-intel@lists.yoctoproject.org
You need to follow the prompts as the script is interactive.
Note
For help on using these scripts, simply provide the -h argument as follows:
$ poky/scripts/create-pull-request -h
$ poky/scripts/send-pull-request -h
4.7 Submitting Changes to Stable Release Branches
The process for proposing changes to a Yocto Project stable branch differs from the steps described above. Changes to a stable branch must address identified bugs or CVEs and should be made carefully in order to avoid the risk of introducing new bugs or breaking backwards compatibility. Typically bug fixes must already be accepted into the master branch before they can be backported to a stable branch unless the bug in question does not affect the master branch or the fix on the master branch is unsuitable for backporting.
The list of stable branches along with the status and maintainer for each branch can be obtained from the Releases wiki page.
Note
Changes will not typically be accepted for branches which are marked as End-Of-Life (EOL).
With this in mind, the steps to submit a change for a stable branch are as follows:
Identify the bug or CVE to be fixed: This information should be collected so that it can be included in your submission.
See Checking for Vulnerabilities for details about CVE tracking.
Check if the fix is already present in the master branch: This will result in the most straightforward path into the stable branch for the fix.
If the fix is present in the master branch — submit a backport request by email: You should send an email to the relevant stable branch maintainer and the mailing list with details of the bug or CVE to be fixed, the commit hash on the master branch that fixes the issue and the stable branches which you would like this fix to be backported to.
If the fix is not present in the master branch — submit the fix to the master branch first: This will ensure that the fix passes through the project’s usual patch review and test processes before being accepted. It will also ensure that bugs are not left unresolved in the master branch itself. Once the fix is accepted in the master branch a backport request can be submitted as above.
If the fix is unsuitable for the master branch — submit a patch directly for the stable branch: This method should be considered as a last resort. It is typically necessary when the master branch is using a newer version of the software which includes an upstream fix for the issue or when the issue has been fixed on the master branch in a way that introduces backwards incompatible changes. In this case follow the steps in “Preparing Changes for Submission” and in the following sections but modify the subject header of your patch email to include the name of the stable branch which you are targeting. This can be done using the --subject-prefix argument to git format-patch, for example to submit a patch to the “nanbield” branch use:
git format-patch --subject-prefix='nanbield][PATCH' ...
4.8 Taking Patch Review into Account
You may get feedback on your submitted patches from other community members or from the automated patchtest service. If issues are identified in your patches then it is usually necessary to address these before the patches are accepted into the project. In this case you should rework your commits according to the feedback and submit an updated version to the relevant mailing list.
In any case, never address reported issues by adding new commits on the tip of your branch. Always come up with a new series of commits without the reported issues.
Note
It is a good idea to send a copy to the reviewers who provided feedback
to the previous version of the patch. You can make sure this happens
by adding a CC
tag to the commit description:
CC: William Shakespeare <bill@yoctoproject.org>
A single patch can be amended using git commit --amend
, and multiple
patches can be easily reworked and reordered through an interactive Git rebase:
git rebase -i <ref-branch>
See this tutorial for practical guidance about using Git interactive rebasing.
You should also modify the [PATCH]
tag in the email subject line when
sending the revised patch to mark the new iteration as [PATCH v2]
,
[PATCH v3]
, etc as appropriate. This can be done by passing the -v
argument to git format-patch
with a version number:
git format-patch -v2 <ref-branch>
Lastly please ensure that you also test your revised changes. In particular
please don’t just edit the patch file written out by git format-patch
and
resend it.
4.9 Tracking the Status of Patches
The Yocto Project uses a Patchwork instance
to track the status of patches submitted to the various mailing lists and to
support automated patch testing. Each submitted patch is checked for common
mistakes and deviations from the expected patch format and submitters are
notified by patchtest
if such mistakes are found. This process helps to
reduce the burden of patch review on maintainers.
Note
This system is imperfect and changes can sometimes get lost in the flow. Asking about the status of a patch or change is reasonable if the change has been idle for a while with no feedback.
If your patches have not had any feedback in a few days, they may have already
been merged. You can run git pull on the relevant branch to check this. Note that many if
not most layer maintainers do not send out acknowledgement emails when they
accept patches. Alternatively, if there is no response or merge after a few days
the patch may have been missed or the appropriate reviewers may not currently be
around. It is then perfectly fine to reply to it yourself with a reminder asking
for feedback.
Note
Patch reviews for feature and recipe upgrade patches are likely to be delayed during a feature freeze because these types of patches aren’t merged at that time — you may have to wait until after the freeze is lifted.
Maintainers also commonly use -next
branches to test submissions prior to
merging patches. Thus, you can get an idea of the status of a patch based on
whether the patch has been merged into one of these branches. The commonly
used testing branches for OpenEmbedded-Core are as follows:
openembedded-core “master-next” branch: This branch is part of the openembedded-core repository and contains proposed changes to the core metadata.
poky “master-next” branch: This branch is part of the poky repository and combines proposed changes to BitBake, the core metadata and the poky distro.
Similarly, stable branches maintained by the project may have corresponding
-next
branches which collect proposed changes. For example,
scarthgap-next
and nanbield-next
branches in both the “openembedded-core” and “poky” repositories.
Other layers may have similar testing branches but there is no formal requirement or standard for these so please check the documentation for the layers you are contributing to.
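One way to check whether your change has already landed in such a testing branch is to fetch the upstream repository and search its -next branch for your commit subject, for instance (the branch name and search string are only examples):
$ git fetch origin
$ git log --oneline origin/master-next | grep -i "example-recipe"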
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by Creative Commons.
To report any inaccuracies or problems with this (or any other Yocto Project)
manual, or to send additions or changes, please send email/patches to the Yocto
Project documentation mailing list at docs@lists.yoctoproject.org
or
log into the Libera Chat #yocto
channel.
Yocto Project Reference Manual
1 System Requirements
Welcome to the Yocto Project Reference Manual. This manual provides reference information for the current release of the Yocto Project, and is most effectively used after you have an understanding of the basics of the Yocto Project. The manual is neither meant to be read as a starting point to the Yocto Project, nor read from start to finish. Rather, use this manual to find variable definitions, class descriptions, and so forth as needed during the course of using the Yocto Project.
For introductory information on the Yocto Project, see the Yocto Project Website and the “The Yocto Project Development Environment” chapter in the Yocto Project Overview and Concepts Manual.
If you want to use the Yocto Project to quickly build an image without having to understand concepts, work through the Yocto Project Quick Build document. You can find “how-to” information in the Yocto Project Development Tasks Manual. You can find Yocto Project overview and conceptual information in the Yocto Project Overview and Concepts Manual.
Note
For more information about the Yocto Project Documentation set, see the Links and Related Documentation section.
1.1 Minimum Free Disk Space
To build an image such as core-image-sato
for the qemux86-64
machine,
you need a system with at least 90 Gbytes of free disk space.
However, much more disk space will be necessary to build more complex images,
to run multiple builds and to cache build artifacts, improving build efficiency.
If you have a shortage of disk space, see the “Conserving Disk Space” section of the Development Tasks Manual.
1.2 Minimum System RAM
You will manage to build an image such as core-image-sato
for the
qemux86-64
machine with as little as 8 Gbytes of RAM on an old
system with 4 CPU cores, but your builds will be much faster on a system with
as much RAM and as many CPU cores as possible.
1.3 Supported Linux Distributions
Currently, the 5.0.999 release (“Scarthgap”) of the Yocto Project is supported on the following distributions:
Ubuntu 20.04 (LTS)
Ubuntu 22.04 (LTS)
Fedora 38
CentOS Stream 8
Debian GNU/Linux 11 (Bullseye)
Debian GNU/Linux 12 (Bookworm)
OpenSUSE Leap 15.4
AlmaLinux 8
AlmaLinux 9
Rocky 9
The following distribution versions are still tested, even though the organizations publishing them no longer make updates publicly available:
Ubuntu 18.04 (LTS)
Ubuntu 23.04
Note that the Yocto Project doesn’t have access to private updates that some of these versions may have. Therefore, our testing has limited value if you have access to such updates.
Finally, here are the distribution versions which were previously tested on former revisions of “Scarthgap”, but no longer are:
This list is currently empty
Note
While the Yocto Project Team attempts to ensure all Yocto Project releases are one hundred percent compatible with each officially supported Linux distribution, you may still encounter problems that happen only with a specific distribution.
Yocto Project releases are tested against the stable Linux distributions in the above list. The Yocto Project should work on other distributions but validation is not performed against them.
In particular, the Yocto Project does not support and currently has no plans to support rolling-releases or development distributions due to their constantly changing nature. We welcome patches and bug reports, but keep in mind that our priority is on the supported platforms listed above.
If your Linux distribution is not in the above list, we recommend getting the buildtools or buildtools-extended tarballs containing the host tools required by your Yocto Project release, typically by running scripts/install-buildtools as explained in the “Required Git, tar, Python, make and gcc Versions” section.
You may use Windows Subsystem For Linux v2 to set up a build host using Windows 10 or later, or Windows Server 2019 or later, but validation is not performed against build hosts using WSL 2.
See the Setting Up to Use Windows Subsystem For Linux (WSL 2) section in the Yocto Project Development Tasks Manual for more information.
If you encounter problems, please go to Yocto Project Bugzilla and submit a bug. We are interested in hearing about your experience. For information on how to submit a bug, see the Yocto Project Bugzilla wiki page and the “Reporting a Defect Against the Yocto Project and OpenEmbedded” section in the Yocto Project and OpenEmbedded Contributor Guide.
1.4 Required Packages for the Build Host
The list of packages you need on the host development system can be large when covering all build scenarios using the Yocto Project. This section describes required packages according to Linux distribution and function.
1.4.1 Ubuntu and Debian
Here are the packages needed to build an image on a headless system with a supported Ubuntu or Debian Linux distribution:
$ sudo apt install gawk wget git diffstat unzip texinfo gcc build-essential chrpath socat cpio python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 libegl1-mesa libsdl1.2-dev python3-subunit mesa-common-dev zstd liblz4-tool file locales libacl1
$ sudo locale-gen en_US.UTF-8
Note
If your build system has the oss4-dev package installed, you might experience QEMU build failures due to the package installing its own custom /usr/include/linux/soundcard.h on the Debian system. If you run into this situation, try either of these solutions:
$ sudo apt build-dep qemu
$ sudo apt remove oss4-dev
Here are the packages needed to build Project documentation manuals:
$ sudo apt install git make inkscape texlive-latex-extra
$ sudo apt install sphinx python3-saneyaml python3-sphinx-rtd-theme
1.4.2 Fedora Packages
Here are the packages needed to build an image on a headless system with a supported Fedora Linux distribution:
$ sudo dnf install gawk make wget tar bzip2 gzip python3 unzip perl patch diffutils diffstat git cpp gcc gcc-c++ glibc-devel texinfo chrpath ccache perl-Data-Dumper perl-Text-ParseWords perl-Thread-Queue perl-bignum socat python3-pexpect findutils which file cpio python python3-pip xz python3-GitPython python3-jinja2 SDL-devel rpcgen mesa-libGL-devel perl-FindBin perl-File-Compare perl-File-Copy perl-locale zstd lz4 hostname glibc-langpack-en libacl
Here are the packages needed to build Project documentation manuals:
$ sudo dnf install git make python3-pip which inkscape texlive-fncychap
$ sudo pip3 install sphinx sphinx_rtd_theme pyyaml
1.4.3 openSUSE Packages
Here are the packages needed to build an image on a headless system with a supported openSUSE distribution:
$ sudo zypper install python gcc gcc-c++ git chrpath make wget python-xml diffstat makeinfo python-curses patch socat python3 python3-curses tar python3-pip python3-pexpect xz which python3-Jinja2 Mesa-libEGL1 libSDL-devel rpcgen Mesa-dri-devel zstd lz4 bzip2 gzip hostname libacl1
$ sudo pip3 install GitPython
Here are the packages needed to build Project documentation manuals:
$ sudo zypper install git make python3-pip which inkscape texlive-fncychap
$ sudo pip3 install sphinx sphinx_rtd_theme pyyaml
1.4.4 AlmaLinux Packages
Here are the packages needed to build an image on a headless system with a supported AlmaLinux distribution:
$ sudo dnf install -y epel-release
$ sudo yum install dnf-plugins-core
$ sudo dnf config-manager --set-enabled crb
$ sudo dnf makecache
$ sudo dnf install gawk make wget tar bzip2 gzip python3 unzip perl patch diffutils diffstat git cpp gcc gcc-c++ glibc-devel texinfo chrpath ccache socat perl-Data-Dumper perl-Text-ParseWords perl-Thread-Queue python3-pip python3-GitPython python3-jinja2 python3-pexpect xz which SDL-devel rpcgen mesa-libGL-devel zstd lz4 cpio glibc-langpack-en libacl
Note
Extra Packages for Enterprise Linux (i.e. epel-release) is a collection of packages from Fedora built on RHEL/CentOS for easy installation of packages not included in enterprise Linux by default. You need to install these packages separately.
The PowerTools/CRB repo provides additional packages such as rpcgen and texinfo.
The makecache command consumes additional Metadata from epel-release.
Here are the packages needed to build Project documentation manuals:
$ sudo dnf install git make python3-pip which inkscape texlive-fncychap
$ sudo pip3 install sphinx sphinx_rtd_theme pyyaml
1.5 Required Git, tar, Python, make and gcc Versions
In order to use the build system, your host development system must meet the following version requirements for Git, tar, and Python:
Git 1.8.3.1 or greater
tar 1.28 or greater
Python 3.8.0 or greater
GNU make 4.0 or greater
If your host development system does not meet all these requirements, you can resolve this by installing a buildtools tarball that contains these tools. You can either download a pre-built tarball or use BitBake to build one.
In addition, your host development system must meet the following version requirement for gcc:
gcc 8.0 or greater
If your host development system does not meet this requirement, you can
resolve this by installing a buildtools-extended tarball that
contains additional tools, the equivalent of the Debian/Ubuntu build-essential
package.
For systems with a broken make version (e.g. make 4.2.1 without patches) but where the rest of the host tools are usable, you can use the buildtools-make tarball instead.
In the sections that follow, three different methods will be described for installing the buildtools, buildtools-extended or buildtools-make toolset.
1.5.1 Installing a Pre-Built buildtools
Tarball with install-buildtools
script
The install-buildtools
script is the easiest of the three methods by
which you can get these tools. It downloads a pre-built buildtools
installer and automatically installs the tools for you:
Execute the install-buildtools script. Here is an example:
$ cd poky
$ scripts/install-buildtools \
  --without-extended-buildtools \
  --base-url https://downloads.yoctoproject.org/releases/yocto \
  --release yocto-5.0.999 \
  --installer-version 5.0.999
During execution, the buildtools tarball will be downloaded, the checksum of the download will be verified, the installer will be run for you, and some basic checks will be run to make sure the installation is functional.
To avoid the need of sudo privileges, the install-buildtools script will by default tell the installer to install in:
/path/to/poky/buildtools
If your host development system needs the additional tools provided in the buildtools-extended tarball, you can instead execute the install-buildtools script with the default parameters:
$ cd poky
$ scripts/install-buildtools
Alternatively if your host development system has a broken make version such that you only need a known good version of make, you can use the --make-only option:
$ cd poky
$ scripts/install-buildtools --make-only
Source the tools environment setup script by using a command like the following:
$ source /path/to/poky/buildtools/environment-setup-x86_64-pokysdk-linux
After you have sourced the setup script, the tools are added to PATH and any other environment variables required to run the tools are initialized. The results are working versions of Git, tar, Python and chrpath. And in the case of the buildtools-extended tarball, additional working versions of tools including gcc, make and the other tools included in packagegroup-core-buildessential.
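One quick way to confirm the environment is active is to check that the tools now resolve inside the buildtools installation, for example:
$ which python3 tar git
The reported paths should point below the chosen buildtools directory (e.g. /path/to/poky/buildtools) rather than the system locations.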
1.5.2 Downloading a Pre-Built buildtools
Tarball
If you would prefer not to use the install-buildtools
script, you can instead
download and run a pre-built buildtools installer yourself with the following
steps:
Go to https://downloads.yoctoproject.org/releases/yocto/yocto-5.0.999/buildtools/, locate and download the .sh file corresponding to your host architecture and to buildtools, buildtools-extended or buildtools-make.
Execute the installation script. Here is an example for the traditional installer:
$ sh ~/Downloads/x86_64-buildtools-nativesdk-standalone-5.0.999.sh
Here is an example for the extended installer:
$ sh ~/Downloads/x86_64-buildtools-extended-nativesdk-standalone-5.0.999.sh
An example for the make-only installer:
$ sh ~/Downloads/x86_64-buildtools-make-nativesdk-standalone-5.0.999.sh
During execution, a prompt appears that allows you to choose the installation directory. For example, you could choose the following:
/home/your-username/buildtools
As instructed by the installer script, you will have to source the tools environment setup script:
$ source /home/your_username/buildtools/environment-setup-x86_64-pokysdk-linux
After you have sourced the setup script, the tools are added to PATH and any other environment variables required to run the tools are initialized. The results are working versions of Git, tar, Python and chrpath. And in the case of the buildtools-extended tarball, additional working versions of tools including gcc, make and the other tools included in packagegroup-core-buildessential.
1.5.3 Building Your Own buildtools
Tarball
Building and running your own buildtools installer applies only when you
have a build host that can already run BitBake. In this case, you use
that machine to build the .sh
file and then take steps to transfer
and run it on a machine that does not meet the minimal Git, tar, and
Python (or gcc) requirements.
Here are the steps to take to build and run your own buildtools installer:
On the machine that is able to run BitBake, be sure you have set up your build environment with the setup script (oe-init-build-env).
Run the BitBake command to build the tarball:
$ bitbake buildtools-tarball
or to build the extended tarball:
$ bitbake buildtools-extended-tarball
or to build the make-only tarball:
$ bitbake buildtools-make-tarball
Note
The SDKMACHINE variable in your local.conf file determines whether you build tools for a 32-bit or 64-bit system.
Once the build completes, you can find the .sh file that installs the tools in the tmp/deploy/sdk subdirectory of the Build Directory. The installer file has the string “buildtools” or “buildtools-extended” in the name.
Transfer the .sh file from the build host to the machine that does not meet the Git, tar, or Python (or gcc) requirements.
On this machine, run the .sh file to install the tools. Here is an example for the traditional installer:
$ sh ~/Downloads/x86_64-buildtools-nativesdk-standalone-5.0.999.sh
For the extended installer:
$ sh ~/Downloads/x86_64-buildtools-extended-nativesdk-standalone-5.0.999.sh
And for the make-only installer:
$ sh ~/Downloads/x86_64-buildtools-make-nativesdk-standalone-5.0.999.sh
During execution, a prompt appears that allows you to choose the installation directory. For example, you could choose the following:
/home/your_username/buildtools
Source the tools environment setup script by using a command like the following:
$ source /home/your_username/buildtools/environment-setup-x86_64-poky-linux
After you have sourced the setup script, the tools are added to PATH and any other environment variables required to run the tools are initialized. The results are working versions of Git, tar, Python and chrpath. And in the case of the buildtools-extended tarball, additional working versions of tools including gcc, make and the other tools included in packagegroup-core-buildessential.
2 Yocto Project Terms
Here is a list of terms and definitions users new to the Yocto Project development environment might find helpful. While some of these terms are universal, the list includes them just in case:
- Append Files
Files that append build information to a recipe file. Append files are known as BitBake append files and
.bbappend
files. The OpenEmbedded build system expects every append file to have a corresponding recipe (.bb
) file. Furthermore, the append file and corresponding recipe file must use the same root filename. The filenames can differ only in the file type suffix used (e.g.formfactor_0.0.bb
andformfactor_0.0.bbappend
).Information in append files extends or overrides the information in the similarly-named recipe file. For an example of an append file in use, see the “Appending Other Layers Metadata With Your Layer” section in the Yocto Project Development Tasks Manual.
When you name an append file, you can use the “
%
” wildcard character to allow for matching recipe names. For example, suppose you have an append file named as follows:busybox_1.21.%.bbappend
That append file would match any
busybox_1.21.x.bb
version of the recipe. So, the append file would match any of the following recipe names:busybox_1.21.1.bb busybox_1.21.2.bb busybox_1.21.3.bb busybox_1.21.10.bb busybox_1.21.25.bb
Note
The use of the “%” character is limited in that it only works directly in front of the .bbappend portion of the append file’s name. You cannot use the wildcard character in any other location of the name.
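To give an idea of what such an append file might contain, here is a minimal sketch of a hypothetical busybox_1.21.%.bbappend that adds a local patch. The patch name and the files directory are illustrative assumptions, not part of the actual BusyBox recipe:
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://example-local-fix.patch"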
- BitBake
The task executor and scheduler used by the OpenEmbedded build system to build images. For more information on BitBake, see the BitBake User Manual.
- Board Support Package (BSP)
A group of drivers, definitions, and other components that provide support for a specific hardware configuration. For more information on BSPs, see the Yocto Project Board Support Package Developer’s Guide.
- Build Directory
This term refers to the area used by the OpenEmbedded build system for builds. The area is created when you source the setup environment script that is found in the Source Directory (i.e. oe-init-build-env). The TOPDIR variable points to the Build Directory.
You have a lot of flexibility when creating the Build Directory. Here are some examples that show how to create the directory. The examples assume your Source Directory is named poky:
Create the Build Directory inside your Source Directory and let the name of the Build Directory default to build:
$ cd poky
$ source oe-init-build-env
Create the Build Directory inside your home directory and specifically name it test-builds:
$ source poky/oe-init-build-env test-builds
Provide a directory path and specifically name the Build Directory. Any intermediate folders in the pathname must exist. This next example creates a Build Directory named YP-5.0.999 within the existing directory mybuilds:
$ source poky/oe-init-build-env mybuilds/YP-5.0.999
Note
By default, the Build Directory contains TMPDIR, which is a temporary directory the build system uses for its work. TMPDIR cannot be under NFS. Thus, by default, the Build Directory cannot be under NFS. However, if you need the Build Directory to be under NFS, you can set this up by setting TMPDIR in your local.conf file to use a local drive. Doing so effectively separates TMPDIR from TOPDIR, which is the Build Directory.
- Build Host
The system used to build images in a Yocto Project Development environment. The build system is sometimes referred to as the development host.
- buildtools
Build tools in binary form, providing required versions of development tools (such as Git, GCC, Python and make), to run the OpenEmbedded build system on a development host that lacks such minimum versions.
See the “Required Git, tar, Python, make and gcc Versions” paragraph in the Reference Manual for details about downloading or building an archive of such tools.
- buildtools-extended
A set of buildtools binaries extended with additional development tools, such as a required version of the GCC compiler to run the OpenEmbedded build system.
See the “Required Git, tar, Python, make and gcc Versions” paragraph in the Reference Manual for details about downloading or building an archive of such tools.
- buildtools-make
A variant of buildtools, just providing the required version of make to run the OpenEmbedded build system.
- Classes
Files that provide for logic encapsulation and inheritance so that commonly used patterns can be defined once and then easily used in multiple recipes. For reference information on the Yocto Project classes, see the “Classes” chapter. Class files end with the .bbclass filename extension.
- Configuration File
Files that hold global definitions of variables, user-defined variables, and hardware configuration information. These files tell the OpenEmbedded build system what to build and what to put into the image to support a particular platform.
Configuration files end with a .conf filename extension. The conf/local.conf configuration file in the Build Directory contains user-defined variables that affect every build. The meta-poky/conf/distro/poky.conf configuration file defines Yocto “distro” configuration variables used only when building with this policy. Machine configuration files, which are located throughout the Source Directory, define variables for specific hardware and are only used when building for that target (e.g. the machine/beaglebone.conf configuration file defines variables for the Texas Instruments ARM Cortex-A8 development board).
- Container Layer
A flexible definition that typically refers to a single Git checkout which contains multiple (and typically related) sub-layers which can be included independently in your project’s bblayers.conf file.
In some cases, such as with OpenEmbedded’s meta-openembedded layer, the top level meta-openembedded/ directory is not itself an actual layer, so you would never explicitly include it in a bblayers.conf file; rather, you would include any number of its layer subdirectories, such as meta-oe, meta-python and so on.
On the other hand, some container layers (such as meta-security) have a top-level directory that is itself an actual layer, as well as a variety of sub-layers, both of which could be included in your bblayers.conf file.
In either case, the phrase “container layer” is simply used to describe a directory structure which contains multiple valid OpenEmbedded layers.
- Cross-Development Toolchain
In general, a cross-development toolchain is a collection of software development tools and utilities that run on one architecture and allow you to develop software for a different, or targeted, architecture. These toolchains contain cross-compilers, linkers, and debuggers that are specific to the target architecture.
The Yocto Project supports two different cross-development toolchains:
A toolchain only used by and within BitBake when building an image for a target architecture.
A relocatable toolchain used outside of BitBake by developers when developing applications that will run on a targeted device.
Creation of these toolchains is simple and automated. For information on toolchain concepts as they apply to the Yocto Project, see the “Cross-Development Toolchain Generation” section in the Yocto Project Overview and Concepts Manual. You can also find more information on using the relocatable toolchain in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
- Extensible Software Development Kit (eSDK)
A custom SDK for application developers. This eSDK allows developers to incorporate their library and programming changes back into the image to make their code available to other application developers.
For information on the eSDK, see the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
- Image
An image is an artifact of the BitBake build process given a collection of recipes and related Metadata. Images are the binary output that run on specific hardware or QEMU and are used for specific use-cases. For a list of the supported image types that the Yocto Project provides, see the “Images” chapter.
- Initramfs
An Initial RAM Filesystem (Initramfs) is an optionally compressed cpio archive which is extracted by the Linux kernel into RAM in a special tmpfs instance, used as the initial root filesystem.
This is a replacement for the legacy init RAM disk (“initrd”) technique, booting on an emulated block device in RAM, but being less efficient because of the overhead of going through a filesystem and having to duplicate accessed file contents in the file cache in RAM, as for any block device.
Note
As far as bootloaders are concerned, Initramfs and “initrd” images are still copied to RAM in the same way. That’s why most bootloaders refer to Initramfs images as “initrd” or “init RAM disk”.
This kind of mechanism is typically used for two reasons:
For booting the same kernel binary on multiple systems requiring different device drivers. The Initramfs image is then customized for each type of system, to include the specific kernel modules necessary to access the final root filesystem. This technique is used on all GNU / Linux distributions for desktops and servers.
For booting faster. As the root filesystem is extracted into RAM, accessing the first user-space applications is very fast, compared to having to initialize a block device, to access multiple blocks from it, and to go through a filesystem having its own overhead. For example, this makes it possible to display a splash screen very early, and to later take care of mounting the final root filesystem and loading less time-critical kernel drivers.
This cpio archive can either be loaded to RAM by the bootloader, or be included in the kernel binary.
For information on creating and using an Initramfs, see the “Building an Initial RAM Filesystem (Initramfs) Image” section in the Yocto Project Development Tasks Manual.
- Layer
A collection of related recipes. Layers allow you to consolidate related metadata to customize your build. Layers also isolate information used when building for multiple architectures. Layers are hierarchical in their ability to override previous specifications. You can include any number of available layers from the Yocto Project and customize the build by adding your layers after them. You can search the Layer Index for layers used within Yocto Project.
For introductory information on layers, see the “The Yocto Project Layer Model” section in the Yocto Project Overview and Concepts Manual. For more detailed information on layers, see the “Understanding and Creating Layers” section in the Yocto Project Development Tasks Manual. For a discussion specifically on BSP Layers, see the “BSP Layers” section in the Yocto Project Board Support Packages (BSP) Developer’s Guide.
- LTS
This term means “Long Term Support”, and in the context of the Yocto Project, it corresponds to selected stable releases for which bug and security fixes are provided for at least four years. See the Long Term Support Releases section for details.
- Metadata
A key element of the Yocto Project is the Metadata that is used to construct a Linux distribution and is contained in the files that the OpenEmbedded Build System parses when building an image. In general, Metadata includes recipes, configuration files, and other information that refers to the build instructions themselves, as well as the data used to control what things get built and the effects of the build. Metadata also includes commands and data used to indicate what versions of software are used, from where they are obtained, and changes or additions to the software itself (patches or auxiliary files) that are used to fix bugs or customize the software for use in a particular situation. OpenEmbedded-Core is an important set of validated metadata.
In the context of the kernel (“kernel Metadata”), the term refers to the kernel config fragments and features contained in the yocto-kernel-cache Git repository.
- Mixin
A Mixin layer is a layer which can be created by the community to add a specific feature or support a new version of some package for an LTS release. See the Long Term Support Releases section for details.
- OpenEmbedded-Core (OE-Core)
OE-Core is metadata comprised of foundational recipes, classes, and associated files that are meant to be common among many different OpenEmbedded-derived systems, including the Yocto Project. OE-Core is a curated subset of an original repository developed by the OpenEmbedded community that has been pared down into a smaller, core set of continuously validated recipes. The result is a tightly controlled and quality-assured core set of recipes.
You can see the Metadata in the meta directory of the Yocto Project Source Repositories.
- OpenEmbedded Build System
The build system specific to the Yocto Project. The OpenEmbedded build system is based on another project known as “Poky”, which uses BitBake as the task executor. Throughout the Yocto Project documentation set, the OpenEmbedded build system is sometimes referred to simply as “the build system”. If other build systems, such as a host or target build system are referenced, the documentation clearly states the difference.
Note
For some historical information about Poky, see the Poky term.
- Package
In the context of the Yocto Project, this term refers to a recipe’s packaged output produced by BitBake (i.e. a “baked recipe”). A package is generally the compiled binaries produced from the recipe’s sources. You “bake” something by running it through BitBake.
It is worth noting that the term “package” can, in general, have subtle meanings. For example, the packages referred to in the “Required Packages for the Build Host” section are compiled binaries that, when installed, add functionality to your Linux distribution.
Another point worth noting is that historically within the Yocto Project, recipes were referred to as packages — thus, the existence of several BitBake variables that are seemingly mis-named (e.g. PR, PV, and PE).
- Package Groups
Arbitrary groups of software Recipes. You use package groups to hold recipes that, when built, usually accomplish a single task. For example, a package group could contain the recipes for a company’s proprietary or value-add software. Or, the package group could contain the recipes that enable graphics. A package group is really just another recipe. Because package group files are recipes, they end with the .bb filename extension.
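For illustration, a hypothetical package group recipe is little more than a list of runtime dependencies; the package names below are examples only:
SUMMARY = "Hypothetical package group for networking tools"

inherit packagegroup

RDEPENDS:${PN} = "dropbear rsync"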
- Poky
Poky, which is pronounced Pock-ee, is a reference embedded distribution and a reference test configuration. Poky provides the following:
A base-level functional distro used to illustrate how to customize a distribution.
A means by which to test the Yocto Project components (i.e. Poky is used to validate the Yocto Project).
A vehicle through which you can download the Yocto Project.
Poky is not a product level distro. Rather, it is a good starting point for customization.
Note
Poky began as an open-source project initially developed by OpenedHand. OpenedHand developed Poky from the existing OpenEmbedded build system to create a commercially supportable build system for embedded Linux. After Intel Corporation acquired OpenedHand, the poky project became the basis for the Yocto Project’s build system.
- Recipe
A set of instructions for building packages. A recipe describes where you get source code, which patches to apply, how to configure the source, how to compile it and so on. Recipes also describe dependencies for libraries or for other recipes. Recipes represent the logical unit of execution, the software to build, the images to build, and use the .bb file extension.
- Reference Kit
A working example of a system, which includes a BSP as well as a build host and other components, that can work on specific hardware.
- SBOM
This term means Software Bill of Materials. When you distribute software, it offers a description of all the components you used, their corresponding licenses, their dependencies, the changes that were applied and the known vulnerabilities that were fixed.
This can be used by the recipients of the software to assess their exposure to license compliance and security vulnerability issues.
See the Software Supply Chain article on Wikipedia for more details.
The OpenEmbedded Build System can generate such documentation for your project, in SPDX format, based on all the metadata it used to build the software images. See the “Creating a Software Bill of Materials” section of the Development Tasks manual.
- Source Directory
This term refers to the directory structure created as a result of creating a local copy of the poky Git repository git://git.yoctoproject.org/poky or expanding a released poky tarball.
Note
Creating a local copy of the poky Git repository is the recommended method for setting up your Source Directory.
Sometimes you might hear the term “poky directory” used to refer to this directory structure.
Note
The OpenEmbedded build system does not support file or directory names that contain spaces. Be sure that the Source Directory you use does not contain these types of names.
The Source Directory contains BitBake, Documentation, Metadata and other files that all support the Yocto Project. Consequently, you must have the Source Directory in place on your development system in order to do any development using the Yocto Project.
When you create a local copy of the Git repository, you can name the repository anything you like. Throughout much of the documentation, “poky” is used as the name of the top-level folder of the local copy of the poky Git repository. So, for example, cloning the poky Git repository results in a local Git repository whose top-level folder is also named “poky”.
While it is not recommended that you use tarball extraction to set up the Source Directory, if you do, the top-level directory name of the Source Directory is derived from the Yocto Project release tarball. For example, downloading and unpacking poky tarballs from https://downloads.yoctoproject.org/releases/yocto/yocto-5.0.999/ results in a Source Directory whose root folder is named poky.
It is important to understand the differences between the Source Directory created by unpacking a released tarball as compared to cloning git://git.yoctoproject.org/poky. When you unpack a tarball, you have an exact copy of the files based on the time of release — a fixed release point. Any changes you make to your local files in the Source Directory are on top of the release and will remain local only. On the other hand, when you clone the poky Git repository, you have an active development repository with access to the upstream repository’s branches and tags. In this case, any local changes you make to the local Source Directory can be later applied to active development branches of the upstream poky Git repository.
For more information on concepts related to Git repositories, branches, and tags, see the “Repositories, Tags, and Branches” section in the Yocto Project Overview and Concepts Manual.
- SPDX
This term means Software Package Data Exchange, and is used as an open standard for providing a Software Bill of Materials (SBOM). This standard is developed through a Linux Foundation project and is used by the OpenEmbedded Build System to provide an SBOM associated to each software image.
For details, see Wikipedia’s SPDX page and the “Creating a Software Bill of Materials” section of the Development Tasks manual.
- Sysroot
When cross-compiling, the target file system may be differently laid out and contain different things compared to the host system. The concept of a sysroot is a directory which looks like the target filesystem and can be used to cross-compile against.
In the context of cross-compiling toolchains, a sysroot typically contains C library and kernel headers, plus the compiled binaries for the C library. A multilib toolchain can contain multiple variants of the C library binaries, each compiled for a target instruction set (such as armv5, armv7 and armv8), and possibly optimized for a specific CPU core.
In the more specific context of the OpenEmbedded build system and of the Yocto Project, each recipe has two sysroots:
A target sysroot contains all the target libraries and headers needed to build the recipe.
A native sysroot contains all the host files and executables needed to build the recipe.
See the SYSROOT_* variables controlling how sysroots are created and stored.
- Task
A per-recipe unit of execution for BitBake (e.g. do_compile, do_fetch, do_patch, and so forth). One of the major benefits of the build system is that, since each recipe will typically spawn the execution of numerous tasks, it is entirely possible that many tasks can execute in parallel, either tasks from separate recipes or independent tasks within the same recipe, potentially up to the parallelism of your build system.
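If you want to see which tasks a given recipe or image defines, you can ask BitBake to list them; core-image-minimal is used here only as an example target:
$ bitbake core-image-minimal -c listtasks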
- Toaster
A web interface to the Yocto Project’s OpenEmbedded Build System. The interface enables you to configure and run your builds. Information about builds is collected and stored in a database. For information on Toaster, see the Toaster User Manual.
- Upstream
A reference to source code or repositories that are not local to the development system but located in a remote area that is controlled by the maintainer of the source code. For example, in order for a developer to work on a particular piece of code, they need to first get a copy of it from an “upstream” source.
3 Yocto Project Releases and the Stable Release Process
The Yocto Project release process is predictable and consists of both major and minor (point) releases. This brief chapter provides information on how releases are named, their life cycle, and their stability.
3.1 Major and Minor Release Cadence
The Yocto Project delivers major releases (e.g. 5.0.999) using a six month cadence roughly timed each April and October of the year. Here are examples of some major YP releases with their codenames also shown. See the “Major Release Codenames” section for information on codenames used with major releases.
4.1 (“Langdale”)
4.0 (“Kirkstone”)
3.4 (“Honister”)
While the cadence is never perfect, this timescale facilitates regular releases that have strong QA cycles while not overwhelming users with too many new releases. The cadence is predictable and avoids many major holidays in various geographies.
The Yocto Project delivers minor (point) releases on an unscheduled basis, usually driven by the accumulation of enough significant fixes or enhancements to the associated major release. Some example past point releases are:
4.1.3
4.0.8
3.4.4
The point release indicates a point in the major release branch where a full QA cycle and release process validates the content of the new branch.
Note
Realize that there can be patches merged onto the stable release branches as and when they become available.
3.2 Major Release Codenames
Each major release receives a codename that identifies the release in the Yocto Project Source Repositories. The concept is that branches of Metadata with the same codename are likely to be compatible and thus work together.
Note
Codenames are associated with major releases because a Yocto Project release number (e.g. 5.0.999) could conflict with a given layer or company versioning scheme. Codenames are unique, interesting, and easily identifiable.
Releases are given a nominal release version as well but the codename is used in repositories for this reason. You can find information on Yocto Project releases and codenames at https://wiki.yoctoproject.org/wiki/Releases.
Our Release Information pages detail how to migrate from one release of the Yocto Project to the next.
3.3 Stable Release Process
Once released, the release enters the stable release process at which time a person is assigned as the maintainer for that stable release. This maintainer monitors activity for the release by investigating and handling nominated patches and backport activity. Only fixes and enhancements that have first been applied on the “master” branch (i.e. the current, in-development branch) are considered for backporting to a stable release.
Note
The current Yocto Project policy regarding backporting is to consider bug fixes and security fixes only. Policy dictates that features are not backported to a stable release. This policy means generic recipe version upgrades are unlikely to be accepted for backporting. The exception to this policy occurs when there is a strong reason such as the fix happens to also be the preferred upstream approach.
3.4 Long Term Support Releases
While stable releases are supported for a duration of seven months, some specific ones are now supported for a longer period by the Yocto Project, and are called Long Term Support (LTS) releases.
When significant issues are found, LTS releases make it possible to publish fixes not only for the current stable release, but also for the LTS releases that are still supported. Older stable releases which have reached their End of Life (EOL) won’t receive such updates.
This started with version 3.1 (“Dunfell”), released in April 2020, which the project initially committed to supporting for two years, but this duration was later extended to four years. Similarly, the following LTS release, version 4.0 (“Kirkstone”), was released two years later in May 2022 and the project committed to supporting it for four years too.
Therefore, a new LTS release is made every two years and is supported for four years. This offers more stability to project users and leaves more time to upgrade to the following LTS release.
See https://wiki.yoctoproject.org/wiki/Stable_Release_and_LTS for details about the management of stable and LTS releases.
Note
In some circumstances, a layer can be created by the community in order to add a specific feature or support a new version of some package for an LTS release. This is called a Mixin layer. These are thin and specific purpose layers which can be stacked with an LTS release to “mix” a specific feature into that build. These are created on an as-needed basis and maintained by the people who need them.
Policies on testing these layers depend on how widespread their usage is and are determined on a case-by-case basis. You can find some Mixin layers in the meta-lts-mixins repository. While the Yocto Project provides hosting for those repositories, it does not provide testing on them. Other Mixin layers may be released elsewhere by the wider community.
3.5 Testing and Quality Assurance
Part of the Yocto Project development and release process is quality assurance through the execution of test strategies. Test strategies provide the Yocto Project team a way to ensure a release is validated. Additionally, because the test strategies are visible to you as a developer, you can validate your projects. This section overviews the available test infrastructure used in the Yocto Project. For information on how to run available tests on your projects, see the “Performing Automated Runtime Testing” section in the Yocto Project Development Tasks Manual.
The QA/testing infrastructure is woven into the project to the point where core developers take some of it for granted. The infrastructure consists of the following pieces:
bitbake-selftest: A standalone command that runs unit tests on key pieces of BitBake and its fetchers.
sanity: This automatically included class checks the build environment for missing tools (e.g. gcc) or common misconfigurations such as MACHINE set incorrectly.
insane: This class checks the generated output from builds for sanity. For example, if building for an ARM target, did the build produce ARM binaries? If, for example, the build produced PPC binaries instead, then there is a problem.
testimage: This class performs runtime testing of images after they are built. The tests are usually used with QEMU to boot the images and check that the combined runtime result boots and functions properly. However, the test can also use the IP address of a machine to test.
ptest: Runs tests against packages produced during the build for a given piece of software. The test allows the packages to be run within a target image.
oe-selftest: Tests combinations of BitBake invocations. These tests operate outside the OpenEmbedded build system itself. The oe-selftest command can run all tests by default or can run selected tests or test suites (see the example invocations after this list).
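For reference, here is roughly how the two selftest commands are invoked from an initialized build environment. The bbtests suite name passed to oe-selftest is just one example of a selectable test module:
$ bitbake-selftest
$ oe-selftest -r bbtests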
Originally, much of this testing was done manually. However, significant effort has been made to automate the tests so that more people can use them and the Yocto Project development team can run them faster and more efficiently.
The Yocto Project’s main Autobuilder (https://autobuilder.yoctoproject.org) publicly tests each Yocto Project release’s code in the openembedded-core, poky and bitbake repositories. The testing occurs for both the current state of the “master” branch and also for submitted patches. Testing for submitted patches usually occurs in the “master-next” branch in the poky repository.
Note
You can find all these branches in the Yocto Project Source Repositories.
Testing within these public branches ensures in a publicly visible way that all of the main supported architectures and recipes in OE-Core successfully build and behave properly.
Various features such as multilib, sub architectures (e.g. x32, poky-tiny, musl, no-x11 and so forth), bitbake-selftest, and oe-selftest are tested as part of the QA process of a release. Complete testing and validation for a release takes the Autobuilder workers several hours.
Note
The Autobuilder workers are non-homogeneous, which means regular testing across a variety of Linux distributions occurs. The Autobuilder is limited to only testing QEMU-based setups and not real hardware.
Finally, in addition to the Autobuilder’s tests, the Yocto Project QA team also performs testing on a variety of platforms, which includes actual hardware, to ensure expected results.
4 Source Directory Structure
The Source Directory consists of numerous files, directories and subdirectories; understanding their locations and contents is key to using the Yocto Project effectively. This chapter describes the Source Directory and gives information about those files and directories.
For information on how to establish a local Source Directory on your development system, see the “Locating Yocto Project Source Files” section in the Yocto Project Development Tasks Manual.
Note
The OpenEmbedded build system does not support file or directory names that contain spaces. Be sure that the Source Directory you use does not contain these types of names.
4.1 Top-Level Core Components
This section describes the top-level components of the Source Directory.
4.1.1 bitbake/
This directory includes a copy of BitBake for ease of use. The copy usually matches the current stable BitBake release from the BitBake project. BitBake, a Metadata interpreter, reads the Yocto Project Metadata and runs the tasks defined by that data. Failures are usually caused by errors in your Metadata and not from BitBake itself.
When you run the bitbake
command, the main BitBake executable (which
resides in the bitbake/bin/
directory) starts. Sourcing the
environment setup script (i.e. oe-init-build-env) places
the scripts/
and bitbake/bin/
directories (in that order) into
the shell’s PATH
environment variable.
For more information on BitBake, see the BitBake User Manual.
4.1.2 build/
This directory contains user configuration files and the output
generated by the OpenEmbedded build system in its standard configuration
where the source tree is combined with the output. The Build Directory
is created initially when you source
the OpenEmbedded build environment
setup script (i.e. oe-init-build-env).
It is also possible to place output and configuration files in a
directory separate from the Source Directory by
providing a directory name when you source
the setup script. For
information on separating output from your local Source Directory files
(commonly described as an “out of tree” build), see the
“oe-init-build-env” section.
See the “The Build Directory — build/” section for details about the contents of the Build Directory.
4.1.3 documentation/
This directory holds the source for the Yocto Project documentation as
well as templates and tools that allow you to generate PDF and HTML
versions of the manuals. Each manual is contained in its own sub-folder;
for example, the files for this reference manual reside in the
ref-manual/
directory.
4.1.4 meta/
This directory contains the minimal, underlying OpenEmbedded-Core
metadata. The directory holds recipes, common classes, and machine
configuration for strictly emulated targets (qemux86
, qemuarm
,
and so forth.)
4.1.5 meta-poky/
Designed above the meta/
content, this directory adds just enough
metadata to define the Poky reference distribution.
4.1.6 meta-yocto-bsp/
This directory contains the Yocto Project reference hardware Board Support Packages (BSPs). For more information on BSPs, see the Yocto Project Board Support Package Developer’s Guide.
4.1.7 meta-selftest/
This directory adds additional recipes and append files used by the
OpenEmbedded selftests to verify the behavior of the build system. You
do not have to add this layer to your bblayers.conf
file unless you
want to run the selftests.
4.1.8 meta-skeleton/
This directory contains template recipes for BSP and kernel development.
4.1.9 scripts/
This directory contains various integration scripts that implement extra
functionality in the Yocto Project environment (e.g. QEMU scripts). The
oe-init-build-env script prepends this directory to the
shell’s PATH
environment variable.
The scripts
directory has useful scripts that assist in contributing
back to the Yocto Project, such as create-pull-request
and
send-pull-request
.
4.1.10 oe-init-build-env
This script sets up the OpenEmbedded build environment. Running this
script with the source
command in a shell makes changes to PATH
and sets other core BitBake variables based on the current working
directory. You need to run an environment setup script before running
BitBake commands. The script uses other scripts within the scripts
directory to do the bulk of the work.
When you run this script, your Yocto Project environment is set up, a Build Directory is created, your working directory becomes the Build Directory, and you are presented with some simple suggestions as to what to do next, including a list of some possible targets to build. Here is an example:
$ source oe-init-build-env
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
core-image-minimal
core-image-sato
meta-toolchain
meta-ide-support
You can also run generated QEMU images with a command like 'runqemu qemux86-64'
The default output of the oe-init-build-env
script is from the
conf-summary.txt
and conf-notes.txt
files, which are found in the meta-poky
directory
within the Source Directory. If you design a
custom distribution, you can include your own versions of these
configuration files where you can provide a brief summary and detailed usage
notes, such as a list of the targets defined by your distribution.
See the
“Creating a Custom Template Configuration Directory”
section in the Yocto Project Development Tasks Manual for more
information.
By default, running this script without a Build Directory argument
creates the build/
directory in your current working directory. If
you provide a Build Directory argument when you source
the script,
you direct the OpenEmbedded build system to create a Build Directory of
your choice. For example, the following command creates a
Build Directory named mybuilds/
that is outside of the
Source Directory:
$ source oe-init-build-env ~/mybuilds
The OpenEmbedded build system uses the template configuration files, which
are found by default in the meta-poky/conf/templates/default
directory in the Source
Directory. See the
“Creating a Custom Template Configuration Directory”
section in the Yocto Project Development Tasks Manual for more
information.
Note
The OpenEmbedded build system does not support file or directory
names that contain spaces. If you attempt to run the oe-init-build-env
script from a Source Directory that contains spaces in either the
filenames or directory names, the script returns an error indicating
no such file or directory. Be sure to use a Source Directory free of
names containing spaces.
4.1.11 LICENSE, README, and README.hardware
These files are standard top-level files.
4.2 The Build Directory — build/
The OpenEmbedded build system creates the Build Directory when you run
the build environment setup script oe-init-build-env. If you do not
give the Build Directory a specific name when you run the setup script,
the name defaults to build/
.
For subsequent parsing and processing, the name of the Build Directory is available via the TOPDIR variable.
4.2.1 build/buildhistory/
The OpenEmbedded build system creates this directory when you enable build history via the buildhistory class file. The directory organizes build information into image, packages, and SDK subdirectories. For information on the build history feature, see the “Maintaining Build Output Quality” section in the Yocto Project Development Tasks Manual.
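As an illustration, enabling build history is typically a matter of adding something like the following to your conf/local.conf file; the BUILDHISTORY_COMMIT setting is optional and makes the class commit each build’s output to a local Git repository:
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"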
4.2.2 build/cache/
This directory contains several internal files used by the OpenEmbedded build system.
It also contains sanity_info
, a text file keeping track of important
build information such as the values of TMPDIR, SSTATE_DIR,
as well as the name and version of the host distribution.
4.2.3 build/conf/local.conf
This configuration file contains all the local user configurations for
your build environment. The local.conf
file contains documentation
on the various configuration options. Any variable set here overrides
any variable set elsewhere within the environment unless that variable
is hard-coded within a file (e.g. by using ‘=’ instead of ‘?=’). Some
variables are hard-coded for various reasons but such variables are
relatively rare.
At a minimum, you would normally edit this file to select the target MACHINE, which package types you wish to use (PACKAGE_CLASSES), and the location from which you want to access downloaded files (DL_DIR).
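For illustration, a minimal set of such edits in conf/local.conf might look like the following; the values shown are examples only and should be adjusted for your target and storage layout:
MACHINE ?= "qemux86-64"
PACKAGE_CLASSES ?= "package_rpm"
DL_DIR ?= "${TOPDIR}/downloads"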
If local.conf
is not present when you start the build, the
OpenEmbedded build system creates it from local.conf.sample
when you
source
the top-level build environment setup script
oe-init-build-env.
The source local.conf.sample
file used depends on the
TEMPLATECONF script variable, which defaults to meta-poky/conf/templates/default
when you are building from the Yocto Project development environment,
and to meta/conf/templates/default
when you are building from the OpenEmbedded-Core
environment. Because the script variable points to the source of the
local.conf.sample
file, this implies that you can configure your
build environment from any layer by setting the variable in the
top-level build environment setup script as follows:
TEMPLATECONF=your_layer/conf/templates/your_template_name
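Assuming a bash shell, one way to try out a custom template without editing any files is to set the variable in the environment when sourcing the setup script; the layer and template names here are placeholders:
$ TEMPLATECONF=meta-your-layer/conf/templates/your-template source poky/oe-init-build-env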
Once the build process gets the sample file, it uses sed to substitute final ${OEROOT} values for all ##OEROOT## values.
Note
You can see how the TEMPLATECONF variable is used by looking at the
scripts/oe-setup-builddir
script in the Source Directory.
You can find the Yocto Project version of the local.conf.sample
file in
the meta-poky/conf/templates/default
directory.
4.2.4 build/conf/bblayers.conf
This configuration file defines
layers,
which are directory trees, traversed (or walked) by BitBake. The
bblayers.conf
file uses the BBLAYERS
variable to list the layers BitBake tries to find.
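To give an idea of the shape of this file, the BBLAYERS assignment in a default Poky setup contains something along the following lines; the absolute paths naturally depend on where you cloned the layers:
BBLAYERS ?= " \
  /home/your_username/poky/meta \
  /home/your_username/poky/meta-poky \
  /home/your_username/poky/meta-yocto-bsp \
  "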
If bblayers.conf
is not present when you start the build, the
OpenEmbedded build system creates it from bblayers.conf.sample
when
you source
the top-level build environment setup script (i.e.
oe-init-build-env).
As with the local.conf
file, the source bblayers.conf.sample
file used depends on the TEMPLATECONF script variable, which
defaults to meta-poky/conf/templates/default
when you are building from the Yocto
Project development environment, and to meta/conf/templates/default
when you are
building from the OpenEmbedded-Core environment. Because the script
variable points to the source of the bblayers.conf.sample
file, this
implies that you can base your build from any layer by setting the
variable in the top-level build environment setup script as follows:
TEMPLATECONF=your_layer/conf/templates/your_template_name
Once the build process gets the sample file, it uses sed to substitute final ${OEROOT} values for all ##OEROOT## values.
Note
You can see how the TEMPLATECONF variable is defined by the scripts/oe-setup-builddir
script in the Source Directory. You can find the Yocto Project
version of the bblayers.conf.sample
file in the meta-poky/conf/templates/default
directory.
4.2.5 build/downloads/
This directory contains downloaded upstream source tarballs. You can reuse the directory for multiple builds or move the directory to another location. You can control the location of this directory through the DL_DIR variable.
4.2.6 build/sstate-cache/
This directory contains the shared state cache. You can reuse the directory for multiple builds or move the directory to another location. You can control the location of this directory through the SSTATE_DIR variable.
4.2.7 build/tmp/
The OpenEmbedded build system creates and uses this directory for all the build system’s output. The TMPDIR variable points to this directory.
BitBake creates this directory if it does not exist. As a last resort,
to clean up a build and start it from scratch (other than the
downloads), you can remove everything in the tmp
directory or get
rid of the directory completely. If you do, you should also completely
remove the build/sstate-cache
directory.
4.2.7.1 build/tmp/buildstats/
This directory stores the build statistics as generated by the buildstats class.
4.2.7.2 build/tmp/cache/
When BitBake parses the metadata (recipes and configuration files), it
caches the results in build/tmp/cache/
to speed up future builds.
The results are stored on a per-machine basis.
During subsequent builds, BitBake checks each recipe (together with, for example, any files included or appended to it) to see if they have been modified. Changes can be detected, for example, through file modification time (mtime) changes and hashing of file contents. If no changes to the file are detected, then the parsed result stored in the cache is reused. If the file has changed, it is reparsed.
4.2.7.3 build/tmp/deploy/
This directory contains any “end result” output from the OpenEmbedded
build process. The DEPLOY_DIR variable points
to this directory. For more detail on the contents of the deploy
directory, see the
“Images” and
“Application Development SDK” sections in the Yocto
Project Overview and Concepts Manual.
4.2.7.3.1 build/tmp/deploy/deb/
This directory receives any .deb
packages produced by the build
process. The packages are sorted into feeds for different architecture
types.
4.2.7.3.2 build/tmp/deploy/rpm/
This directory receives any .rpm
packages produced by the build
process. The packages are sorted into feeds for different architecture
types.
4.2.7.3.3 build/tmp/deploy/ipk/
This directory receives .ipk
packages produced by the build process.
4.2.7.3.4 build/tmp/deploy/licenses/
This directory receives package licensing information. For example, the
directory contains sub-directories for bash
, busybox
, and
glibc
(among others) that in turn contain appropriate COPYING
license files with other licensing information. For information on
licensing, see the
“Maintaining Open Source License Compliance During Your Product’s Lifecycle”
section in the Yocto Project Development Tasks Manual.
4.2.7.3.5 build/tmp/deploy/images/
This directory is populated with the basic output objects of the build (think of them as the “generated artifacts” of the build process), including things like the boot loader image, kernel, root filesystem and more. If you want to flash the resulting image from a build onto a device, look here for the necessary components.
Be careful when deleting files in this directory. You can safely delete
old images from this directory (e.g. core-image-*
). However, the
kernel (*zImage*
, *uImage*
, etc.), bootloader and other
supplementary files might be deployed here prior to building an image.
Because these files are not directly produced from the image, if you
delete them they will not be automatically re-created when you build the
image again.
If you do accidentally delete files here, you will need to force them to be re-created. In order to do that, you will need to know the target that produced them. For example, these commands rebuild and re-create the kernel files:
$ bitbake -c clean virtual/kernel
$ bitbake virtual/kernel
4.2.7.3.6 build/tmp/deploy/sdk/
The OpenEmbedded build system creates this directory to hold toolchain installer scripts which, when executed, install the sysroot that matches your target hardware. You can find out more about these installers in the “Building an SDK Installer” section in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
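For example, you would typically populate this directory by building the SDK installer for an image; core-image-minimal is used here purely as an example target:
$ bitbake core-image-minimal -c populate_sdk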
4.2.7.4 build/tmp/sstate-control/
The OpenEmbedded build system uses this directory for the shared state manifest files. The shared state code uses these files to record the files installed by each sstate task so that the files can be removed when cleaning the recipe or when a newer version is about to be installed. The build system also uses the manifests to detect and produce a warning when files from one task are overwriting those from another.
4.2.7.5 build/tmp/sysroots-components/
This directory is the location of the sysroot contents that the task
do_prepare_recipe_sysroot
links or copies into the recipe-specific sysroot for each recipe listed
in DEPENDS. Population of this directory is
handled through shared state, while the path is specified by the
COMPONENTS_DIR variable. Apart from a few
unusual circumstances, handling of the sysroots-components
directory
should be automatic, and recipes should not directly reference
build/tmp/sysroots-components
.
4.2.7.6 build/tmp/sysroots/
Previous versions of the OpenEmbedded build system used to create a
global shared sysroot per machine along with a native sysroot. Since
the 2.3 version of the Yocto Project, there are sysroots in
recipe-specific WORKDIR directories. Thus, the
build/tmp/sysroots/
directory is unused.
Note
The build/tmp/sysroots/
directory can still be populated using the
bitbake build-sysroots
command and can be used for compatibility in some
cases. However, in general it is not recommended to populate this directory.
Individual recipe-specific sysroots should be used.
4.2.7.7 build/tmp/stamps/
This directory holds information that BitBake uses for accounting purposes to track what tasks have run and when they have run. The directory is sub-divided by architecture, package name, and version. Here is an example:
stamps/all-poky-linux/distcc-config/1.0-r0.do_build-2fdd....2do
Although the files in the directory are empty of data, BitBake uses the filenames and timestamps for tracking purposes.
For information on how BitBake uses stamp files to determine if a task should be rerun, see the “Stamp Files and the Rerunning of Tasks” section in the Yocto Project Overview and Concepts Manual.
4.2.7.8 build/tmp/log/
This directory contains general logs that are not otherwise placed using
the package’s WORKDIR. Examples of logs are the output from the
do_check_pkg
or do_distro_check
tasks. Running a build does not
necessarily mean this directory is created.
4.2.7.9 build/tmp/work/
This directory contains architecture-specific work sub-directories for packages built by BitBake. All tasks execute from the appropriate work directory. For example, the source for a particular package is unpacked, patched, configured and compiled all within its own work directory. Within the work directory, organization is based on the package group and version for which the source is being compiled as defined by the WORKDIR.
It is worth considering the structure of a typical work directory. As an
example, consider linux-yocto-kernel-3.0
on the machine qemux86
built within the Yocto Project. For this package, a work directory of
tmp/work/qemux86-poky-linux/linux-yocto/3.0+git1+<.....>
, referred
to as the WORKDIR, is created. Within this directory, the source is
unpacked to linux-qemux86-standard-build
and then patched by Quilt.
(See the “Using Quilt in Your Workflow” section in
the Yocto Project Development Tasks Manual for more information.) Within
the linux-qemux86-standard-build
directory, standard Quilt
directories linux-3.0/patches
and linux-3.0/.pc
are created, and
standard Quilt commands can be used.
There are other directories generated within WORKDIR. The most
important directory is WORKDIR/temp/
, which has log files for each
task (log.do_*.pid
) and contains the scripts BitBake runs for each
task (run.do_*.pid
). The WORKDIR/image/
directory is where “make
install” places its output that is then split into sub-packages within
WORKDIR/packages-split/
.
4.2.7.9.1 build/tmp/work/tunearch/recipename/version/
The recipe work directory — ${WORKDIR}.
As described earlier in the
“build/tmp/sysroots/” section,
beginning with the 2.3 release of the Yocto Project, the OpenEmbedded
build system builds each recipe in its own work directory (i.e.
WORKDIR). The path to the work directory is
constructed using the architecture of the given build (e.g.
TUNE_PKGARCH, MACHINE_ARCH, or “allarch”), the recipe
name, and the version of the recipe (i.e. PE:PV-PR).
Here are key subdirectories within each recipe work directory:
${WORKDIR}/temp: Contains the log files of each task executed for this recipe, the “run” files for each executed task, which contain the code run, and a log.task_order file, which lists the order in which tasks were executed.
${WORKDIR}/image: Contains the output of the do_install task, which corresponds to the ${D} variable in that task.
${WORKDIR}/pseudo: Contains the pseudo database and log for any tasks executed under pseudo for the recipe.
${WORKDIR}/sysroot-destdir: Contains the output of the do_populate_sysroot task.
${WORKDIR}/package: Contains the output of the do_package task before the output is split into individual packages.
${WORKDIR}/packages-split: Contains the output of the do_package task after the output has been split into individual packages. There are subdirectories for each individual package created by the recipe.
${WORKDIR}/recipe-sysroot: A directory populated with the target dependencies of the recipe. This directory looks like the target filesystem and contains libraries that the recipe might need to link against (e.g. the C library).
${WORKDIR}/recipe-sysroot-native: A directory populated with the native dependencies of the recipe. This directory contains the tools the recipe needs to build (e.g. the compiler, Autoconf, libtool, and so forth).
${WORKDIR}/build: This subdirectory applies only to recipes that support builds where the source is separate from the build artifacts. The OpenEmbedded build system uses this directory as a separate build directory (i.e. ${B}).
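If you are unsure where the work directory for a particular recipe ended up on your system, one way to find out is to query BitBake’s environment for that recipe; busybox is just an example recipe name here:
$ bitbake -e busybox | grep "^WORKDIR="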
4.3 The Metadata — meta/
As mentioned previously, Metadata is the core of the Yocto Project. Metadata has several important subdivisions:
4.3.1 meta/classes*/
These directories contain the *.bbclass
files. Class files are used to
abstract common code so it can be reused by multiple packages. Every
package inherits the base file. Examples of other important
classes are autotools*, which in theory allows any
Autotool-enabled package to work with the Yocto Project with minimal
effort. Another example is kernel that contains common code
and functions for working with the Linux kernel. Functions like image
generation or packaging also have their specific class files such as
image, rootfs* and
package*.bbclass.
For reference information on classes, see the “Classes” chapter.
4.3.2 meta/conf/
This directory contains the core set of configuration files that start
from bitbake.conf
and from which all other configuration files are
included. See the include statements at the end of the bitbake.conf
file and you will note that even local.conf
is loaded from there.
While bitbake.conf sets up the defaults, you can often override
these by using the local.conf file, the machine file or the
distribution configuration file.
4.3.2.1 meta/conf/machine/
This directory contains all the machine configuration files. If you set
MACHINE = "qemux86"
, the OpenEmbedded build system looks for a
qemux86.conf
file in this directory. The include
directory
contains various data common to multiple machines. If you want to add
support for a new machine to the Yocto Project, look in this directory.
4.3.2.2 meta/conf/distro/
The contents of this directory controls any distribution-specific
configurations. For the Yocto Project, the defaultsetup.conf
is the
main file here. This directory includes the versions and the SRCDATE
definitions for applications that are configured here. An example of an
alternative configuration is poky-bleeding.conf, although this
file mainly inherits its configuration from Poky.
4.3.2.3 meta/conf/machine-sdk/
The OpenEmbedded build system searches this directory for configuration files that correspond to the value of SDKMACHINE. By default, 32-bit and 64-bit x86 files ship with the Yocto Project that support some SDK hosts. However, it is possible to extend that support to other SDK hosts by adding additional configuration files in this subdirectory within another layer.
4.3.3 meta/files/
This directory contains common license files and several text files used by the build system. The text files contain minimal device information and lists of files and directories with known permissions.
4.3.4 meta/lib/
This directory contains OpenEmbedded Python library code used during the
build process. It is enabled via the addpylib
directive in
meta/conf/local.conf
. For more information, see
Extending Python Library Code.
4.3.5 meta/recipes-bsp/
This directory contains anything linking to specific hardware or hardware configuration information such as “u-boot” and “grub”.
4.3.6 meta/recipes-connectivity/
This directory contains libraries and applications related to communication with other devices.
4.3.7 meta/recipes-core/
This directory contains what is needed to build a basic working Linux image including commonly used dependencies.
4.3.8 meta/recipes-devtools/
This directory contains tools that are primarily used by the build system. The tools, however, can also be used on targets.
4.3.9 meta/recipes-extended/
This directory contains non-essential applications that add features compared to the alternatives in core. You might need this directory for full tool functionality.
4.3.10 meta/recipes-gnome/
This directory contains all things related to the GTK+ application framework.
4.3.11 meta/recipes-graphics/
This directory contains X and other graphically related system libraries.
4.3.12 meta/recipes-kernel/
This directory contains the kernel and generic applications and libraries that have strong kernel dependencies.
4.3.13 meta/recipes-multimedia/
This directory contains codecs and support utilities for audio, images and video.
4.3.14 meta/recipes-rt/
This directory contains package and image recipes for using and testing
the PREEMPT_RT
kernel.
4.3.15 meta/recipes-sato/
This directory contains the Sato demo/reference UI/UX and its associated applications and configuration data.
4.3.16 meta/recipes-support/
This directory contains recipes used by other recipes, but that are not directly included in images (i.e. dependencies of other recipes).
4.3.17 meta/site/
This directory contains a list of cached results for various architectures. Because certain “autoconf” test results cannot be determined when cross-compiling due to the tests not able to run on a live system, the information in this directory is passed to “autoconf” for the various architectures.
4.3.18 meta/recipes.txt
This file is a description of the contents of recipes-*
.
5 Classes
Class files are used to abstract common functionality and share it
amongst multiple recipe (.bb
) files. To use a class file, you simply
make sure the recipe inherits the class. In most cases, when a recipe
inherits a class it is enough to enable its features. There are cases,
however, where in the recipe you might need to set variables or override
some default behavior.
Any Metadata usually found in a recipe can also be
placed in a class file. Class files are identified by the extension
.bbclass
and are usually placed in one of a set of subdirectories
beneath the meta*/
directory found in the Source Directory:
classes-recipe/
- classes intended to be inherited by recipes individually
classes-global/
- classes intended to be inherited globally
classes/
- classes whose usage context is not clearly defined
Class files can also be pointed to by
BUILDDIR (e.g. build/
) in the same way as
.conf
files in the conf
directory. Class files are searched for
in BBPATH using the same method by which .conf
files are searched.
This chapter discusses only the most useful and important classes. Other
classes do exist within the meta/classes*
directories in the Source
Directory. You can reference the .bbclass
files directly for more
information.
5.1 allarch
The allarch class is inherited by recipes that do not produce architecture-specific output. The class disables functionality that is normally needed for recipes that produce executable binaries (such as building the cross-compiler and a C library as pre-requisites, and splitting out of debug symbols during packaging).
Note
Unlike some distro recipes (e.g. Debian), OpenEmbedded recipes that produce packages that depend on tunings through use of the RDEPENDS and TUNE_PKGARCH variables, should never be configured for all architectures using allarch. This is the case even if the recipes do not produce architecture-specific output.
Configuring such recipes for all architectures causes the do_package_write_* tasks to have different signatures for the machines with different tunings. Additionally, unnecessary rebuilds occur every time an image for a different MACHINE is built even when the recipe never changes.
By default, all recipes inherit the base and package classes, which enable functionality needed for recipes that produce executable output. If your recipe, for example, only produces packages that contain configuration files, media files, or scripts (e.g. Python and Perl), then it should inherit the allarch class.
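For example, a recipe that only ships an architecture-independent configuration file can simply inherit the class. The fragment below is a minimal sketch with invented names (example.conf, the install path) rather than an actual OE-Core recipe:
SUMMARY = "Example configuration-only package"
SRC_URI = "file://example.conf"

inherit allarch

do_install() {
    install -d ${D}${sysconfdir}
    # Assumes the file:// source is unpacked into ${WORKDIR}
    install -m 0644 ${WORKDIR}/example.conf ${D}${sysconfdir}/example.conf
}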
5.2 archiver
The archiver class supports releasing source code and other materials with the binaries.
For more details on the source archiver, see the “Maintaining Open Source License Compliance During Your Product’s Lifecycle” section in the Yocto Project Development Tasks Manual. You can also see the ARCHIVER_MODE variable for information about the variable flags (varflags) that help control archive creation.
5.3 autotools*
The autotools* classes support packages built with the GNU Autotools.
The autoconf, automake, and libtool packages bring standardization. This class defines a set of tasks (e.g. configure, compile and so forth) that work for all Autotooled packages. It should usually be enough to define a few standard variables and then simply inherit autotools. These classes can also work with software that emulates Autotools. For more information, see the “Building an Autotooled Package” section in the Yocto Project Development Tasks Manual.
By default, the autotools* classes use out-of-tree builds (i.e. autotools.bbclass building with B != S).
If the software being built by a recipe does not support using out-of-tree builds, you should have the recipe inherit the autotools-brokensep class. The autotools-brokensep class behaves the same as the autotools* class but builds with B == S. This method is useful when out-of-tree build support is either not present or is broken.
Note
It is recommended that out-of-tree support be fixed and used if at all possible.
It’s useful to have some idea of how the tasks defined by the autotools* classes work and what they do behind the scenes.
do_configure — regenerates the configure script (using autoreconf) and then launches it with a standard set of arguments used during cross-compilation. You can pass additional parameters to configure through the EXTRA_OECONF or PACKAGECONFIG_CONFARGS variables.
do_compile — runs make with arguments that specify the compiler and linker. You can pass additional arguments through the EXTRA_OEMAKE variable.
do_install — runs make install and passes in ${D} as DESTDIR.
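As a sketch (the project name and configure option are invented for illustration), a typical Autotooled recipe only needs to point at its source archive, set a few variables and inherit the class:
SRC_URI = "https://example.com/releases/hello-${PV}.tar.gz"
# Extra arguments passed to the generated configure script
EXTRA_OECONF = "--disable-docs"

inherit autotools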
5.4 base
The base class is special in that every .bb file implicitly inherits the class. This class contains definitions for standard basic tasks such as fetching, unpacking, configuring (empty by default), compiling (runs any Makefile present), installing (empty by default) and packaging (empty by default). These tasks are often overridden or extended by other classes such as the autotools* class or the package class.
The class also contains some commonly used functions such as oe_runmake, which runs make with the arguments specified in the EXTRA_OEMAKE variable as well as the arguments passed directly to oe_runmake.
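For instance, a recipe that builds with a plain Makefile might call oe_runmake directly. This is a sketch; the PREFIX variable shown is a hypothetical option of the imagined Makefile:
EXTRA_OEMAKE = "'CC=${CC}' 'PREFIX=${prefix}'"

do_compile() {
    # Runs make with EXTRA_OEMAKE plus the arguments given here
    oe_runmake all
}

do_install() {
    oe_runmake install 'DESTDIR=${D}'
}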
5.5 bash-completion
Sets up packaging and dependencies appropriate for recipes that build software that includes bash-completion data.
5.6 bin_package
The bin_package class is a helper class for recipes that extract the contents of a binary package (e.g. an RPM) and install those contents rather than building the binary from source. The binary package is extracted and new packages in the configured output package format are created. Extraction and installation of proprietary binaries is a good example use for this class.
Note
For RPMs and other packages that do not contain a subdirectory, you should specify an appropriate fetcher parameter to point to the subdirectory. For example, if BitBake is using the Git fetcher (git://), the “subpath” parameter limits the checkout to a specific subpath of the tree. Here is an example where ${BP} is used so that the files are extracted into the subdirectory expected by the default value of S:
SRC_URI = "git://example.com/downloads/somepackage.rpm;branch=main;subpath=${BP}"
See the “Fetchers” section in the BitBake User Manual for more information on supported BitBake Fetchers.
5.7 binconfig
The binconfig class helps to correct paths in shell scripts.
Before pkg-config had become widespread, libraries shipped shell scripts to give information about the libraries and include paths needed to build software (usually named LIBNAME-config). This class assists any recipe using such scripts.
During staging, the OpenEmbedded build system installs such scripts into the sysroots/ directory. Inheriting this class results in all paths in these scripts being changed to point into the sysroots/ directory so that all builds that use the script use the correct directories for the cross compiling layout. See the BINCONFIG_GLOB variable for more information.
5.8 binconfig-disabled
An alternative version of the binconfig class, which disables binary configuration scripts by making them return an error in favor of using pkg-config to query the information. The scripts to be disabled should be specified using the BINCONFIG variable within the recipe inheriting the class.
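For example, a recipe shipping a legacy foo-config script (a hypothetical name) could disable it as follows:
inherit binconfig-disabled
BINCONFIG = "${bindir}/foo-config"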
5.9 buildhistory
The buildhistory class records a history of build output metadata, which can be used to detect possible regressions as well as used for analysis of the build output. For more information on using Build History, see the “Maintaining Build Output Quality” section in the Yocto Project Development Tasks Manual.
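One common way to enable it is from your local.conf file; this sketch also commits each build’s output to a local Git repository via BUILDHISTORY_COMMIT, which makes comparing builds easier:
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"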
5.10 buildstats
The buildstats class records performance statistics about each task executed during the build (e.g. elapsed time, CPU usage, and I/O usage).
When you use this class, the output goes into the BUILDSTATS_BASE directory, which defaults to ${TMPDIR}/buildstats/. You can analyze the elapsed time using scripts/pybootchartgui/pybootchartgui.py, which produces a cascading chart of the entire build process and can be useful for highlighting bottlenecks.
Collecting build statistics is enabled by default through the USER_CLASSES variable from your local.conf file. Consequently, you do not have to do anything to enable the class. However, if you want to disable the class, simply remove “buildstats” from the USER_CLASSES list.
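For example, one way to do this in your local.conf file is:
USER_CLASSES:remove = "buildstats"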
5.11 buildstats-summary
When inherited globally, prints statistics at the end of the build on sstate re-use. In order to function, this class requires the buildstats class be enabled.
5.12 cargo
The cargo class allows you to compile Rust language programs using Cargo. Cargo is Rust’s package manager, which fetches package dependencies and builds your program.
Using this class makes it very easy to build Rust programs. All you need is to use the SRC_URI variable to point to a source repository which can be built by Cargo, typically one that was created by the cargo new command, containing a Cargo.toml file, a Cargo.lock file and a src subdirectory.
If you want to build and package tests of the program, inherit the ptest-cargo class instead of cargo.
You will find an example (that also shows how to handle possible git source dependencies) in the zvariant_3.12.0.bb recipe. Another example, with only crate dependencies, is the uutils-coreutils recipe, which was generated by the cargo-bitbake tool.
This class inherits the cargo_common class.
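A minimal sketch of such a recipe (the repository URL and revision are invented placeholders) might look like this:
SUMMARY = "Example Rust program built with Cargo"
SRC_URI = "git://example.com/hello-rust;protocol=https;branch=main"
# Placeholder revision for illustration only
SRCREV = "0123456789abcdef0123456789abcdef01234567"
S = "${WORKDIR}/git"

inherit cargo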
5.13 cargo_c
The cargo_c class can be inherited by a recipe to generate a Rust library that can be called by C/C++ code. The recipe which inherits this class only has to replace inherit cargo by inherit cargo_c.
See the rust-c-lib-example_git.bb example recipe.
5.14 cargo_common
The cargo_common class is an internal class that is not intended to be used directly.
An exception is the “rust” recipe, to build the Rust compiler and runtime library, which is built by Cargo but cannot use the cargo class. This is why this class was introduced.
5.15 cargo-update-recipe-crates
The cargo-update-recipe-crates class allows recipe developers to update the list of Cargo crates in SRC_URI by reading the Cargo.lock file in the source tree.
To do so, create a recipe for your program, for example using devtool, make it inherit the cargo and cargo-update-recipe-crates classes, and run:
bitbake -c update_crates recipe
This creates a recipe-crates.inc file that you can include in your recipe:
require ${BPN}-crates.inc
That’s also something you can achieve by using the cargo-bitbake tool.
5.16 ccache
The ccache class enables the C/C++ Compiler Cache for the build. This class is used to give a minor performance boost during the build.
See https://ccache.samba.org/ for information on the C/C++ Compiler Cache, and the ccache.bbclass file for details about how to enable this mechanism in your configuration file, how to disable it for specific recipes, and how to share ccache files between builds.
However, using the class can lead to unexpected side-effects. Thus, using this class is not recommended.
5.17 chrpath
The chrpath class is a wrapper around the “chrpath” utility, which is used during the build process for nativesdk, cross, and cross-canadian recipes to change RPATH records within binaries in order to make them relocatable.
5.18 cmake
The cmake class allows recipes to build software using the CMake build system. You can use the EXTRA_OECMAKE variable to specify additional configuration options to pass to the cmake command line.
By default, the cmake class uses Ninja instead of GNU make for building, which offers better build performance. If a recipe is broken with Ninja, then the recipe can set the OECMAKE_GENERATOR variable to Unix Makefiles to use GNU make instead.
If you need to install custom CMake toolchain files supplied by the application being built, you should install them (during do_install) to the preferred CMake Module directory: ${D}${datadir}/cmake/modules/.
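As a sketch (the project option name is invented), a CMake-based recipe usually just inherits the class and passes any project-specific options:
inherit cmake

# Options passed on the cmake command line
EXTRA_OECMAKE = "-DENABLE_TESTS=OFF"

# Uncomment to fall back to GNU make if the project does not build with Ninja
#OECMAKE_GENERATOR = "Unix Makefiles"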
5.19 cmake-qemu
The cmake-qemu class might be used instead of the cmake class. In addition to the features provided by the cmake class, the cmake-qemu class passes the CMAKE_CROSSCOMPILING_EMULATOR setting to cmake. This allows the use of QEMU user-mode emulation for the execution of cross-compiled binaries on the host machine. For more information about CMAKE_CROSSCOMPILING_EMULATOR please refer to the related section of the CMake documentation.
Not all platforms are supported by QEMU. This class only works for machines with qemu-usermode in the Machine Features. Using QEMU user-mode therefore involves a certain risk, which is also the reason why this feature is not part of the main cmake class by default.
One use case is the execution of cross-compiled unit tests with CTest on the build machine. If CMAKE_CROSSCOMPILING_EMULATOR is configured:
cmake --build --target test
works transparently with QEMU user-mode. If the CMake project is developed with this use case in mind this works very nicely. This also applies to an IDE configured to use cmake-native for cross-compiling.
5.20 cml1
The cml1 class provides basic support for the Linux kernel style build configuration system. “cml” stands for “Configuration Menu Language”, which originates from the Linux kernel but is also used in other projects such as U-Boot and BusyBox. It could have been called “kconfig” too.
5.21 compress_doc
Enables compression for manual and info pages. This class is intended to be inherited globally. The default compression mechanism is gz (gzip) but you can select an alternative mechanism by setting the DOC_COMPRESS variable.
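For example, in a distro configuration or local.conf file you might enable the class and select a different compressor (assuming xz is among the mechanisms supported by DOC_COMPRESS):
INHERIT += "compress_doc"
DOC_COMPRESS = "xz"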
5.22 copyleft_compliance
The copyleft_compliance class preserves source code for the purposes of license compliance. This class is an alternative to the archiver class and is still used by some users even though it has been deprecated in favor of the archiver class.
5.23 copyleft_filter
A class used by the archiver and copyleft_compliance classes for filtering licenses. The copyleft_filter class is an internal class and is not intended to be used directly.
5.24 core-image
The core-image class provides common definitions for the core-image-* image recipes, such as support for additional IMAGE_FEATURES.
5.25 cpan*
The cpan* classes support Perl modules.
Recipes for Perl modules are simple. These recipes usually only need to point to the source’s archive and then inherit the proper class file. Building is split into two methods depending on which method the module authors used.
Modules that use the old Makefile.PL-based build system require cpan.bbclass in their recipes.
Modules that use the Build.PL-based build system require using cpan_build.bbclass in their recipes.
Both build methods inherit the cpan-base class for basic Perl support.
5.26 create-spdx
The create-spdx class provides support for automatically creating SPDX SBOM documents based upon image and SDK contents.
This class is meant to be inherited globally from a configuration file:
INHERIT += "create-spdx"
The toplevel SPDX output file is generated in JSON format as an IMAGE-MACHINE.spdx.json file in tmp/deploy/images/MACHINE/ inside the Build Directory. There are other related files in the same directory, as well as in tmp/deploy/spdx.
The exact behaviour of this class, and the amount of output can be controlled by the SPDX_PRETTY, SPDX_ARCHIVE_PACKAGED, SPDX_ARCHIVE_SOURCES and SPDX_INCLUDE_SOURCES variables.
See the description of these variables and the “Creating a Software Bill of Materials” section in the Yocto Project Development Tasks Manual for more details.
5.27 cross
The cross class provides support for the recipes that build the cross-compilation tools.
5.28 cross-canadian
The cross-canadian class provides support for the recipes that build the Canadian Cross-compilation tools for SDKs. See the “Cross-Development Toolchain Generation” section in the Yocto Project Overview and Concepts Manual for more discussion on these cross-compilation tools.
5.29 crosssdk
The crosssdk class provides support for the recipes that build the cross-compilation tools used for building SDKs. See the “Cross-Development Toolchain Generation” section in the Yocto Project Overview and Concepts Manual for more discussion on these cross-compilation tools.
5.30 cve-check
The cve-check class looks for known CVEs (Common Vulnerabilities and Exposures) while building with BitBake. This class is meant to be inherited globally from a configuration file:
INHERIT += "cve-check"
To filter out obsolete CVE database entries which are known not to impact software from Poky and OE-Core, add the following line to the build configuration file:
include cve-extra-exclusions.inc
You can also look for vulnerabilities in specific packages by passing -c cve_check to BitBake.
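For example (using a placeholder recipe name, as in the earlier examples):
bitbake -c cve_check recipe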
After building the software with BitBake, CVE check output reports are available in tmp/deploy/cve and image specific summaries in tmp/deploy/images/*.cve or tmp/deploy/images/*.json files.
When building, the CVE checker will emit build time warnings for any detected issues which are in the state Unpatched, meaning that the CVE issue seems to affect the software component and version being compiled and no patches to address the issue are applied. Other states for detected CVE issues are: Patched, meaning that a patch to address the issue is already applied, and Ignored, meaning that the issue can be ignored.
The Patched state of a CVE issue is detected from patch files with the format CVE-ID.patch, e.g. CVE-2019-20633.patch, in the SRC_URI and using CVE metadata of format CVE: CVE-ID in the commit message of the patch file.
If the recipe adds CVE-ID as a flag of the CVE_STATUS variable with status mapped to Ignored, then the CVE state is reported as Ignored:
CVE_STATUS[CVE-2020-15523] = "not-applicable-platform: Issue only applies on Windows"
If CVE check reports that a recipe contains false positives or false negatives, these may be fixed in recipes by adjusting the CVE product name using CVE_PRODUCT and CVE_VERSION variables. CVE_PRODUCT defaults to the plain recipe name BPN which can be adjusted to one or more CVE database vendor and product pairs using the syntax:
CVE_PRODUCT = "flex_project:flex"
where flex_project is the CVE database vendor name and flex is the product name. Similarly, if the default recipe version PV does not match the version numbers of the software component in upstream releases or the CVE database, then the CVE_VERSION variable can be used to set the CVE database compatible version number, for example:
CVE_VERSION = "2.39"
Any bugs or missing or incomplete information in the CVE database entries should be fixed in the CVE database via the NVD feedback form.
Users should note that security is a process, not a product, and thus also CVE checking, analyzing results, patching and updating the software should be done as a regular process. The data and assumptions required for CVE checker to reliably detect issues are frequently broken in various ways. These can only be detected by reviewing the details of the issues and iterating over the generated reports, and following what happens in other Linux distributions and in the greater open source community.
You will find some more details in the “Checking for Vulnerabilities” section in the Development Tasks Manual.
5.31 debian
The debian class renames output packages so that they follow the Debian naming policy (i.e. glibc becomes libc6 and glibc-devel becomes libc6-dev). Renaming includes the library name and version as part of the package name.
If a recipe creates packages for multiple libraries (shared object files of .so type), use the LEAD_SONAME variable in the recipe to specify the library on which to apply the naming scheme.
5.32 deploy
The deploy class handles deploying files to the DEPLOY_DIR_IMAGE directory. The main function of this class is to allow the deploy step to be accelerated by shared state. Recipes that inherit this class should define their own do_deploy function to copy the files to be deployed to DEPLOYDIR, and use addtask to add the task at the appropriate place, which is usually after do_compile or do_install. The class then takes care of staging the files from DEPLOYDIR to DEPLOY_DIR_IMAGE.
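A minimal sketch of a recipe fragment using the class (the firmware file name is invented for illustration):
inherit deploy

do_deploy() {
    install -d ${DEPLOYDIR}
    # Copy the artifact produced by do_compile into the deploy area
    install -m 0644 ${B}/firmware.bin ${DEPLOYDIR}/firmware-${MACHINE}.bin
}
addtask deploy after do_compile before do_build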
5.33 devicetree
The devicetree class allows building a recipe that compiles device tree source files that are not in the kernel tree.
The compilation of out-of-tree device tree sources is the same as the kernel in-tree device tree compilation process. This includes the ability to include sources from the kernel such as SoC dtsi files as well as C header files, such as gpio.h.
The do_compile task will compile two kinds of files:
Regular device tree sources with a .dts extension.
Device tree overlays, detected from the presence of the /plugin/; string in the file contents.
This class deploys the generated device tree binaries into ${DEPLOY_DIR_IMAGE}/devicetree/. This is similar to what the kernel-devicetree class does, with the added devicetree subdirectory to avoid name clashes. Additionally, the device trees are populated into the sysroot for access via the sysroot from within other recipes.
By default, all device tree sources located in the DT_FILES_PATH directory are compiled. To select only particular sources, set DT_FILES to a space-separated list of files (relative to DT_FILES_PATH). For convenience, both .dts and .dtb extensions can be used.
Extra padding is appended to non-overlay device tree binaries. This can typically be used as extra space for adding extra properties at boot time. The padding size can be modified by setting DT_PADDING_SIZE to the desired size, in bytes.
See devicetree.bbclass sources for further variables controlling this class.
Here is an excerpt of an example recipes-kernel/linux/devicetree-acme.bb recipe inheriting this class:
inherit devicetree
COMPATIBLE_MACHINE = "^mymachine$"
SRC_URI:mymachine = "file://mymachine.dts"
5.34 devshell
The devshell class adds the do_devshell task. Distribution policy dictates whether to include this class. See the “Using a Development Shell” section in the Yocto Project Development Tasks Manual for more information about using devshell.
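Once the class is enabled, you can start a development shell for a given recipe with (using a placeholder recipe name):
bitbake -c devshell recipe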
5.35 devupstream
The devupstream class uses BBCLASSEXTEND to add a variant of the recipe that fetches from an alternative URI (e.g. Git) instead of a tarball. Here is an example:
BBCLASSEXTEND = "devupstream:target"
SRC_URI:class-devupstream = "git://git.example.com/example;branch=main"
SRCREV:class-devupstream = "abcd1234"
Adding the above statements to your recipe creates a variant that has DEFAULT_PREFERENCE set to “-1”. Consequently, you need to select the variant of the recipe to use it. Any development-specific adjustments can be done by using the class-devupstream override. Here is an example:
DEPENDS:append:class-devupstream = " gperf-native"
do_configure:prepend:class-devupstream() {
    touch ${S}/README
}
The class currently only supports creating a development variant of the target recipe, not native or nativesdk variants.
The BBCLASSEXTEND syntax (i.e. devupstream:target) provides support for native and nativesdk variants. Consequently, this functionality can be added in a future release.
Support for other version control systems such as Subversion is limited due to BitBake’s automatic fetch dependencies (e.g. subversion-native).
5.36 externalsrc
The externalsrc class supports building software from source code that is external to the OpenEmbedded build system. Building software from an external source tree means that the build system’s normal fetch, unpack, and patch process is not used.
By default, the OpenEmbedded build system uses the S and B variables to locate unpacked recipe source code and to build it, respectively. When your recipe inherits the externalsrc class, you use the EXTERNALSRC and EXTERNALSRC_BUILD variables to ultimately define S and B.
By default, this class expects the source code to support recipe builds that use the B variable to point to the directory in which the OpenEmbedded build system places the generated objects built from the recipes. By default, the B directory is set to the following, which is separate from the source directory (S):
${WORKDIR}/${BPN}-${PV}/
See these variables for more information: WORKDIR, BPN, and PV.
For more information on the externalsrc class, see the comments in meta/classes/externalsrc.bbclass in the Source Directory.
For information on how to use the externalsrc class, see the “Building Software from an External Source” section in the Yocto Project Development Tasks Manual.
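As a sketch, you can point an individual recipe at an external source tree from your local.conf file (the recipe name and paths here are hypothetical):
INHERIT += "externalsrc"
EXTERNALSRC:pn-myrecipe = "/path/to/my/source/tree"
EXTERNALSRC_BUILD:pn-myrecipe = "/path/to/my/source/tree"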
5.37 extrausers
The extrausers class allows additional user and group configuration to be applied at the image level. Inheriting this class either globally or from an image recipe allows additional user and group operations to be performed using the EXTRA_USERS_PARAMS variable.
Note
The user and group operations added using the extrausers class are not tied to a specific recipe outside of the recipe for the image. Thus, the operations can be performed across the image as a whole. Use the useradd* class to add user and group configuration to a specific recipe.
Here is an example that uses this class in an image recipe:
inherit extrausers
EXTRA_USERS_PARAMS = "\
useradd -p '' tester; \
groupadd developers; \
userdel nobody; \
groupdel -g video; \
groupmod -g 1020 developers; \
usermod -s /bin/sh tester; \
"
Here is an example that adds two users named “tester-jim” and “tester-sue” and assigns passwords. First on host, create the (escaped) password hash:
printf "%q" $(mkpasswd -m sha256crypt tester01)
The resulting hash is set to a variable and used in useradd command parameters:
inherit extrausers
PASSWD = "\$X\$ABC123\$A-Long-Hash"
EXTRA_USERS_PARAMS = "\
useradd -p '${PASSWD}' tester-jim; \
useradd -p '${PASSWD}' tester-sue; \
"
Finally, here is an example that sets the root password:
inherit extrausers
EXTRA_USERS_PARAMS = "\
usermod -p '${PASSWD}' root; \
"
Note
From a security perspective, hardcoding a default password is not generally a good idea or even legal in some jurisdictions. It is recommended that you do not do this if you are building a production image.
5.38 features_check
The features_check class allows individual recipes to check for required and conflicting DISTRO_FEATURES, MACHINE_FEATURES or COMBINED_FEATURES.
This class provides support for the following variables:
REQUIRED_DISTRO_FEATURES
CONFLICT_DISTRO_FEATURES
ANY_OF_DISTRO_FEATURES
REQUIRED_MACHINE_FEATURES
CONFLICT_MACHINE_FEATURES
ANY_OF_MACHINE_FEATURES
REQUIRED_COMBINED_FEATURES
CONFLICT_COMBINED_FEATURES
ANY_OF_COMBINED_FEATURES
If any conditions specified in the recipe using the above variables are not met, the recipe will be skipped, and if the build system attempts to build the recipe then an error will be triggered.
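For example, a recipe that only makes sense when the distribution provides OpenGL support might contain (the feature name is given purely for illustration):
inherit features_check
REQUIRED_DISTRO_FEATURES = "opengl"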
5.39 fontcache
The fontcache class generates the proper post-install and post-remove (postinst and postrm) scriptlets for font packages. These scriptlets call fc-cache (part of Fontconfig) to add the fonts to the font information cache. Since the cache files are architecture-specific, fc-cache runs using QEMU if the postinst scriptlets need to be run on the build host during image creation.
If the fonts being installed are in packages other than the main package, set FONT_PACKAGES to specify the packages containing the fonts.
5.40 fs-uuid
The fs-uuid class extracts the UUID from ${ROOTFS}, which must have been built by the time that this function gets called. The fs-uuid class only works on ext file systems and depends on tune2fs.
5.41 gconf
The gconf class provides common functionality for recipes that need to install GConf schemas. The schemas will be put into a separate package (${PN}-gconf) that is created automatically when this class is inherited. This package uses the appropriate post-install and post-remove (postinst/postrm) scriptlets to register and unregister the schemas in the target image.
5.42 gettext
The gettext class provides support for building software that uses the GNU gettext internationalization and localization system. All recipes building software that use gettext should inherit this class.
5.43 github-releases
For recipes that fetch release tarballs from GitHub, the github-releases class sets up a standard way for checking available upstream versions (to support devtool upgrade and the Automated Upgrade Helper (AUH)).
To use it, add “github-releases” to the inherit line in the recipe, and if the default value of GITHUB_BASE_URI is not suitable, then set your own value in the recipe. You should then use ${GITHUB_BASE_URI} in the value you set for SRC_URI within the recipe.
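A sketch of how this typically looks in a recipe (the project and archive names are invented):
inherit github-releases
GITHUB_BASE_URI = "https://github.com/example/project/releases"
SRC_URI = "${GITHUB_BASE_URI}/download/v${PV}/project-${PV}.tar.gz"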
5.44 gnomebase
The gnomebase class is the base class for recipes that build software from the GNOME stack. This class sets SRC_URI to download the source from the GNOME mirrors as well as extending FILES with the typical GNOME installation paths.
5.45 go
The go class supports building Go programs. The behavior of this class is controlled by the mandatory GO_IMPORT variable, and by the optional GO_INSTALL and GO_INSTALL_FILTEROUT ones.
To build a Go program with the Yocto Project, you can use the go-helloworld_0.1.bb recipe as an example.
5.46 go-mod
The go-mod class allows the use of Go modules, and inherits the go class.
See the associated GO_WORKDIR variable.
5.47 go-vendor
The go-vendor class implements support for offline builds, also known as Go vendoring. In such a scenario, the module dependencies are downloaded during the do_fetch task rather than when modules are imported, thus being coherent with Yocto’s concept of fetching every source beforehand.
The dependencies are unpacked into the modules’ vendor directory, where a manifest file is generated.
5.48 gobject-introspection
Provides support for recipes building software that supports GObject introspection. This functionality is only enabled if the “gobject-introspection-data” feature is in DISTRO_FEATURES as well as “qemu-usermode” being in MACHINE_FEATURES.
Note
This functionality is backfilled by default and, if not applicable, should be disabled through DISTRO_FEATURES_BACKFILL_CONSIDERED or MACHINE_FEATURES_BACKFILL_CONSIDERED, respectively.
5.49 grub-efi
The grub-efi class provides grub-efi-specific functions for building bootable images.
This class supports several variables:
INITRD: Indicates list of filesystem images to concatenate and use as an initial RAM disk (initrd) (optional).
ROOTFS: Indicates a filesystem image to include as the root filesystem (optional).
GRUB_GFXSERIAL: Set this to “1” to have graphics and serial in the boot menu.
LABELS: A list of targets for the automatic configuration.
APPEND: An override list of append strings for each LABEL.
GRUB_OPTS: Additional options to add to the configuration (optional). Options are delimited using semi-colon characters (;).
GRUB_TIMEOUT: Timeout before executing the default LABEL (optional).
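A configuration sketch using some of these variables (the values shown are purely illustrative and machine-dependent):
GRUB_GFXSERIAL = "1"
GRUB_TIMEOUT = "10"
LABELS = "boot install"
APPEND = "console=ttyS0,115200"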
5.50 gsettings
The gsettings class provides common functionality for recipes that need to install GSettings (glib) schemas. The schemas are assumed to be part of the main package. Appropriate post-install and post-remove (postinst/postrm) scriptlets are added to register and unregister the schemas in the target image.
5.51 gtk-doc
The gtk-doc class is a helper class to pull in the appropriate gtk-doc dependencies and disable gtk-doc.
5.52 gtk-icon-cache
The gtk-icon-cache class generates the proper post-install and post-remove (postinst/postrm) scriptlets for packages that use GTK+ and install icons. These scriptlets call gtk-update-icon-cache to add the icons to GTK+’s icon cache. Since the cache files are architecture-specific, gtk-update-icon-cache is run using QEMU if the postinst scriptlets need to be run on the build host during image creation.
5.53 gtk-immodules-cache
The gtk-immodules-cache class generates the proper post-install and post-remove (postinst/postrm) scriptlets for packages that install GTK+ input method modules for virtual keyboards. These scriptlets call gtk-update-icon-cache to add the input method modules to the cache. Since the cache files are architecture-specific, gtk-update-icon-cache is run using QEMU if the postinst scriptlets need to be run on the build host during image creation.
If the input method modules being installed are in packages other than the main package, set GTKIMMODULES_PACKAGES to specify the packages containing the modules.
5.54 gzipnative
The gzipnative class enables the use of different native versions of gzip and pigz rather than the versions of these tools from the build host.
5.55 icecc
The icecc class supports Icecream, which facilitates taking compile jobs and distributing them among remote machines.
The class stages directories with symlinks from gcc and g++ to icecc, for both native and cross compilers. Depending on each configure or compile, the OpenEmbedded build system adds the directories at the head of the PATH list and then sets the ICECC_CXX and ICECC_CC variables, which are the paths to the g++ and gcc compilers, respectively.
For the cross compiler, the class creates a tar.gz file that contains the Yocto Project toolchain and sets ICECC_VERSION, which is the version of the cross-compiler used in the cross-development toolchain, accordingly.
The class handles all three different compile stages (i.e. native, cross-kernel and target) and creates the necessary environment tar.gz file to be used by the remote machines. The class also supports SDK generation.
If ICECC_PATH is not set in your local.conf file, then the class tries to locate the icecc binary using which. If ICECC_ENV_EXEC is set in your local.conf file, the variable should point to the icecc-create-env script provided by the user. If you do not point to a user-provided script, the build system uses the default script provided by the recipe icecc-create-env_0.1.bb.
Note
This script is a modified version and not the one that comes with icecream.
If you do not want the Icecream distributed compile support to apply to specific recipes or classes, you can ask them to be ignored by Icecream by listing the recipes and classes using the ICECC_RECIPE_DISABLE and ICECC_CLASS_DISABLE variables, respectively, in your local.conf file. Doing so causes the OpenEmbedded build system to handle these compilations locally.
Additionally, you can list recipes using the ICECC_RECIPE_ENABLE variable in your local.conf file to force icecc to be enabled for recipes using an empty PARALLEL_MAKE variable.
Inheriting the icecc class changes all sstate signatures. Consequently, if a development team has a dedicated build system that populates SSTATE_MIRRORS and they want to reuse sstate from SSTATE_MIRRORS, then all developers and the build system need to either inherit the icecc class or nobody should.
At the distribution level, you can inherit the icecc class to be sure that all builders start with the same sstate signatures. After inheriting the class, you can then disable the feature by setting the ICECC_DISABLED variable to “1” as follows:
INHERIT_DISTRO:append = " icecc"
ICECC_DISABLED ??= "1"
This practice makes sure everyone is using the same signatures but also requires individuals that do want to use Icecream to enable the feature individually as follows in your local.conf file:
ICECC_DISABLED = ""
5.56 image
The image class helps support creating images in different formats. First, the root filesystem is created from packages using one of the rootfs*.bbclass files (depending on the package format used) and then one or more image files are created.
The IMAGE_FSTYPES variable controls the types of images to generate.
The IMAGE_INSTALL variable controls the list of packages to install into the image.
For information on customizing images, see the “Customizing Images” section in the Yocto Project Development Tasks Manual. For information on how images are created, see the “Images” section in the Yocto Project Overview and Concepts Manual.
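For example, in your local.conf file you might select additional image types and packages through these variables (the values are for illustration):
IMAGE_FSTYPES += "tar.bz2 ext4"
IMAGE_INSTALL:append = " dropbear"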
5.57 image-buildinfo
The image-buildinfo class writes a plain text file containing build information to the target filesystem at ${sysconfdir}/buildinfo by default (as specified by IMAGE_BUILDINFO_FILE). This can be useful for manually determining the origin of any given image. It writes out two sections:
Build Configuration: a list of variables and their values (specified by IMAGE_BUILDINFO_VARS, which defaults to DISTRO and DISTRO_VERSION)
Layer Revisions: the revisions of all of the layers used in the build.
Additionally, when building an SDK it will write the same contents to /buildinfo by default (as specified by SDK_BUILDINFO_FILE).
5.58 image_types
The image_types class defines all of the standard image output types that you can enable through the IMAGE_FSTYPES variable. You can use this class as a reference on how to add support for custom image output types.
By default, the image class automatically enables the image_types class. The image class uses the IMGCLASSES variable as follows:
IMGCLASSES = "rootfs_${IMAGE_PKGTYPE} image_types ${IMAGE_CLASSES}"
# Only Linux SDKs support populate_sdk_ext, fall back to populate_sdk_base
# in the non-Linux SDK_OS case, such as mingw32
inherit populate_sdk_base
IMGCLASSES += "${@['', 'populate_sdk_ext']['linux' in d.getVar("SDK_OS")]}"
IMGCLASSES += "${@bb.utils.contains_any('IMAGE_FSTYPES', 'live iso hddimg', 'image-live', '', d)}"
IMGCLASSES += "${@bb.utils.contains('IMAGE_FSTYPES', 'container', 'image-container', '', d)}"
IMGCLASSES += "image_types_wic"
IMGCLASSES += "rootfs-postcommands"
IMGCLASSES += "image-postinst-intercepts"
IMGCLASSES += "overlayfs-etc"
inherit_defer ${IMGCLASSES}
The image_types class also handles conversion and compression of images.
Note
To build a VMware VMDK image, you need to add “wic.vmdk” to IMAGE_FSTYPES. This would also be similar for Virtual Box Virtual Disk Image (“vdi”) and QEMU Copy On Write Version 2 (“qcow2”) images.
5.59 image-live
This class controls building “live” (i.e. HDDIMG and ISO) images. Live images contain syslinux for legacy booting, as well as the bootloader specified by EFI_PROVIDER if MACHINE_FEATURES contains “efi”.
Normally, you do not use this class directly. Instead, you add “live” to IMAGE_FSTYPES.
5.60 insane
The insane class adds a step to the package generation process so that output quality assurance checks are generated by the OpenEmbedded build system. A range of checks are performed that check the build’s output for common problems that show up during runtime. Distribution policy usually dictates whether to include this class.
You can configure the sanity checks so that specific test failures either raise a warning or an error message. Typically, failures for new tests generate a warning. Subsequent failures for the same test would then generate an error message once the metadata is in a known and good condition. See the “QA Error and Warning Messages” Chapter for a list of all the warning and error messages you might encounter using a default configuration.
Use the WARN_QA and ERROR_QA variables to control the behavior of these checks at the global level (i.e. in your custom distro configuration). However, to skip one or more checks in recipes, you should use INSANE_SKIP. For example, to skip the check for symbolic link .so files in the main package of a recipe, add the following to the recipe. You need to realize that the package name override, in this example ${PN}, must be used:
INSANE_SKIP:${PN} += "dev-so"
Please keep in mind that the QA checks are meant to detect real or potential problems in the packaged output. So exercise caution when disabling these checks.
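For example, at the distro configuration level you could demote a particular check from an error to a warning (the check name is chosen purely for illustration):
ERROR_QA:remove = "patch-fuzz"
WARN_QA:append = " patch-fuzz"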
The tests you can list with the WARN_QA and ERROR_QA variables are:
already-stripped:
Checks that produced binaries have not already been stripped prior to the build system extracting debug symbols. It is common for upstream software projects to default to stripping debug symbols for output binaries. In order for debugging to work on the target using-dbg
packages, this stripping must be disabled.arch:
Checks the Executable and Linkable Format (ELF) type, bit size, and endianness of any binaries to ensure they match the target architecture. This test fails if any binaries do not match the type since there would be an incompatibility. The test could indicate that the wrong compiler or compiler options have been used. Sometimes software, like bootloaders, might need to bypass this check.buildpaths:
Checks for paths to locations on the build host inside the output files. Not only can these leak information about the build environment, they also hinder binary reproducibility.build-deps:
Determines if a build-time dependency that is specified through DEPENDS, explicit RDEPENDS, or task-level dependencies exists to match any runtime dependency. This determination is particularly useful to discover where runtime dependencies are detected and added during packaging. If no explicit dependency has been specified within the metadata, at the packaging stage it is too late to ensure that the dependency is built, and thus you can end up with an error when the package is installed into the image during the do_rootfs task because the auto-detected dependency was not satisfied. An example of this would be where the update-rc.d class automatically adds a dependency on theinitscripts-functions
package to packages that install an initscript that refers to/etc/init.d/functions
. The recipe should really have an explicit RDEPENDS for the package in question oninitscripts-functions
so that the OpenEmbedded build system is able to ensure that theinitscripts
recipe is actually built and thus theinitscripts-functions
package is made available.configure-gettext:
Checks that if a recipe is building something that uses automake and the automake files contain anAM_GNU_GETTEXT
directive, that the recipe also inherits the gettext class to ensure that gettext is available during the build.compile-host-path:
Checks the do_compile log for indications that paths to locations on the build host were used. Using such paths might result in host contamination of the build output.cve_status_not_in_db:
Checks for each component if CVEs that are ignored via CVE_STATUS, that those are (still) reported for this component in the NIST database. If not, a warning is printed. This check is disabled by default.debug-deps:
Checks that all packages except-dbg
packages do not depend on-dbg
packages, which would cause a packaging bug.debug-files:
Checks for.debug
directories in anything but the-dbg
package. The debug files should all be in the-dbg
package. Thus, anything packaged elsewhere is incorrect packaging.dep-cmp:
Checks for invalid version comparison statements in runtime dependency relationships between packages (i.e. in RDEPENDS, RRECOMMENDS, RSUGGESTS, RPROVIDES, RREPLACES, and RCONFLICTS variable values). Any invalid comparisons might trigger failures or undesirable behavior when passed to the package manager.desktop:
Runs thedesktop-file-validate
program against any.desktop
files to validate their contents against the specification for.desktop
files.dev-deps:
Checks that all packages except-dev
or-staticdev
packages do not depend on-dev
packages, which would be a packaging bug.dev-so:
Checks that the.so
symbolic links are in the-dev
package and not in any of the other packages. In general, these symlinks are only useful for development purposes. Thus, the-dev
package is the correct location for them. In very rare cases, such as dynamically loaded modules, these symlinks are needed instead in the main package.empty-dirs:
Checks that packages are not installing files to directories that are normally expected to be empty (such as/tmp
) The list of directories that are checked is specified by the QA_EMPTY_DIRS variable.file-rdeps:
Checks that file-level dependencies identified by the OpenEmbedded build system at packaging time are satisfied. For example, a shell script might start with the line#!/bin/bash
. This line would translate to a file dependency on/bin/bash
. Of the three package managers that the OpenEmbedded build system supports, only RPM directly handles file-level dependencies, resolving them automatically to packages providing the files. However, the lack of that functionality in the other two package managers does not mean the dependencies do not still need resolving. This QA check attempts to ensure that explicitly declared RDEPENDS exist to handle any file-level dependency detected in packaged files.files-invalid:
Checks for FILES variable values that contain “//”, which is invalid.host-user-contaminated:
Checks that no package produced by the recipe contains any files outside of/home
with a user or group ID that matches the user running BitBake. A match usually indicates that the files are being installed with an incorrect UID/GID, since target IDs are independent from host IDs. For additional information, see the section describing the do_install task.incompatible-license:
Report when packages are excluded from being created due to being marked with a license that is in INCOMPATIBLE_LICENSE.install-host-path:
Checks the do_install log for indications that paths to locations on the build host were used. Using such paths might result in host contamination of the build output.installed-vs-shipped:
Reports when files have been installed within do_install but have not been included in any package by way of the FILES variable. Files that do not appear in any package cannot be present in an image later on in the build process. Ideally, all installed files should be packaged or not installed at all. These files can be deleted at the end of do_install if the files are not needed in any package.invalid-chars:
Checks that the recipe metadata variables DESCRIPTION, SUMMARY, LICENSE, and SECTION do not contain non-UTF-8 characters. Some package managers do not support such characters.invalid-packageconfig:
Checks that no undefined features are being added to PACKAGECONFIG. For example, any name “foo” for which the following form does not exist:PACKAGECONFIG[foo] = "..."
la:
Checks.la
files for any TMPDIR paths. Any.la
file containing these paths is incorrect sincelibtool
adds the correct sysroot prefix when using the files automatically itself.ldflags:
Ensures that the binaries were linked with the LDFLAGS options provided by the build system. If this test fails, check that the LDFLAGS variable is being passed to the linker command.libdir:
Checks for libraries being installed into incorrect (possibly hardcoded) installation paths. For example, this test will catch recipes that install/lib/bar.so
when${base_libdir}
is “lib32”. Another example is when recipes install/usr/lib64/foo.so
when${libdir}
is “/usr/lib”.libexec:
Checks if a package contains files in/usr/libexec
. This check is not performed if thelibexecdir
variable has been set explicitly to/usr/libexec
.mime:
Check that if a package contains mime type files (.xml
files in${datadir}/mime/packages
) that the recipe also inherits the mime class in order to ensure that these get properly installed.mime-xdg:
Checks that if a package contains a .desktop file with a ‘MimeType’ key present, that the recipe inherits the mime-xdg class that is required in order for that to be activated.missing-update-alternatives:
Check that if a recipe sets the ALTERNATIVE variable that the recipe also inherits update-alternatives such that the alternative will be correctly set up.packages-list:
Checks for the same package being listed multiple times through the PACKAGES variable value. Installing the package in this manner can cause errors during packaging.patch-fuzz:
Checks for fuzz in patch files that may allow them to apply incorrectly if the underlying code changes.patch-status-core:
Checks that the Upstream-Status is specified and valid in the headers of patches for recipes in the OE-Core layer.patch-status-noncore:
Checks that the Upstream-Status is specified and valid in the headers of patches for recipes in layers other than OE-Core.perllocalpod:
Checks forperllocal.pod
being erroneously installed and packaged by a recipe.perm-config:
Reports lines infs-perms.txt
that have an invalid format.perm-line:
Reports lines infs-perms.txt
that have an invalid format.perm-link:
Reports lines infs-perms.txt
that specify ‘link’ where the specified target already exists.perms:
Currently, this check is unused but reserved.pkgconfig:
Checks.pc
files for any TMPDIR/WORKDIR paths. Any.pc
file containing these paths is incorrect sincepkg-config
itself adds the correct sysroot prefix when the files are accessed.pkgname:
Checks that all packages in PACKAGES have names that do not contain invalid characters (i.e. characters other than 0-9, a-z, ., +, and -).pkgv-undefined:
Checks to see if the PKGV variable is undefined during do_package.pkgvarcheck:
Checks through the variables RDEPENDS, RRECOMMENDS, RSUGGESTS, RCONFLICTS, RPROVIDES, RREPLACES, FILES, ALLOW_EMPTY,pkg_preinst
,pkg_postinst
,pkg_prerm
andpkg_postrm
, and reports if there are variable sets that are not package-specific. Using these variables without a package suffix is bad practice, and might unnecessarily complicate dependencies of other packages within the same recipe or have other unintended consequences.pn-overrides:
Checks that a recipe does not have a name (PN) value that appears in OVERRIDES. If a recipe is named such that its PN value matches something already in OVERRIDES (e.g. PN happens to be the same as MACHINE or DISTRO), it can have unexpected consequences. For example, assignments such asFILES:${PN} = "xyz"
effectively turn intoFILES = "xyz"
.rpaths:
Checks for rpaths in the binaries that contain build system paths such as TMPDIR. If this test fails, bad-rpath
options are being passed to the linker commands and your binaries have potential security issues.shebang-size:
Check that the shebang line (#!
in the first line) in a packaged script is not longer than 128 characters, which can cause an error at runtime depending on the operating system.split-strip:
Reports that splitting or stripping debug symbols from binaries has failed.staticdev:
Checks for static library files (*.a
) in non-staticdev
packages.src-uri-bad:
Checks that the SRC_URI value set by a recipe does not contain a reference to${PN}
(instead of the correct${BPN}
) nor refers to unstable Github archive tarballs.symlink-to-sysroot:
Checks for symlinks in packages that point into TMPDIR on the host. Such symlinks will work on the host, but are clearly invalid when running on the target.textrel:
Checks for ELF binaries that contain relocations in their.text
sections, which can result in a performance impact at runtime. See the explanation for theELF binary
message in “QA Error and Warning Messages” for more information regarding runtime performance issues.unhandled-features-check:
check that if one of the variables that the features_check class supports (e.g. REQUIRED_DISTRO_FEATURES) is set by a recipe, then the recipe also inherits features_check in order for the requirement to actually work.unimplemented-ptest:
Checks that ptests are implemented for upstream tests.unlisted-pkg-lics:
Checks that all declared licenses applying for a package are also declared on the recipe level (i.e. any license inLICENSE:*
should appear in LICENSE).useless-rpaths:
Checks for dynamic library load paths (rpaths) in the binaries that by default on a standard system are searched by the linker (e.g./lib
and/usr/lib
). While these paths will not cause any breakage, they do waste space and are unnecessary.usrmerge:
Ifusrmerge
is in DISTRO_FEATURES, this check will ensure that no package installs files to root (/bin
,/sbin
,/lib
,/lib64
) directories.var-undefined:
Reports when variables fundamental to packaging (i.e. WORKDIR, DEPLOY_DIR, D, PN, and PKGD) are undefined during do_package.version-going-backwards:
If the buildhistory class is enabled, reports when a package being written out has a lower version than the previously written package under the same name. If you are placing output packages into a feed and upgrading packages on a target system using that feed, the version of a package going backwards can result in the target system not correctly upgrading to the “new” version of the package.Note
This is only relevant when you are using runtime package management on your target system.
xorg-driver-abi:
Checks that all packages containing Xorg drivers have ABI dependencies. Thexserver-xorg
recipe provides driver ABI names. All drivers should depend on the ABI versions that they have been built against. Driver recipes that includexorg-driver-input.inc
orxorg-driver-video.inc
will automatically get these versions. Consequently, you should only need to explicitly add dependencies to binary driver recipes.
5.61 kernel
The kernel class handles building Linux kernels. The class contains code to build all kernel trees. All needed headers are staged into the STAGING_KERNEL_DIR directory to allow out-of-tree module builds using the module class.
If a file named defconfig is listed in SRC_URI, then by default do_configure copies it as .config in the build directory, so it is automatically used as the kernel configuration for the build. This copy is not performed in case .config already exists there: this allows recipes to produce a configuration by other means in do_configure:prepend.
Each built kernel module is packaged separately and inter-module dependencies are created by parsing the modinfo output. If all modules are required, then installing the kernel-modules package installs all packages with modules and various other kernel packages such as kernel-vmlinux.
The kernel class contains logic that allows you to embed an initial RAM filesystem (Initramfs) image when you build the kernel image. For information on how to build an Initramfs, see the “Building an Initial RAM Filesystem (Initramfs) Image” section in the Yocto Project Development Tasks Manual.
Various other classes are used by the kernel and module classes internally including the kernel-arch, module-base, and linux-kernel-base classes.
5.62 kernel-arch
The kernel-arch class sets the ARCH environment variable for Linux kernel compilation (including modules).
5.63 kernel-devicetree
The kernel-devicetree class, which is inherited by the kernel class, supports device tree generation.
Its behavior is mainly controlled by the following variables:
KERNEL_DEVICETREE_BUNDLE: whether to bundle the kernel and device tree
KERNEL_DTBDEST: directory where to install DTB files
KERNEL_DTBVENDORED: whether to keep vendor subdirectories
KERNEL_DTC_FLAGS: flags for dtc, the Device Tree Compiler
KERNEL_PACKAGE_NAME: base name of the kernel packages
5.64 kernel-fitimage
The kernel-fitimage class provides support to pack a kernel image, device trees, a U-Boot script, an Initramfs bundle and a RAM disk into a single FIT image. In theory, a FIT image can support any number of kernels, U-Boot scripts, Initramfs bundles, RAM disks and device trees. However, kernel-fitimage currently only supports limited use cases: just one kernel image, an optional U-Boot script, an optional Initramfs bundle, an optional RAM disk, and any number of device trees.
To create a FIT image, it is required that KERNEL_CLASSES is set to include “kernel-fitimage” and one of KERNEL_IMAGETYPE, KERNEL_ALT_IMAGETYPE or KERNEL_IMAGETYPES to include “fitImage”.
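A configuration sketch enabling FIT image generation; the load addresses are placeholders that depend entirely on your machine:
KERNEL_CLASSES += "kernel-fitimage"
KERNEL_IMAGETYPE = "fitImage"
UBOOT_LOADADDRESS = "0x80080000"
UBOOT_ENTRYPOINT = "0x80080000"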
The options for the device tree compiler passed to mkimage -D when creating the FIT image are specified using the UBOOT_MKIMAGE_DTCOPTS variable.
Only a single kernel can be added to the FIT image created by kernel-fitimage and the kernel image in FIT is mandatory. The address where the kernel image is to be loaded by U-Boot is specified by UBOOT_LOADADDRESS and the entrypoint by UBOOT_ENTRYPOINT. Setting FIT_ADDRESS_CELLS to “2” is necessary if such addresses are 64 bit ones.
Multiple device trees can be added to the FIT image created by kernel-fitimage and the device tree is optional. The address where the device tree is to be loaded by U-Boot is specified by UBOOT_DTBO_LOADADDRESS for device tree overlays and by UBOOT_DTB_LOADADDRESS for device tree binaries.
Only a single RAM disk can be added to the FIT image created by kernel-fitimage and the RAM disk in FIT is optional. The address where the RAM disk image is to be loaded by U-Boot is specified by UBOOT_RD_LOADADDRESS and the entrypoint by UBOOT_RD_ENTRYPOINT. The ramdisk is added to the FIT image when INITRAMFS_IMAGE is specified and requires that INITRAMFS_IMAGE_BUNDLE is not set to 1.
Only a single Initramfs bundle can be added to the FIT image created by kernel-fitimage and the Initramfs bundle in FIT is optional. In case of Initramfs, the kernel is configured to be bundled with the root filesystem in the same binary (example: zImage-initramfs-MACHINE.bin). When the kernel is copied to RAM and executed, it unpacks the Initramfs root filesystem. The Initramfs bundle can be enabled when INITRAMFS_IMAGE is specified and requires that INITRAMFS_IMAGE_BUNDLE is set to 1. The address where the Initramfs bundle is to be loaded by U-boot is specified by UBOOT_LOADADDRESS and the entrypoint by UBOOT_ENTRYPOINT.
Only a single U-Boot boot script can be added to the FIT image created by kernel-fitimage and the boot script is optional. The boot script is specified in the ITS file as a text file containing U-Boot commands. When using a boot script, the user should configure the U-Boot do_install task to copy the script to the sysroot, so that the script can be included in the FIT image by the kernel-fitimage class. At run time, the U-Boot CONFIG_BOOTCOMMAND define can be configured to load the boot script from the FIT image and execute it.
The FIT image generated by the kernel-fitimage class is signed when the variables UBOOT_SIGN_ENABLE, UBOOT_MKIMAGE_DTCOPTS, UBOOT_SIGN_KEYDIR and UBOOT_SIGN_KEYNAME are set appropriately. The default values used for FIT_HASH_ALG and FIT_SIGN_ALG in kernel-fitimage are “sha256” and “rsa2048” respectively. The keys for signing the FIT image can be generated using the kernel-fitimage class when both FIT_GENERATE_KEYS and UBOOT_SIGN_ENABLE are set to “1”.
5.65 kernel-grub
The kernel-grub class updates the boot area and the boot menu with the kernel as the priority boot mechanism while installing an RPM to update the kernel on a deployed target.
5.66 kernel-module-split
The kernel-module-split class provides common functionality for splitting Linux kernel modules into separate packages.
5.67 kernel-uboot
The kernel-uboot class provides support for building from vmlinux-style kernel sources.
5.68 kernel-uimage
The kernel-uimage class provides support to pack uImage.
5.69 kernel-yocto
The kernel-yocto class provides common functionality for building from linux-yocto style kernel source repositories.
5.70 kernelsrc
The kernelsrc class sets the Linux kernel source and version.
5.71 lib_package
The lib_package class supports recipes that build libraries and produce executable binaries, where those binaries should not be installed by default along with the library. Instead, the binaries are added to a separate ${PN}-bin package to make their installation optional.
5.72 libc*
The libc* classes support recipes that build packages with libc:
The libc-common class provides common support for building with libc.
The libc-package class supports packaging up glibc and eglibc.
5.73 license
The license class provides license manifest creation and license exclusion. This class is enabled by default using the default value for the INHERIT_DISTRO variable.
5.74 linux-kernel-base
The linux-kernel-base class provides common functionality for recipes that build out of the Linux kernel source tree. These builds go beyond the kernel itself. For example, the Perf recipe also inherits this class.
5.75 linuxloader
Provides the function linuxloader(), which gives the value of the dynamic loader/linker provided on the platform. This value is used by a number of other classes.
5.76 logging
The logging class provides the standard shell functions used to log messages for various BitBake severity levels (i.e. bbplain, bbnote, bbwarn, bberror, bbfatal, and bbdebug).
This class is enabled by default since it is inherited by the base class.
5.77 meson
The meson class allows creating recipes that build software using the Meson build system. You can use the MESON_BUILDTYPE, MESON_TARGET and EXTRA_OEMESON variables to specify additional configuration options to be passed on the meson command line.
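For instance, a recipe inheriting the meson class could pass extra Meson options like the following; the option name is only an illustration, since valid options depend on the software being built:
inherit meson
EXTRA_OEMESON += "-Dexamples=false"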
5.78 metadata_scm
The metadata_scm class provides functionality for querying the branch and revision of a Source Code Manager (SCM) repository.
The base class uses this class to print the revisions of each layer before starting every build. The metadata_scm class is enabled by default because it is inherited by the base class.
5.79 migrate_localcount
The migrate_localcount class verifies a recipe’s localcount data and increments it appropriately.
5.80 mime
The mime class generates the proper post-install and post-remove
(postinst/postrm) scriptlets for packages that install MIME type files.
These scriptlets call update-mime-database
to add the MIME types to
the shared database.
5.81 mime-xdg
The mime-xdg class generates the proper
post-install and post-remove (postinst/postrm) scriptlets for packages
that install .desktop
files containing MimeType
entries.
These scriptlets call update-desktop-database
to add the MIME types
to the database of MIME types handled by desktop files.
Thanks to this class, when users open a file through a file browser on recently created images, they don’t have to choose the application to open the file from the pool of all known applications, even the ones that cannot open the selected file.
If you have recipes installing their .desktop
files as absolute
symbolic links, the detection of such files cannot be done by the current
implementation of this class. In this case, you have to add the corresponding
package names to the MIME_XDG_PACKAGES variable.
5.82 mirrors
The mirrors class sets up some standard MIRRORS entries for source code mirrors. These mirrors provide a fall-back path in case the upstream source specified in SRC_URI within recipes is unavailable.
This class is enabled by default since it is inherited by the base class.
5.83 module
The module class provides support for building out-of-tree Linux kernel modules. The class inherits the module-base and kernel-module-split classes, and implements the do_compile and do_install tasks. The class provides everything needed to build and package a kernel module.
For general information on out-of-tree Linux kernel modules, see the “Incorporating Out-of-Tree Modules” section in the Yocto Project Linux Kernel Development Manual.
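Here is a minimal sketch of what such a recipe can look like, modeled on the hello-mod example shipped with the build system; the file names, license checksum, and module name are illustrative:
SUMMARY = "Example of how to build an out-of-tree Linux kernel module"
LICENSE = "GPL-2.0-only"
LIC_FILES_CHKSUM = "file://COPYING;md5=<checksum>"
inherit module
SRC_URI = "file://Makefile \
           file://hello.c \
           file://COPYING \
          "
S = "${WORKDIR}"
RPROVIDES:${PN} += "kernel-module-hello"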
5.84 module-base
The module-base class provides the base functionality for building Linux kernel modules. Typically, a recipe that builds software that includes one or more kernel modules and has its own means of building the module inherits this class as opposed to inheriting the module class.
5.85 multilib*
The multilib* classes provide support for building libraries with different target optimizations or target architectures and installing them side-by-side in the same image.
For more information on using the Multilib feature, see the “Combining Multiple Versions of Library Files into One Image” section in the Yocto Project Development Tasks Manual.
5.86 native
The native class provides common functionality for recipes that build tools to run on the Build Host (i.e. tools that use the compiler or other tools from the build host).
You can create a recipe that builds tools that run natively on the host a couple different ways:
Create a myrecipe-native.bb recipe that inherits the native class. If you use this method, you must order the inherit statement in the recipe after all other inherit statements so that the native class is inherited last.
Note
When creating a recipe this way, the recipe name must follow this naming convention:
myrecipe-native.bb
Not using this naming convention can lead to subtle problems caused by existing code that depends on that naming convention.
Create or modify a target recipe that contains the following:
BBCLASSEXTEND = "native"
Inside the recipe, use :class-native and :class-target overrides to specify any functionality specific to the respective native or target case.
Although applied differently, the native class is used with both methods. The advantage of the second method is that you do not need to have two separate recipes (assuming you need both) for native and target. All common parts of the recipe are automatically shared.
5.87 nativesdk
The nativesdk class provides common functionality for recipes that wish to build tools to run as part of an SDK (i.e. tools that run on SDKMACHINE).
You can create a recipe that builds tools that run on the SDK machine a couple different ways:
Create a nativesdk-myrecipe.bb recipe that inherits the nativesdk class. If you use this method, you must order the inherit statement in the recipe after all other inherit statements so that the nativesdk class is inherited last.
Create a nativesdk variant of any recipe by adding the following:
BBCLASSEXTEND = "nativesdk"
Inside the recipe, use :class-nativesdk and :class-target overrides to specify any functionality specific to the respective SDK machine or target case.
Note
When creating a recipe, you must follow this naming convention:
nativesdk-myrecipe.bb
Not doing so can lead to subtle problems because there is code that depends on the naming convention.
Although applied differently, the nativesdk class is used with both methods. The advantage of the second method is that you do not need to have two separate recipes (assuming you need both) for the SDK machine and the target. All common parts of the recipe are automatically shared.
5.88 nopackages
Disables packaging tasks for those recipes and classes where packaging is not needed.
5.89 npm
Provides support for building Node.js software fetched using the node package manager (NPM).
Note
Currently, recipes inheriting this class must use the npm://
fetcher to have dependencies fetched and packaged automatically.
For information on how to create NPM packages, see the “Creating Node Package Manager (NPM) Packages” section in the Yocto Project Development Tasks Manual.
5.90 oelint
The oelint class is an obsolete lint checking tool available in
meta/classes
in the Source Directory.
There are some classes that could be generally useful in OE-Core but are never actually used within OE-Core itself. The oelint class is one such example. However, being aware of this class can reduce the proliferation of different versions of similar classes across multiple layers.
5.91 overlayfs
It’s often desired in embedded system design to have a read-only root filesystem.
But a lot of different applications might want to have read-write access to
some parts of a filesystem. It can be especially useful when your update mechanism
overwrites the whole root filesystem, but you may want your application data to be preserved
between updates. The overlayfs class provides a way
to achieve that by means of overlayfs
and at the same time keeping the base
root filesystem read-only.
To use this class, set a mount point for a partition overlayfs
is going to use as upper
layer in your machine configuration. The underlying file system can be anything that
is supported by overlayfs
. This has to be done in your machine configuration:
OVERLAYFS_MOUNT_POINT[data] = "/data"
Note
QA checks fail to catch file existence if you redefine this variable in your recipe!
Only the existence of the systemd mount unit file is checked, not its contents.
To get more details on overlayfs, its internals and supported operations, please refer to the official documentation of the Linux kernel.
The class assumes you have a data.mount
systemd unit defined elsewhere in your BSP
(e.g. in systemd-machine-units
recipe) and it’s installed into the image.
Then you can specify writable directories on a recipe basis (e.g. in my-application.bb):
OVERLAYFS_WRITABLE_PATHS[data] = "/usr/share/my-custom-application"
To support several mount points you can use a different variable flag. Assuming we
want to have a writable location on the file system, but do not need that the data
survives a reboot, then we could have a mnt-overlay.mount
unit for a tmpfs
file system.
In your machine configuration:
OVERLAYFS_MOUNT_POINT[mnt-overlay] = "/mnt/overlay"
and then in your recipe:
OVERLAYFS_WRITABLE_PATHS[mnt-overlay] = "/usr/share/another-application"
On a practical note, your application recipe might require multiple overlays to be mounted before running to avoid writing to the underlying file system (which can be forbidden in the case of a read-only file system). To achieve that, the overlayfs class provides a systemd helper service for mounting overlays. This helper service is named ${PN}-overlays.service and can be depended on in your application recipe’s systemd unit (named application in the following example) by adding the following to the unit:
[Unit]
After=application-overlays.service
Requires=application-overlays.service
Note
The class does not support the /etc
directory itself, because systemd
depends on it.
In order to get /etc
in overlayfs, see overlayfs-etc.
5.92 overlayfs-etc
In order to have the /etc
directory in overlayfs a special handling at early
boot stage is required. The idea is to supply a custom init script that mounts
/etc
before launching the actual init program, because the latter already
requires /etc
to be mounted.
Example usage in image recipe:
IMAGE_FEATURES += "overlayfs-etc"
Note
This class must not be inherited directly. Use IMAGE_FEATURES or EXTRA_IMAGE_FEATURES instead.
Your machine configuration should define at least the device, mount point, and file system type
you are going to use for overlayfs
:
OVERLAYFS_ETC_MOUNT_POINT = "/data"
OVERLAYFS_ETC_DEVICE = "/dev/mmcblk0p2"
OVERLAYFS_ETC_FSTYPE ?= "ext4"
To control the mount options, you should consider setting OVERLAYFS_ETC_MOUNT_OPTIONS (defaults is used by default):
OVERLAYFS_ETC_MOUNT_OPTIONS = "wsync"
The class provides two options for /sbin/init generation:
The default option is to rename the original /sbin/init to /sbin/init.orig and place the generated init under the original name, i.e. /sbin/init. This has the advantage that you won’t need to change any kernel parameters in order to make it work, but it poses the restriction that package management can’t be used, because updating the init manager would remove the generated script.
If you wish to keep the original init as is, you can set:
OVERLAYFS_ETC_USE_ORIG_INIT_NAME = "0"
Then the generated init will be named /sbin/preinit and you would need to extend your kernel parameters manually in your bootloader configuration.
5.93 own-mirrors
The own-mirrors class makes it easier to set up your own PREMIRRORS from which to first fetch source before attempting to fetch it from the upstream specified in SRC_URI within each recipe.
To use this class, inherit it globally and specify SOURCE_MIRROR_URL. Here is an example:
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "http://example.com/my-source-mirror"
You can specify only a single URL in SOURCE_MIRROR_URL.
5.94 package
The package class supports generating packages from a build’s
output. The core generic functionality is in package.bbclass
. The
code specific to particular package types resides in these
package-specific classes: package_deb,
package_rpm, package_ipk.
You can control the list of resulting package formats by using the
PACKAGE_CLASSES variable defined in your conf/local.conf
configuration file, which is located in the Build Directory.
When defining the variable, you can specify one or more package types.
Since images are generated from packages, a packaging class is needed
to enable image generation. The first class listed in this variable is
used for image generation.
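For example, the following local.conf setting produces both RPM and IPK packages, with RPM (the first entry) being used for image generation:
PACKAGE_CLASSES ?= "package_rpm package_ipk"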
If you take the optional step to set up a repository (package feed) on the development host that can be used by DNF, you can install packages from the feed while you are running the image on the target (i.e. runtime installation of packages). For more information, see the “Using Runtime Package Management” section in the Yocto Project Development Tasks Manual.
The package-specific class you choose can affect build-time performance and has space ramifications. In general, building a package with IPK takes about thirty percent less time compared to using RPM to build the same or similar package. This comparison takes into account a complete build of the package with all dependencies previously built. The reason for this discrepancy is that the RPM package manager creates and processes more Metadata than the IPK package manager. Consequently, you might consider setting PACKAGE_CLASSES to “package_ipk” if you are building smaller systems.
Before making your package manager decision, however, you should consider some further things about using RPM:
RPM provides more capabilities than IPK because it processes more Metadata. For example, this information includes individual file types, file checksum generation and evaluation on install, sparse file support, conflict detection and resolution for Multilib systems, ACID style upgrade, and repackaging abilities for rollbacks.
For smaller systems, the extra space used for the Berkeley Database and the amount of metadata when using RPM can affect your ability to perform on-device upgrades.
You can find additional information on the effects of the package class at these two Yocto Project mailing list links:
5.95 package_deb
The package_deb class provides support for creating packages that use the Debian (i.e. .deb) file format. The class ensures the packages are written out in a .deb file format to the ${DEPLOY_DIR_DEB} directory.
This class inherits the package class and is enabled through the PACKAGE_CLASSES variable in the local.conf file.
5.96 package_ipk
The package_ipk class provides support for creating packages that use the IPK (i.e. .ipk) file format. The class ensures the packages are written out in a .ipk file format to the ${DEPLOY_DIR_IPK} directory.
This class inherits the package class and is enabled through the PACKAGE_CLASSES variable in the local.conf file.
5.97 package_rpm
The package_rpm class provides support for creating packages that use the RPM (i.e. .rpm) file format. The class ensures the packages are written out in a .rpm file format to the ${DEPLOY_DIR_RPM} directory.
This class inherits the package class and is enabled through the PACKAGE_CLASSES variable in the local.conf file.
5.98 packagedata
The packagedata class provides common functionality for reading
pkgdata
files found in PKGDATA_DIR. These
files contain information about each output package produced by the
OpenEmbedded build system.
This class is enabled by default because it is inherited by the package class.
5.99 packagegroup
The packagegroup class sets default values appropriate for package group recipes (e.g. PACKAGES, PACKAGE_ARCH, ALLOW_EMPTY, and so forth). It is highly recommended that all package group recipes inherit this class.
For information on how to use this class, see the “Customizing Images Using Custom Package Groups” section in the Yocto Project Development Tasks Manual.
Previously, this class was called the task
class.
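A minimal package group recipe might look like the following sketch; the group and package names are illustrative:
SUMMARY = "Custom package group for my image"
inherit packagegroup
PACKAGES = "${PN}-base ${PN}-debug"
RDEPENDS:${PN}-base = "openssh curl"
RDEPENDS:${PN}-debug = "gdb strace"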
5.100 patch
The patch class provides all functionality for applying patches during the do_patch task.
This class is enabled by default because it is inherited by the base class.
5.101 perlnative
When inherited by a recipe, the perlnative class supports using the native version of Perl built by the build system rather than using the version provided by the build host.
5.102 pypi
The pypi class sets variables appropriately for recipes that build Python modules from PyPI, the Python Package Index. By default it determines the PyPI package name based upon BPN (stripping the “python-” or “python3-” prefix off if present), however in some cases you may need to set it manually in the recipe by setting PYPI_PACKAGE.
Variables set by the pypi class include SRC_URI, SECTION, HOMEPAGE, UPSTREAM_CHECK_URI, UPSTREAM_CHECK_REGEX and CVE_PRODUCT.
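A typical recipe using this class can be as small as the following sketch; the package name, license details, and checksums are placeholders:
SUMMARY = "Example Python module fetched from PyPI"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<checksum>"
PYPI_PACKAGE = "example-module"
SRC_URI[sha256sum] = "<checksum>"
inherit pypi python_setuptools_build_meta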
5.103 python_flit_core
The python_flit_core class enables building Python modules which declare
the PEP-517 compliant
flit_core.buildapi
build-backend
in the [build-system]
section of pyproject.toml
(See PEP-518).
Python modules built with flit_core.buildapi
are pure Python (no
C
or Rust
extensions).
Internally this uses the python_pep517 class.
5.104 python_pep517
The python_pep517 class builds and installs a Python wheel
binary
archive (see PEP-517).
Recipes wouldn’t inherit this directly, instead typically another class will inherit this and add the relevant native dependencies.
Examples of classes which do this are python_flit_core, python_setuptools_build_meta, and python_poetry_core.
5.105 python_poetry_core
The python_poetry_core class enables building Python modules which use the Poetry Core build system.
Internally this uses the python_pep517 class.
5.106 python_pyo3
The python_pyo3 class helps make sure that Python extensions written in Rust and built with PyO3 properly set up the environment for cross compilation.
This class is internal to the python-setuptools3_rust class and is not meant to be used directly in recipes.
5.107 python-setuptools3_rust
The python-setuptools3_rust class enables building Python extensions implemented in Rust with PyO3, which makes it possible to compile and distribute Python extensions written in Rust as easily as if they were written in C.
This class inherits the setuptools3 and python_pyo3 classes.
5.108 pixbufcache
The pixbufcache class generates the proper post-install and post-remove (postinst/postrm) scriptlets for packages that install pixbuf loaders, which are used with gdk-pixbuf. These scriptlets call update_pixbuf_cache to add the pixbuf loaders to the cache. Since the cache files are architecture-specific, update_pixbuf_cache is run using QEMU if the postinst scriptlets need to be run on the build host during image creation.
If the pixbuf loaders being installed are in packages other than the recipe’s main package, set PIXBUF_PACKAGES to specify the packages containing the loaders.
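For example, assuming a hypothetical recipe that ships its loaders in a ${PN}-loaders package:
PIXBUF_PACKAGES = "${PN}-loaders"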
5.109 pkgconfig
The pkgconfig class provides a standard way to get header and library information by using pkg-config. This class aims to smooth integration of pkg-config into libraries that use it.
During staging, BitBake installs pkg-config data into the sysroots/ directory. By making use of sysroot functionality within pkg-config, the pkgconfig class no longer has to manipulate the files.
5.110 populate_sdk
The populate_sdk class provides support for SDK-only recipes. For information on advantages gained when building a cross-development toolchain using the do_populate_sdk task, see the “Building an SDK Installer” section in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
5.111 populate_sdk_*
The populate_sdk_* classes support SDK creation and consist of the following classes:
populate_sdk_base: The base class supporting SDK creation under all package managers (i.e. DEB, RPM, and opkg).
populate_sdk_deb: Supports creation of the SDK given the Debian package manager.
populate_sdk_rpm: Supports creation of the SDK given the RPM package manager.
populate_sdk_ipk: Supports creation of the SDK given the opkg (IPK format) package manager.
populate_sdk_ext: Supports extensible SDK creation under all package managers.
The populate_sdk_base class inherits the appropriate populate_sdk_* class (i.e. deb, rpm, and ipk) based on IMAGE_PKGTYPE.
The base class ensures all source and destination directories are established and then populates the SDK. After populating the SDK, the populate_sdk_base class constructs two sysroots: ${SDK_ARCH}-nativesdk, which contains the cross-compiler and associated tooling, and the target, which contains a target root filesystem that is configured for SDK usage. These two images reside in SDK_OUTPUT, which consists of the following:
${SDK_OUTPUT}/${SDK_ARCH}-nativesdk-pkgs
${SDK_OUTPUT}/${SDKTARGETSYSROOT}/target-pkgs
Finally, the base populate SDK class creates the toolchain environment setup script, the tarball of the SDK, and the installer.
The respective populate_sdk_deb, populate_sdk_rpm, and populate_sdk_ipk classes each support the specific type of SDK. These classes are inherited by and used with the populate_sdk_base class.
For more information on the cross-development toolchain generation, see the “Cross-Development Toolchain Generation” section in the Yocto Project Overview and Concepts Manual. For information on advantages gained when building a cross-development toolchain using the do_populate_sdk task, see the “Building an SDK Installer” section in the Yocto Project Application Development and the Extensible Software Development Kit (eSDK) manual.
5.112 prexport
The prexport class provides functionality for exporting PR values.
Note
This class is not intended to be used directly. Rather, it is enabled when using “bitbake-prserv-tool export”.
5.113 primport
The primport class provides functionality for importing PR values.
Note
This class is not intended to be used directly. Rather, it is enabled when using “bitbake-prserv-tool import”.
5.114 prserv
The prserv class provides functionality for using a PR service in order to automatically manage the incrementing of the PR variable for each recipe.
This class is enabled by default because it is inherited by the package class. However, the OpenEmbedded build system will not enable the functionality of this class unless PRSERV_HOST has been set.
5.115 ptest
The ptest class provides functionality for packaging and installing runtime tests for recipes that build software that provides these tests.
This class is intended to be inherited by individual recipes. However, the class’ functionality is largely disabled unless “ptest” appears in DISTRO_FEATURES. See the “Testing Packages With ptest” section in the Yocto Project Development Tasks Manual for more information on ptest.
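A common way to enable ptest support globally is through your local.conf, for example by adding the feature to the distribution and installing all ptest packages into test images:
DISTRO_FEATURES:append = " ptest"
EXTRA_IMAGE_FEATURES += "ptest-pkgs"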
5.116 ptest-cargo
The ptest-cargo class extends the cargo class and adds compile_ptest_cargo and install_ptest_cargo steps to respectively build and install the test suites defined in the Cargo.toml file into a dedicated -ptest package.
5.117 ptest-gnome
Enables package tests (ptests) specifically for GNOME packages, which
have tests intended to be executed with gnome-desktop-testing
.
For information on setting up and running ptests, see the “Testing Packages With ptest” section in the Yocto Project Development Tasks Manual.
5.118 python3-dir
The python3-dir class provides the base version, location, and site package location for Python 3.
5.119 python3native
The python3native class supports using the native version of Python 3 built by the build system rather than the version provided by the build host.
5.120 python3targetconfig
The python3targetconfig class supports using the native version of Python 3 built by the build system rather than the version provided by the build host, except that the configuration for the target machine is accessible (such as correct installation directories). This also adds a dependency on target python3, so it should only be used where appropriate in order to avoid unnecessarily lengthening builds.
5.121 qemu
The qemu class provides functionality for recipes that either need QEMU or test for the existence of QEMU. Typically, this class is used to run programs for a target system on the build host using QEMU’s application emulation mode.
5.122 recipe_sanity
The recipe_sanity class checks for the presence of any host system recipe prerequisites that might affect the build (e.g. variables that are set or software that is present).
5.123 relocatable
The relocatable class enables relocation of binaries when they are installed into the sysroot.
This class makes use of the chrpath class and is used by both the cross and native classes.
5.124 remove-libtool
The remove-libtool class adds a post function to the do_install task to remove all .la files installed by libtool. Removing these files results in them being absent from both the sysroot and target packages.
If a recipe needs the .la files to be installed, then the recipe can override the removal by setting REMOVE_LIBTOOL_LA to “0” as follows:
REMOVE_LIBTOOL_LA = "0"
Note
The remove-libtool class is not enabled by default.
5.125 report-error
The report-error class supports enabling the error reporting tool, which allows you to submit build error information to a central database.
The class collects debug information for recipe, recipe version, task, machine, distro, build system, target system, host distro, branch, commit, and log. From the information, report files using a JSON format are created and stored in ${LOG_DIR}/error-report.
5.126 rm_work
The rm_work class supports deletion of temporary workspace, which can ease your hard drive demands during builds.
The OpenEmbedded build system can use a substantial amount of disk space
during the build process. A portion of this space is the work files
under the ${TMPDIR}/work
directory for each recipe. Once the build
system generates the packages for a recipe, the work files for that
recipe are no longer needed. However, by default, the build system
preserves these files for inspection and possible debugging purposes. If
you would rather have these files deleted to save disk space as the build
progresses, you can enable rm_work by adding the following to
your local.conf
file, which is found in the Build Directory:
INHERIT += "rm_work"
If you are modifying and building source code out of the work directory for a
recipe, enabling rm_work will potentially result in your
changes to the source being lost. To exclude some recipes from having their work
directories deleted by rm_work, you can add the names of the
recipe or recipes you are working on to the RM_WORK_EXCLUDE variable,
which can also be set in your local.conf
file. Here is an example:
RM_WORK_EXCLUDE += "busybox glibc"
5.127 rootfs*
The rootfs* classes support creating the root filesystem for an image and consist of the following classes:
The rootfs-postcommands class, which defines filesystem post-processing functions for image recipes.
The rootfs_deb class, which supports creation of root filesystems for images built using .deb packages.
The rootfs_rpm class, which supports creation of root filesystems for images built using .rpm packages.
The rootfs_ipk class, which supports creation of root filesystems for images built using .ipk packages.
The rootfsdebugfiles class, which installs additional files found on the build host directly into the root filesystem.
The root filesystem is created from packages using one of the rootfs* files as determined by the PACKAGE_CLASSES variable.
For information on how root filesystem images are created, see the “Image Generation” section in the Yocto Project Overview and Concepts Manual.
5.128 rust
The rust class is an internal class which is just used in the “rust” recipe, to build the Rust compiler and runtime library. Except for this recipe, it is not intended to be used directly.
5.129 rust-common
The rust-common class is an internal class to the cargo_common and rust classes and is not intended to be used directly.
5.130 sanity
The sanity class checks to see if prerequisite software is present
on the host system so that users can be notified of potential problems
that might affect their build. The class also performs basic user
configuration checks from the local.conf
configuration file to
prevent common mistakes that cause build failures. Distribution policy
usually determines whether to include this class.
5.131 scons
The scons class supports recipes that need to build software that uses the SCons build system. You can use the EXTRA_OESCONS variable to specify additional configuration options you want to pass on the SCons command line.
5.132 sdl
The sdl class supports recipes that need to build software that uses the Simple DirectMedia Layer (SDL) library.
5.133 python_setuptools_build_meta
The python_setuptools_build_meta class enables building Python modules which declare the PEP-517 compliant setuptools.build_meta build-backend in the [build-system] section of pyproject.toml (See PEP-518).
Python modules built with setuptools.build_meta can be pure Python or include C or Rust extensions.
Internally this uses the python_pep517 class.
5.134 setuptools3
The setuptools3 class supports Python version 3.x extensions
that use build systems based on setuptools
(e.g. only have a setup.py
and have not migrated to the official pyproject.toml
format). If your recipe
uses these build systems, the recipe needs to inherit the
setuptools3 class.
Note
The setuptools3 class do_compile task now calls setup.py bdist_wheel to build the wheel binary archive format (See PEP-427). A consequence of this is that legacy software still using deprecated distutils from the Python standard library cannot be packaged as wheels. A common solution is to replace from distutils.core import setup with from setuptools import setup.
Note
The setuptools3 class do_install task now installs the wheel binary archive. In current versions of setuptools the legacy setup.py install method is deprecated. If the setup.py cannot be used with wheels, for example because it creates files outside of the Python module or standard entry points, then setuptools3_legacy should be used.
5.135 setuptools3_legacy
The setuptools3_legacy class supports Python version 3.x extensions that use build systems based on setuptools (e.g. only have a setup.py and have not migrated to the official pyproject.toml format). Unlike setuptools3, this class uses the traditional setup.py build and install commands and not wheels. Using setuptools like this is deprecated but still relatively common.
5.136 setuptools3-base
The setuptools3-base class provides a reusable base for other classes that support building Python version 3.x extensions. If you need functionality that is not provided by the setuptools3 class, you may want to inherit setuptools3-base. Some recipes do not need the tasks in the setuptools3 class and inherit this class instead.
5.137 sign_rpm
The sign_rpm class supports generating signed RPM packages.
5.138 siteconfig
The siteconfig class provides functionality for handling site configuration. The class is used by the autotools* class to accelerate the do_configure task.
5.139 siteinfo
The siteinfo class provides information about the targets that might be needed by other classes or recipes.
As an example, consider Autotools, which can require tests that must
execute on the target hardware. Since this is not possible in general
when cross compiling, site information is used to provide cached test
results so these tests can be skipped over but still make the correct
values available. The meta/site directory
contains test results
sorted into different categories such as architecture, endianness, and
the libc
used. Site information provides a list of files containing
data relevant to the current build in the CONFIG_SITE variable that
Autotools automatically picks up.
The class also provides variables like SITEINFO_ENDIANNESS and SITEINFO_BITS that can be used elsewhere in the metadata.
5.140 sstate
The sstate class provides support for Shared State (sstate). By default, the class is enabled through the INHERIT_DISTRO variable’s default value.
For more information on sstate, see the “Shared State Cache” section in the Yocto Project Overview and Concepts Manual.
5.141 staging
The staging class installs files into individual recipe work directories for sysroots. The class contains the following key tasks:
The do_populate_sysroot task, which is responsible for handing the files that end up in the recipe sysroots.
The do_prepare_recipe_sysroot task (a “partner” task to the
populate_sysroot
task), which installs the files into the individual recipe work directories (i.e. WORKDIR).
The code in the staging class is complex and basically works in two stages:
Stage One: The first stage addresses recipes that have files they want to share with other recipes that have dependencies on the originating recipe. Normally these dependencies are installed through the do_install task into ${D}. The do_populate_sysroot task copies a subset of these files into ${SYSROOT_DESTDIR}. This subset of files is controlled by the SYSROOT_DIRS, SYSROOT_DIRS_NATIVE, and SYSROOT_DIRS_IGNORE variables.
Note
Additionally, a recipe can customize the files further by declaring a processing function in the SYSROOT_PREPROCESS_FUNCS variable.
A shared state (sstate) object is built from these files and the files are placed into a subdirectory of build/tmp/sysroots-components/. The files are scanned for hardcoded paths to the original installation location. If the location is found in text files, the hardcoded locations are replaced by tokens and a list of the files needing such replacements is created. These adjustments are referred to as “FIXMEs”. The list of files that are scanned for paths is controlled by the SSTATE_SCAN_FILES variable.
Stage Two: The second stage addresses recipes that want to use something from another recipe and declare a dependency on that recipe through the DEPENDS variable. The recipe will have a do_prepare_recipe_sysroot task and when this task executes, it creates the recipe-sysroot and recipe-sysroot-native in the recipe work directory (i.e. WORKDIR). The OpenEmbedded build system creates hard links to copies of the relevant files from sysroots-components into the recipe work directory.
Note
If hard links are not possible, the build system uses actual copies.
The build system then addresses any “FIXMEs” to paths as defined from the list created in the first stage.
Finally, any files in ${bindir} within the sysroot that have the prefix “postinst-” are executed.
Note
Although such sysroot post installation scripts are not recommended for general use, the files do allow some issues such as user creation and module indexes to be addressed.
Because recipes can have other dependencies outside of DEPENDS (e.g. do_unpack[depends] += "tar-native:do_populate_sysroot"), the sysroot creation function extend_recipe_sysroot is also added as a pre-function for those tasks whose dependencies are not through DEPENDS but operate similarly.
When installing dependencies into the sysroot, the code traverses the dependency graph and processes dependencies in exactly the same way as the dependencies would or would not be when installed from sstate. This processing means, for example, a native tool would have its native dependencies added but a target library would not have its dependencies traversed or installed. The same sstate dependency code is used so that builds should be identical regardless of whether sstate was used or not. For a closer look, see the setscene_depvalid() function in the sstate class.
The build system is careful to maintain manifests of the files it installs so that any given dependency can be installed as needed. The sstate hash of the installed item is also stored so that if it changes, the build system can reinstall it.
5.142 syslinux
The syslinux class provides syslinux-specific functions for building bootable images.
The class supports the following variables:
INITRD: Indicates list of filesystem images to concatenate and use as an initial RAM disk (initrd). This variable is optional.
ROOTFS: Indicates a filesystem image to include as the root filesystem. This variable is optional.
AUTO_SYSLINUXMENU: Enables creating an automatic menu when set to “1”.
LABELS: Lists targets for automatic configuration.
APPEND: Lists append string overrides for each label.
SYSLINUX_OPTS: Lists additional options to add to the syslinux file. Semicolon characters separate multiple options.
SYSLINUX_SPLASH: Lists a background for the VGA boot menu when you are using the boot menu.
SYSLINUX_DEFAULT_CONSOLE: Set to “console=ttyX” to change kernel boot default console.
SYSLINUX_SERIAL: Sets an alternate serial port. Or, turns off serial when the variable is set with an empty string.
SYSLINUX_SERIAL_TTY: Sets an alternate “console=tty…” kernel boot argument.
5.143 systemd
The systemd class provides support for recipes that install systemd unit files.
The functionality for this class is disabled unless you have “systemd” in DISTRO_FEATURES.
Under this class, the recipe or Makefile (i.e. whatever the recipe is calling during the do_install task) installs unit files into ${D}${systemd_unitdir}/system. If the unit files being installed go into packages other than the main package, you need to set SYSTEMD_PACKAGES in your recipe to identify the packages in which the files will be installed.
You should set SYSTEMD_SERVICE to the name of the service file. You should also use a package name override to indicate the package to which the value applies. If the value applies to the recipe’s main package, use ${PN}. Here is an example from the connman recipe:
SYSTEMD_SERVICE:${PN} = "connman.service"
Services are set up to start on boot automatically unless you have set SYSTEMD_AUTO_ENABLE to “disable”.
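For example, to install the unit file but leave the service disabled by default, a recipe could set something like this sketch:
SYSTEMD_AUTO_ENABLE:${PN} = "disable"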
For more information on systemd, see the “Selecting an Initialization Manager” section in the Yocto Project Development Tasks Manual.
5.144 systemd-boot
The systemd-boot class provides functions specific to the systemd-boot bootloader for building bootable images. This is an internal class and is not intended to be used directly.
Note
The systemd-boot class is a result from merging the gummiboot
class
used in previous Yocto Project releases with the systemd
project.
Set the EFI_PROVIDER variable to “systemd-boot” to use this class. Doing so creates a standalone EFI bootloader that is not dependent on systemd.
For information on more variables used and supported in this class, see the SYSTEMD_BOOT_CFG, SYSTEMD_BOOT_ENTRIES, and SYSTEMD_BOOT_TIMEOUT variables.
You can also see the Systemd-boot documentation for more information.
5.145 terminal
The terminal class provides support for starting a terminal session. The OE_TERMINAL variable controls which terminal emulator is used for the session.
Other classes use the terminal class anywhere a separate terminal session needs to be started. For example, the patch class (assuming PATCHRESOLVE is set to “user”), the cml1 class, and the devshell class all use the terminal class.
5.146 testimage
The testimage class supports running automated tests against images using QEMU and on actual hardware. The class handles loading the tests and starting the image. To use the class, you need to perform steps to set up the environment.
To enable this class, add the following to your configuration:
IMAGE_CLASSES += "testimage"
The tests are commands that run on the target system over ssh
. Each
test is written in Python and makes use of the unittest
module.
The testimage class runs tests on an image when called using the following:
$ bitbake -c testimage image
Alternatively, if you wish to have tests automatically run for each image after it is built, you can set TESTIMAGE_AUTO:
TESTIMAGE_AUTO = "1"
For information on how to enable, run, and create new tests, see the “Performing Automated Runtime Testing” section in the Yocto Project Development Tasks Manual.
5.147 testsdk
This class supports running automated tests against software development kits (SDKs). The testsdk class runs tests on an SDK when called using the following:
$ bitbake -c testsdk image
Note
Best practices include using IMAGE_CLASSES rather than INHERIT to inherit the testsdk class for automated SDK testing.
5.148 texinfo
This class should be inherited by recipes whose upstream packages invoke
the texinfo
utilities at build-time. Native and cross recipes are
made to use the dummy scripts provided by texinfo-dummy-native
, for
improved performance. Target architecture recipes use the genuine
Texinfo utilities. By default, they use the Texinfo utilities on the
host system.
Note
If you want to use the Texinfo recipe shipped with the build system, you can remove “texinfo-native” from ASSUME_PROVIDED and makeinfo from SANITY_REQUIRED_UTILITIES.
5.149 toaster
The toaster class collects information about packages and images and sends them as events that the BitBake user interface can receive. The class is enabled when the Toaster user interface is running.
This class is not intended to be used directly.
5.150 toolchain-scripts
The toolchain-scripts class provides the scripts used for setting up the environment for installed SDKs.
5.151 typecheck
The typecheck class provides support for validating the values of variables set at the configuration level against their defined types. The OpenEmbedded build system allows you to define the type of a variable using the “type” varflag. Here is an example:
IMAGE_FEATURES[type] = "list"
5.152 uboot-config
The uboot-config class provides support for U-Boot configuration for a machine. Specify the machine in your recipe as follows:
UBOOT_CONFIG ??= <default>
UBOOT_CONFIG[foo] = "config,images,binary"
You can also specify the machine using this method:
UBOOT_MACHINE = "config"
See the UBOOT_CONFIG and UBOOT_MACHINE variables for additional information.
5.153 uboot-sign
The uboot-sign class provides support for U-Boot verified boot. It is intended to be inherited from U-Boot recipes.
The variables used by this class are:
SPL_MKIMAGE_DTCOPTS: DTC options for U-Boot mkimage when building the FIT image.
SPL_SIGN_ENABLE: enable signing the FIT image.
SPL_SIGN_KEYDIR: directory containing the signing keys.
SPL_SIGN_KEYNAME: base filename of the signing keys.
UBOOT_FIT_ADDRESS_CELLS: #address-cells value for the FIT image.
UBOOT_FIT_DESC: description string encoded into the FIT image.
UBOOT_FIT_GENERATE_KEYS: generate the keys if they don’t exist yet.
UBOOT_FIT_HASH_ALG: hash algorithm for the FIT image.
UBOOT_FIT_KEY_GENRSA_ARGS: openssl genrsa arguments.
UBOOT_FIT_KEY_REQ_ARGS: openssl req arguments.
UBOOT_FIT_SIGN_ALG: signature algorithm for the FIT image.
UBOOT_FIT_SIGN_NUMBITS: size of the private key for FIT image signing.
UBOOT_FIT_KEY_SIGN_PKCS: algorithm for the public key certificate for FIT image signing.
UBOOT_FITIMAGE_ENABLE: enable the generation of a U-Boot FIT image.
UBOOT_MKIMAGE_DTCOPTS: DTC options for U-Boot mkimage when rebuilding the FIT image containing the kernel.
See U-Boot’s documentation for details about verified boot and the signature process.
See also the description of kernel-fitimage class, which this class imitates.
5.154 uninative
Attempts to isolate the build system from the host distribution’s C library in order to make re-use of native shared state artifacts across different host distributions practical. With this class enabled, a tarball containing a pre-built C library is downloaded at the start of the build. In the Poky reference distribution this is enabled by default through meta/conf/distro/include/yocto-uninative.inc. Other distributions that do not derive from poky can also “require conf/distro/include/yocto-uninative.inc” to use this.
Alternatively if you prefer, you can build the uninative-tarball recipe yourself, publish the resulting tarball (e.g. via HTTP) and set UNINATIVE_URL and UNINATIVE_CHECKSUM appropriately. For an example, see meta/conf/distro/include/yocto-uninative.inc.
The uninative class is also used unconditionally by the extensible
SDK. When building the extensible SDK, uninative-tarball
is built
and the resulting tarball is included within the SDK.
5.155 update-alternatives
The update-alternatives class helps the alternatives system when multiple sources provide the same command. This situation occurs when several programs that have the same or similar function are installed with the same name. For example, the ar command is available from the busybox, binutils and elfutils packages. The update-alternatives class handles renaming the binaries so that multiple packages can be installed without conflicts. The ar command still works regardless of which packages are installed or subsequently removed. The class renames the conflicting binary in each package and symlinks the highest priority binary during installation or removal of packages.
To use this class, you need to define a number of variables. These variables list alternative commands needed by a package, provide pathnames for links, default links for targets, and so forth. For details on how to use this class, see the comments in the update-alternatives.bbclass file.
Note
You can use the update-alternatives
command directly in your recipes.
However, this class simplifies things in most cases.
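As a sketch, a recipe providing its own ar binary (to stay with the example above) could register it with the alternatives system roughly like this; the priority value is illustrative:
ALTERNATIVE:${PN} = "ar"
ALTERNATIVE_LINK_NAME[ar] = "${bindir}/ar"
ALTERNATIVE_PRIORITY = "100"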
5.156 update-rc.d
The update-rc.d class uses update-rc.d
to safely install an
initialization script on behalf of the package. The OpenEmbedded build
system takes care of details such as making sure the script is stopped
before a package is removed and started when the package is installed.
Three variables control this class: INITSCRIPT_PACKAGES, INITSCRIPT_NAME and INITSCRIPT_PARAMS. See the variable links for details.
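A recipe installing a single init script might use the class roughly as follows; the script name and the start/stop parameters are illustrative:
inherit update-rc.d
INITSCRIPT_NAME = "my-service"
INITSCRIPT_PARAMS = "defaults 90 10"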
5.157 useradd*
The useradd* classes support the addition of users or groups for usage by the package on the target. For example, if you have packages that contain system services that should be run under their own user or group, you can use these classes to enable creation of the user or group. The meta-skeleton/recipes-skeleton/useradd/useradd-example.bb recipe in the Source Directory provides a simple example that shows how to add three users and groups to two packages.
The useradd_base class provides basic functionality for user or groups settings.
The useradd* classes support the USERADD_PACKAGES, USERADD_PARAM, GROUPADD_PARAM, and GROUPMEMS_PARAM variables.
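The following sketch, based on the useradd-example recipe mentioned above, shows the general shape of such a recipe; the user and group names and IDs are illustrative:
inherit useradd
USERADD_PACKAGES = "${PN}"
USERADD_PARAM:${PN} = "-u 1200 -d /home/service1 -r -s /bin/false service1"
GROUPADD_PARAM:${PN} = "-g 880 group1"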
The useradd-staticids class supports the addition of users or groups that have static user identification (uid) and group identification (gid) values.
The default behavior of the OpenEmbedded build system for assigning
uid
and gid
values when packages add users and groups during
package install time is to add them dynamically. This works fine for
programs that do not care what the values of the resulting users and
groups become. In these cases, the order of the installation determines
the final uid
and gid
values. However, if non-deterministic
uid
and gid
values are a problem, you can override the default,
dynamic application of these values by setting static values. When you
set static values, the OpenEmbedded build system looks in
BBPATH for files/passwd
and files/group
files for the values.
To use static uid
and gid
values, you need to set some variables. See
the USERADDEXTENSION, USERADD_UID_TABLES,
USERADD_GID_TABLES, and USERADD_ERROR_DYNAMIC variables.
You can also see the useradd* class for additional
information.
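A minimal sketch of enabling static IDs from your local.conf, assuming passwd and group files provided in a layer found through BBPATH, looks like this:
USERADDEXTENSION = "useradd-staticids"
USERADD_UID_TABLES = "files/passwd"
USERADD_GID_TABLES = "files/group"
USERADD_ERROR_DYNAMIC = "error"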
Note
You do not use the useradd-staticids class directly. You either enable
or disable the class by setting the USERADDEXTENSION variable. If you
enable or disable the class in a configured system, TMPDIR might
contain incorrect uid
and gid
values. Deleting the TMPDIR
directory will correct this condition.
5.158 utility-tasks
The utility-tasks class provides support for various “utility” type tasks that are applicable to all recipes, such as do_clean and do_listtasks.
This class is enabled by default because it is inherited by the base class.
5.159 utils
The utils class provides some useful Python functions that are typically used in inline Python expressions (e.g. ${@...}). One example use is for bb.utils.contains().
This class is enabled by default because it is inherited by the base class.
5.160 vala
The vala class supports recipes that need to build software written using the Vala programming language.
5.161 waf
The waf class supports recipes that need to build software that uses the Waf build system. You can use the EXTRA_OECONF or PACKAGECONFIG_CONFARGS variables to specify additional configuration options to be passed on the Waf command line.
6 Tasks
Tasks are units of execution for BitBake. Recipes (.bb
files) use
tasks to complete configuring, compiling, and packaging software. This
chapter provides a reference of the tasks defined in the OpenEmbedded
build system.
6.1 Normal Recipe Build Tasks
The following sections describe normal tasks associated with building a recipe. For more information on tasks and dependencies, see the “Tasks” and “Dependencies” sections in the BitBake User Manual.
6.1.1 do_build
The default task for all recipes. This task depends on all other normal tasks required to build a recipe.
6.1.2 do_compile
Compiles the source code. This task runs with the current working directory set to ${B}.
The default behavior of this task is to run the oe_runmake function if a makefile (Makefile, makefile, or GNUmakefile) is found. If no such file is found, the do_compile task does nothing.
6.1.3 do_compile_ptest_base
Compiles the runtime test suite included in the software being built.
6.1.4 do_configure
Configures the source by enabling and disabling any build-time and configuration options for the software being built. The task runs with the current working directory set to ${B}.
The default behavior of this task is to run oe_runmake clean if a makefile (Makefile, makefile, or GNUmakefile) is found and CLEANBROKEN is not set to “1”. If no such file is found or the CLEANBROKEN variable is set to “1”, the do_configure task does nothing.
6.1.5 do_configure_ptest_base
Configures the runtime test suite included in the software being built.
6.1.6 do_deploy
Writes output files that are to be deployed to ${DEPLOY_DIR_IMAGE}. The task runs with the current working directory set to ${B}.
Recipes implementing this task should inherit the deploy class and should write the output to ${DEPLOYDIR}, which is not to be confused with ${DEPLOY_DIR}. The deploy class sets up do_deploy as a shared state (sstate) task that can be accelerated through sstate use. The sstate mechanism takes care of copying the output from ${DEPLOYDIR} to ${DEPLOY_DIR_IMAGE}.
Note
Do not write the output directly to ${DEPLOY_DIR_IMAGE}, as this causes the sstate mechanism to malfunction.
The do_deploy task is not added as a task by default and consequently needs to be added manually. If you want the task to run after do_compile, you can add it by doing the following:
addtask deploy after do_compile
Adding do_deploy after other tasks works the same way.
Note
You do not need to add before do_build
to the addtask
command
(though it is harmless), because the base class contains the following:
do_build[recrdeptask] += "do_deploy"
See the “Dependencies” section in the BitBake User Manual for more information.
If the do_deploy task re-executes, any previous output is removed (i.e. “cleaned”).
6.1.7 do_fetch
Fetches the source code. This task uses the SRC_URI variable and the argument’s prefix to determine the correct fetcher module.
6.1.8 do_image
Starts the image generation process. The do_image task runs after the OpenEmbedded build system has run the do_rootfs task during which packages are identified for installation into the image and the root filesystem is created, complete with post-processing.
The do_image task performs pre-processing on the image through the IMAGE_PREPROCESS_COMMAND and dynamically generates supporting do_image_* tasks as needed.
For more information on image creation, see the “Image Generation” section in the Yocto Project Overview and Concepts Manual.
6.1.9 do_image_complete
Completes the image generation process. The do_image_complete task runs after the OpenEmbedded build system has run the do_image task during which image pre-processing occurs and through dynamically generated do_image_* tasks the image is constructed.
The do_image_complete task performs post-processing on the image through the IMAGE_POSTPROCESS_COMMAND.
For more information on image creation, see the “Image Generation” section in the Yocto Project Overview and Concepts Manual.
6.1.10 do_install
Copies files that are to be packaged into the holding area ${D}. This task runs with the current working directory set to ${B}, which is the compilation directory. The do_install task, as well as other tasks that either directly or indirectly depend on the installed files (e.g. do_package, do_package_write_*, and do_rootfs), run under fakeroot.
Note
When installing files, be careful not to set the owner and group IDs
of the installed files to unintended values. Some methods of copying
files, notably when using the recursive cp
command, can preserve
the UID and/or GID of the original file, which is usually not what
you want. The host-user-contaminated
QA check checks for files
that probably have the wrong ownership.
Safe methods for installing files include the following:
The install utility. This utility is the preferred method; a sketch using it is shown after this list.
The cp command with the --no-preserve=ownership option.
The tar command with the --no-same-owner option. See the bin_package.bbclass file in the meta/classes-recipe subdirectory of the Source Directory for an example.
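As a sketch, a simple do_install task using the install utility could look like the following; the application name and source location are illustrative:
do_install() {
    # Create the destination directory inside the holding area ${D}
    install -d ${D}${bindir}
    # Install the binary with explicit permissions; unlike a recursive cp,
    # install does not preserve the build user's UID/GID
    install -m 0755 ${B}/myapp ${D}${bindir}/myapp
}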
6.1.11 do_install_ptest_base
Copies the runtime test suite files from the compilation directory to a holding area.
6.1.12 do_package
Analyzes the content of the holding area ${D} and splits the content into subsets based on available packages and files. This task makes use of the PACKAGES and FILES variables.
The do_package task, in conjunction with the do_packagedata task, also saves some important package metadata. For additional information, see the PKGDESTWORK variable and the “Automatically Added Runtime Dependencies” section in the Yocto Project Overview and Concepts Manual.
6.1.13 do_package_qa
Runs QA checks on packaged files. For more information on these checks, see the insane class.
6.1.14 do_package_write_deb
Creates Debian packages (i.e. *.deb files) and places them in the ${DEPLOY_DIR_DEB} directory in the package feeds area. For more information, see the “Package Feeds” section in the Yocto Project Overview and Concepts Manual.
6.1.15 do_package_write_ipk
Creates IPK packages (i.e. *.ipk files) and places them in the ${DEPLOY_DIR_IPK} directory in the package feeds area. For more information, see the “Package Feeds” section in the Yocto Project Overview and Concepts Manual.
6.1.16 do_package_write_rpm
Creates RPM packages (i.e. *.rpm files) and places them in the ${DEPLOY_DIR_RPM} directory in the package feeds area. For more information, see the “Package Feeds” section in the Yocto Project Overview and Concepts Manual.
6.1.17 do_packagedata
Saves package metadata generated by the do_package task in PKGDATA_DIR to make it available globally.
6.1.18 do_patch
Locates patch files and applies them to the source code.
After fetching and unpacking source files, the build system uses the recipe’s SRC_URI statements to locate and apply patch files to the source code.
Note
The build system uses the FILESPATH variable to determine the default set of directories when searching for patches.
Patch files, by default, are *.patch and *.diff files created and kept in a subdirectory of the directory holding the recipe file. For example, consider the bluez5 recipe from the OE-Core layer (i.e. poky/meta):
poky/meta/recipes-connectivity/bluez5
This recipe has two patch files located here:
poky/meta/recipes-connectivity/bluez5/bluez5
In the bluez5
recipe, the SRC_URI statements point to the source
and patch files needed to build the package.
Note
In the case for the bluez5_5.48.bb
recipe, the SRC_URI statements
are from an include file bluez5.inc
.
As mentioned earlier, the build system treats files whose file types are
.patch
and .diff
as patch files. However, you can use the
“apply=yes” parameter with the SRC_URI statement to indicate any
file as a patch file:
SRC_URI = " \
git://path_to_repo/some_package \
file://file;apply=yes \
"
Conversely, if you have a file whose file type is .patch
or .diff
and you want to exclude it so that the do_patch task does not apply
it during the patch phase, you can use the “apply=no” parameter with the
SRC_URI statement:
SRC_URI = " \
git://path_to_repo/some_package \
file://file1.patch \
file://file2.patch;apply=no \
"
In the previous example, file1.patch would be applied as a patch by default while file2.patch would not be applied.
You can find out more about the patching process in the “Patching” section in the Yocto Project Overview and Concepts Manual and the “Patching Code” section in the Yocto Project Development Tasks Manual.
6.1.19 do_populate_lic
Writes license information for the recipe that is collected later when the image is constructed.
6.1.20 do_populate_sdk
Creates the file and directory structure for an installable SDK. See the “SDK Generation” section in the Yocto Project Overview and Concepts Manual for more information.
6.1.21 do_populate_sdk_ext
Creates the file and directory structure for an installable extensible SDK (eSDK). See the “SDK Generation” section in the Yocto Project Overview and Concepts Manual for more information.
6.1.22 do_populate_sysroot
Stages (copies) a subset of the files installed by the
do_install task into the appropriate
sysroot. For information on how to access these files from other
recipes, see the STAGING_DIR* variables.
Directories that would typically not be needed by other recipes at build time (e.g. /etc) are not copied by default.
For information on what directories are copied by default, see the SYSROOT_DIRS* variables. You can change these variables inside your recipe if you need to make additional (or fewer) directories available to other recipes at build time.
The do_populate_sysroot task is a shared state (sstate) task, which means that the task can be accelerated through sstate use. Realize also that if the task is re-executed, any previous output is removed (i.e. “cleaned”).
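For example, a recipe whose configuration files under /etc must be visible to other recipes at build time could extend the default set. This is only a sketch, and whether you actually need it depends on the consuming recipes:
SYSROOT_DIRS += "${sysconfdir}"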
6.1.23 do_prepare_recipe_sysroot
Installs the files into the individual recipe-specific sysroots (i.e. recipe-sysroot and recipe-sysroot-native under ${WORKDIR} based upon the dependencies specified by DEPENDS). See the “staging” class for more information.
6.1.24 do_rm_work
Removes work files after the OpenEmbedded build system has finished with them. You can learn more by looking at the “rm_work” section.
6.1.25 do_unpack
Unpacks the source code into a working directory pointed to by ${WORKDIR}. The S
variable also plays a role in where unpacked source files ultimately
reside. For more information on how source files are unpacked, see the
“Source Fetching”
section in the Yocto Project Overview and Concepts Manual and also see
the WORKDIR and S variable descriptions.
6.2 Manually Called Tasks
These tasks are typically manually triggered (e.g. by using the
bitbake -c
command-line option):
6.2.1 do_checkuri
Validates the SRC_URI value.
6.2.2 do_clean
Removes all output files for a target from the do_unpack task forward (i.e. do_unpack, do_configure, do_compile, do_install, and do_package).
You can run this task using BitBake as follows:
$ bitbake -c clean recipe
Running this task does not remove the
sstate cache files.
Consequently, if no changes have been made and the recipe is rebuilt
after cleaning, output files are simply restored from the sstate cache.
If you want to remove the sstate cache files for the recipe, you need to use the do_cleansstate task instead (i.e. bitbake -c cleansstate recipe).
6.2.3 do_cleanall
Removes all output files, shared state (sstate) cache, and downloaded source files for a target (i.e. the contents of DL_DIR). Essentially, the do_cleanall task is identical to the do_cleansstate task with the added removal of downloaded source files.
You can run this task using BitBake as follows:
$ bitbake -c cleanall recipe
You should never use the do_cleanall task in a normal scenario. If you want to start fresh with the do_fetch task, use instead:
$ bitbake -f -c fetch recipe
Note
The reason to prefer bitbake -f -c fetch
is that the
do_cleanall task would break in some cases, such as:
$ bitbake -c fetch recipe
$ bitbake -c cleanall recipe-native
$ bitbake -c unpack recipe
because after step 1 there is a stamp file for the
do_fetch task of recipe, and it won’t be removed at
step 2 because step 2 uses a different work directory. So the unpack task
at step 3 will try to extract the downloaded archive and fail as it has
been deleted in step 2.
Note that this also applies to BitBake runs from concurrent processes when a shared download directory (DL_DIR) is set up.
6.2.4 do_cleansstate
Removes all output files and shared state (sstate) cache for a target. Essentially, the do_cleansstate task is identical to the do_clean task with the added removal of shared state (sstate) cache.
You can run this task using BitBake as follows:
$ bitbake -c cleansstate recipe
When you run the do_cleansstate task, the OpenEmbedded build system no longer uses any sstate. Consequently, building the recipe from scratch is guaranteed.
Note
Using do_cleansstate with a shared SSTATE_DIR is not recommended because it could trigger an error during the build of a separate BitBake instance. This is because builds check sstate “up front” but download the files later, so if a file is deleted in the meantime, it causes an error, though not a total failure, as the recipe is simply rebuilt.
The reliable and preferred way to force a new build is to use bitbake
-f
instead.
Note
The do_cleansstate task cannot remove sstate from a remote sstate mirror. If you need to build a target from scratch using remote mirrors, use the “-f” option as follows:
$ bitbake -f -c do_cleansstate target
6.2.5 do_pydevshell
Starts a shell in which an interactive Python interpreter allows you to
interact with the BitBake build environment. From within this shell, you
can directly examine and set bits from the data store and execute
functions as if within the BitBake environment. See the “Using a Python Development Shell” section in
the Yocto Project Development Tasks Manual for more information about
using pydevshell.
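For example, to start the Python development shell for a given recipe:
$ bitbake -c pydevshell recipe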
6.2.6 do_devshell
Starts a shell whose environment is set up for development, debugging,
or both. See the “Using a Development Shell” section in the
Yocto Project Development Tasks Manual for more information about using
devshell.
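For example, to start a development shell for a given recipe:
$ bitbake -c devshell recipe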
6.2.7 do_listtasks
Lists all defined tasks for a target.
6.2.8 do_package_index
Creates or updates the index in the Package Feeds area.
Note
This task is not triggered with the bitbake -c
command-line option as
are the other tasks in this section. Because this task is specifically for
the package-index
recipe, you run it using bitbake package-index.
7 devtool Quick Reference
The devtool
command-line tool provides a number of features that
help you build, test, and package software. This command is available
alongside the bitbake
command. Additionally, the devtool
command
is a key part of the extensible SDK.
This chapter provides a Quick Reference for the devtool
command. For
more information on how to apply the command when using the extensible
SDK, see the “Using the Extensible SDK” chapter in the Yocto
Project Application Development and the Extensible Software Development
Kit (eSDK) manual.
7.1 Getting Help
The devtool
command line is organized similarly to Git in that it
has a number of sub-commands for each function. You can run
devtool --help
to see all the commands:
$ devtool -h
NOTE: Starting bitbake server...
usage: devtool [--basepath BASEPATH] [--bbpath BBPATH] [-d] [-q] [--color COLOR] [-h] <subcommand> ...
OpenEmbedded development tool
options:
--basepath BASEPATH Base directory of SDK / build directory
--bbpath BBPATH Explicitly specify the BBPATH, rather than getting it from the metadata
-d, --debug Enable debug output
-q, --quiet Print only errors
--color COLOR Colorize output (where COLOR is auto, always, never)
-h, --help show this help message and exit
subcommands:
Beginning work on a recipe:
add Add a new recipe
modify Modify the source for an existing recipe
upgrade Upgrade an existing recipe
Getting information:
status Show workspace status
latest-version Report the latest version of an existing recipe
check-upgrade-status Report upgradability for multiple (or all) recipes
search Search available recipes
Working on a recipe in the workspace:
build Build a recipe
rename Rename a recipe file in the workspace
edit-recipe Edit a recipe file
find-recipe Find a recipe file
configure-help Get help on configure script options
update-recipe Apply changes from external source tree to recipe
reset Remove a recipe from your workspace
finish Finish working on a recipe in your workspace
Testing changes on target:
deploy-target Deploy recipe output files to live target machine
undeploy-target Undeploy recipe output files in live target machine
build-image Build image including workspace recipe packages
Advanced:
create-workspace Set up workspace in an alternative location
extract Extract the source for an existing recipe
sync Synchronize the source tree for an existing recipe
menuconfig Alter build-time configuration for a recipe
import Import exported tar archive into workspace
export Export workspace into a tar archive
other:
selftest-reverse Reverse value (for selftest)
pluginfile Print the filename of this plugin
bbdir Print the BBPATH directory of this plugin
count How many times have this plugin been registered.
multiloaded How many times have this plugin been initialized
Use devtool <subcommand> --help to get help on a specific command
As directed in the general help output, you can
get more syntax on a specific command by providing the command name and
using --help:
$ devtool add --help
NOTE: Starting bitbake server...
usage: devtool add [-h] [--same-dir | --no-same-dir] [--fetch URI] [--npm-dev] [--version VERSION] [--no-git] [--srcrev SRCREV | --autorev] [--srcbranch SRCBRANCH] [--binary] [--also-native] [--src-subdir SUBDIR] [--mirrors]
[--provides PROVIDES]
[recipename] [srctree] [fetchuri]
Adds a new recipe to the workspace to build a specified source tree. Can optionally fetch a remote URI and unpack it to create the source tree.
arguments:
recipename Name for new recipe to add (just name - no version, path or extension). If not specified, will attempt to auto-detect it.
srctree Path to external source tree. If not specified, a subdirectory of /media/build1/poky/build/workspace/sources will be used.
fetchuri Fetch the specified URI and extract it to create the source tree
options:
-h, --help show this help message and exit
--same-dir, -s Build in same directory as source
--no-same-dir Force build in a separate build directory
--fetch URI, -f URI Fetch the specified URI and extract it to create the source tree (deprecated - pass as positional argument instead)
--npm-dev For npm, also fetch devDependencies
--version VERSION, -V VERSION
Version to use within recipe (PV)
--no-git, -g If fetching source, do not set up source tree as a git repository
--srcrev SRCREV, -S SRCREV
Source revision to fetch if fetching from an SCM such as git (default latest)
--autorev, -a When fetching from a git repository, set SRCREV in the recipe to a floating revision instead of fixed
--srcbranch SRCBRANCH, -B SRCBRANCH
Branch in source repository if fetching from an SCM such as git (default master)
--binary, -b Treat the source tree as something that should be installed verbatim (no compilation, same directory structure). Useful with binary packages e.g. RPMs.
--also-native Also add native variant (i.e. support building recipe for the build host as well as the target machine)
--src-subdir SUBDIR Specify subdirectory within source tree to use
--mirrors Enable PREMIRRORS and MIRRORS for source tree fetching (disable by default).
--provides PROVIDES, -p PROVIDES
Specify an alias for the item provided by the recipe. E.g. virtual/libgl
7.2 The Workspace Layer Structure
devtool
uses a “Workspace” layer in which to accomplish builds. This
layer is not specific to any single devtool
command but is rather a
common working area used across the tool.
The following figure shows the workspace structure:
attic - A directory created if devtool believes it must preserve
anything when you run "devtool reset". For example, if you
run "devtool add", make changes to the recipe, and then
run "devtool reset", devtool takes notice that the file has
been changed and moves it into the attic should you still
want the recipe.
README - Provides information on what is in workspace layer and how to
manage it.
.devtool_md5 - A checksum file used by devtool.
appends - A directory that contains *.bbappend files, which point to
external source.
conf - A configuration directory that contains the layer.conf file.
recipes - A directory containing recipes. This directory contains a
folder for each directory added whose name matches that of the
added recipe. devtool places the recipe.bb file
within that sub-directory.
sources - A directory containing a working copy of the source files used
when building the recipe. This is the default directory used
as the location of the source tree when you do not provide a
source tree path. This directory contains a folder for each
set of source files matched to a corresponding recipe.
7.3 Adding a New Recipe to the Workspace Layer
Use the devtool add
command to add a new recipe to the workspace
layer. The recipe you add should not exist — devtool
creates it for
you. The source files the recipe uses should exist in an external area.
The following example creates and adds a new recipe named jackson
to
a workspace layer the tool creates. The source code built by the recipe
resides in /home/user/sources/jackson
:
$ devtool add jackson /home/user/sources/jackson
If you add a recipe and the workspace layer does not exist, the command creates the layer and populates it as described in “The Workspace Layer Structure” section.
Running devtool add
when the workspace layer exists causes the tool
to add the recipe, append files, and source files into the existing
workspace layer. The .bbappend
file is created to point to the
external source tree.
Note
If your recipe has runtime dependencies defined, you must be sure that these packages exist on the target hardware before attempting to run your application. If dependent packages (e.g. libraries) do not exist on the target, your application, when run, will fail to find those functions. For more information, see the “Deploying Your Software on the Target Machine” section.
By default, devtool add
uses the latest revision (i.e. master) when
unpacking files from a remote URI. In some cases, you might want to
specify a source revision by branch, tag, or commit hash. You can
specify these options when using the devtool add
command:
To specify a source branch, use the --srcbranch option:
$ devtool add --srcbranch scarthgap jackson /home/user/sources/jackson
In the previous example, you are checking out the scarthgap branch.
To specify a specific tag or commit hash, use the --srcrev option:
$ devtool add --srcrev yocto-5.0.999 jackson /home/user/sources/jackson
$ devtool add --srcrev some_commit_hash /home/user/sources/jackson
The previous examples check out the yocto-5.0.999 tag and the commit associated with the some_commit_hash hash.
Note
If you prefer to use the latest revision every time the recipe is
built, use the options --autorev
or -a.
7.4 Extracting the Source for an Existing Recipe
Use the devtool extract
command to extract the source for an
existing recipe. When you use this command, you must supply the root
name of the recipe (i.e. no version, paths, or extensions), and you must
supply the directory to which you want the source extracted.
Additional command options let you control the name of a development branch into which you can check out the source and whether or not to keep a temporary directory, which is useful for debugging.
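For example, the following sketch extracts the source of a hypothetical mtr recipe into a directory of your choosing:
$ devtool extract mtr /home/user/sources/mtr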
7.5 Synchronizing a Recipe’s Extracted Source Tree
Use the devtool sync
command to synchronize a previously extracted
source tree for an existing recipe. When you use this command, you must
supply the root name of the recipe (i.e. no version, paths, or
extensions), and you must supply the directory to which you want the
source extracted.
Additional command options let you control the name of a development branch into which you can check out the source and whether or not to keep a temporary directory, which is useful for debugging.
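For example, assuming the source for a hypothetical mtr recipe was previously extracted to the directory shown, you could resynchronize it with:
$ devtool sync mtr /home/user/sources/mtr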
7.6 Modifying an Existing Recipe
Use the devtool modify
command to begin modifying the source of an
existing recipe. This command is very similar to the
add command
except that it does not physically create the recipe in the workspace
layer, because the recipe already exists in another layer.
The devtool modify
command extracts the source for a recipe, sets it
up as a Git repository if the source had not already been fetched from
Git, checks out a branch for development, and applies any patches from
the recipe as commits on top. You can use the following command to
check out the source files:
$ devtool modify recipe
Using the above command form, devtool
uses the existing recipe’s
SRC_URI statement to locate the upstream source and extracts the source into the default sources location in the workspace.
The default development branch used is “devtool”.
7.7 Edit an Existing Recipe
Use the devtool edit-recipe
command to run the default editor, which
is identified using the EDITOR
variable, on the specified recipe.
When you use the devtool edit-recipe
command, you must supply the
root name of the recipe (i.e. no version, paths, or extensions). Also,
the recipe file itself must reside in the workspace as a result of the
devtool add
or devtool upgrade
commands.
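For example, assuming a hypothetical mtr recipe is already present in the workspace:
$ devtool edit-recipe mtr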
7.8 Updating a Recipe
Use the devtool update-recipe
command to update your recipe with
patches that reflect changes you make to the source files. For example,
if you know you are going to work on some code, you could first use the
devtool modify command to extract
the code and set up the workspace. After which, you could modify,
compile, and test the code.
When you are satisfied with the results and you have committed your
changes to the Git repository, you can then run the
devtool update-recipe
command to create the patches and update the recipe:
$ devtool update-recipe recipe
If you run the devtool update-recipe command without committing your changes, the command ignores the changes.
Often, you might want to apply customizations made to your software in
your own layer rather than apply them to the original recipe. If so, you
can use the -a
or --append
option with the
devtool update-recipe
command. These options allow you to specify
the layer into which to write an append file:
$ devtool update-recipe recipe -a base-layer-directory
The *.bbappend
file is created at the
appropriate path within the specified layer directory, which may or may
not be in your bblayers.conf
file. If an append file already exists,
the command updates it appropriately.
7.9 Checking on the Upgrade Status of a Recipe
Upstream recipes change over time. Consequently, you might find that you need to determine if you can upgrade a recipe to a newer version.
To check on the upgrade status of a recipe, you can use the
devtool latest-version recipe
command, which quickly shows the current
version and the latest version available upstream. To get a more global
picture, use the devtool check-upgrade-status
command, which takes a
list of recipes as input, or no arguments, in which case it checks all
available recipes. This command will only print the recipes for which
a new upstream version is available. Each such recipe will have its current
version and latest upstream version, as well as the email of the maintainer
and any additional information such as the commit hash or reason for not
being able to upgrade it, displayed in a table.
This upgrade checking mechanism relies on the optional UPSTREAM_CHECK_URI, UPSTREAM_CHECK_REGEX, UPSTREAM_CHECK_GITTAGREGEX, UPSTREAM_CHECK_COMMITS and UPSTREAM_VERSION_UNKNOWN variables in package recipes.
Note
Most of the time, the above variables are unnecessary. They are only required when upstream does something unusual, and default mechanisms cannot find the new upstream versions.
For the oe-core layer, recipe maintainers come from the maintainers.inc file.
If the recipe is using the Git Fetcher (git://) rather than a tarball, the commit hash points to the commit that matches the recipe’s latest version tag or, in the absence of suitable tags, to the latest commit (when UPSTREAM_CHECK_COMMITS is set to 1 in the recipe).
As with all devtool
commands, you can get help on the individual
command:
$ devtool check-upgrade-status -h
NOTE: Starting bitbake server...
usage: devtool check-upgrade-status [-h] [--all] [recipe [recipe ...]]
Prints a table of recipes together with versions currently provided by recipes, and latest upstream versions, when there is a later version available
arguments:
recipe Name of the recipe to report (omit to report upgrade info for all recipes)
options:
-h, --help show this help message and exit
--all, -a Show all recipes, not just recipes needing upgrade
Unless you provide a specific recipe name on the command line, the command checks all recipes in all configured layers.
Here is a partial example table that reports on all the recipes:
$ devtool check-upgrade-status
...
INFO: bind 9.16.20 9.16.21 Armin Kuster <akuster808@gmail.com>
INFO: inetutils 2.1 2.2 Tom Rini <trini@konsulko.com>
INFO: iproute2 5.13.0 5.14.0 Changhyeok Bae <changhyeok.bae@gmail.com>
INFO: openssl 1.1.1l 3.0.0 Alexander Kanavin <alex.kanavin@gmail.com>
INFO: base-passwd 3.5.29 3.5.51 Anuj Mittal <anuj.mittal@intel.com> cannot be updated due to: Version 3.5.38 requires cdebconf for update-passwd utility
...
Notice the reported reason for not upgrading the base-passwd
recipe.
In this example, while a new version is available upstream, you do not
want to use it because the dependency on cdebconf
is not easily
satisfied. Maintainers can make the reason that is shown explicit by adding
the RECIPE_NO_UPDATE_REASON variable to the corresponding recipe.
See base-passwd.bb
for an example:
RECIPE_NO_UPDATE_REASON = "Version 3.5.38 requires cdebconf for update-passwd utility"
Last but not least, you may set UPSTREAM_VERSION_UNKNOWN to 1
in a recipe when there’s currently no way to determine its latest upstream
version.
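For instance, a recipe whose upstream currently publishes no usable version information could carry the following line:
UPSTREAM_VERSION_UNKNOWN = "1"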
7.10 Upgrading a Recipe
As software matures, upstream recipes are upgraded to newer versions. As
a developer, you need to keep your local recipes up-to-date with the
upstream version releases. There are several ways of upgrading recipes.
You can read about them in the “Upgrading Recipes”
section of the Yocto Project Development Tasks Manual. This section
overviews the devtool upgrade
command.
Before you upgrade a recipe, you can check on its upgrade status. See the “Checking on the Upgrade Status of a Recipe” section for more information.
The devtool upgrade
command upgrades an existing recipe to a more
recent version of the recipe upstream. The command puts the upgraded
recipe file along with any associated files into a “workspace” and, if
necessary, extracts the source tree to a specified location. During the
upgrade, patches associated with the recipe are rebased or added as
needed.
When you use the devtool upgrade
command, you must supply the root
name of the recipe (i.e. no version, paths, or extensions), and you must
supply the directory to which you want the source extracted. Additional
command options let you control things such as the version number to
which you want to upgrade (i.e. the PV), the source
revision to which you want to upgrade (i.e. the
SRCREV), whether or not to apply patches, and so
forth.
You can read more on the devtool upgrade
workflow in the
“Use devtool upgrade to Create a Version of the Recipe that Supports a Newer Version of the Software”
section in the Yocto Project Application Development and the Extensible
Software Development Kit (eSDK) manual. You can also see an example of
how to use devtool upgrade
in the “Using devtool upgrade”
section in the Yocto Project Development Tasks Manual.
7.11 Resetting a Recipe
Use the devtool reset
command to remove a recipe and its
configuration (e.g. the corresponding .bbappend
file) from the
workspace layer. Realize that this command deletes the recipe and the
append file. The command does not physically move them for you.
Consequently, you must be sure to physically relocate your updated
recipe and the append file outside of the workspace layer before running
the devtool reset
command.
If the devtool reset
command detects that the recipe or the append
files have been modified, the command preserves the modified files in a
separate “attic” subdirectory under the workspace layer.
Here is an example that resets the workspace directory that contains the
mtr
recipe:
$ devtool reset mtr
NOTE: Cleaning sysroot for recipe mtr...
NOTE: Leaving source tree /home/scottrif/poky/build/workspace/sources/mtr as-is; if you no longer need it then please delete it manually
$
7.12 Building Your Recipe
Use the devtool build
command to build your recipe. The
devtool build
command is equivalent to the
bitbake -c populate_sysroot
command.
When you use the devtool build
command, you must supply the root
name of the recipe (i.e. do not provide versions, paths, or extensions).
You can use either the -s or the --disable-parallel-make option to disable parallel make during the build. Here is an example:
$ devtool build recipe
7.13 Building Your Image
Use the devtool build-image
command to build an image, extending it
to include packages from recipes in the workspace. Using this command is
useful when you want an image that is ready for immediate deployment onto a
device for testing. For proper integration into a final image, you need
to edit your custom image recipe appropriately.
When you use the devtool build-image
command, you must supply the
name of the image. This command has no command line options:
$ devtool build-image image
7.14 Deploying Your Software on the Target Machine
Use the devtool deploy-target
command to deploy the recipe’s build
output to the live target machine:
$ devtool deploy-target recipe target
The target is the address of the target machine, which must be running
an SSH server (i.e. user@hostname[:destdir]).
This command deploys all files installed during the do_install task. Furthermore, you do not need to have package management enabled within the target machine. If you do, the package manager is bypassed.
Note
The deploy-target
functionality is for development only. You
should never use it to update an image that will be used in
production.
Some conditions could prevent a deployed application from behaving as expected. When both of the following conditions are met, your application has the potential to not behave correctly when run on the target:
You are deploying a new application to the target and the recipe you used to build the application had correctly defined runtime dependencies.
The target does not physically have the packages on which the application depends installed.
If both of these conditions are met, your application will not behave as
expected. The reason for this misbehavior is because the
devtool deploy-target
command does not deploy the packages (e.g.
libraries) on which your new application depends. The assumption is that
the packages are already on the target. Consequently, when a runtime
call is made in the application for a dependent function (e.g. a library
call), the function cannot be found.
To be sure you have all the dependencies local to the target, you need to be sure that the packages are pre-deployed (installed) on the target before attempting to run your application.
7.15 Removing Your Software from the Target Machine
Use the devtool undeploy-target
command to remove deployed build
output from the target machine. For the devtool undeploy-target
command to work, you must have previously used the
“devtool deploy-target”
command:
$ devtool undeploy-target recipe target
The target is the
address of the target machine, which must be running an SSH server (i.e. user@hostname).
7.16 Creating the Workspace Layer in an Alternative Location
Use the devtool create-workspace
command to create a new workspace
layer in your Build Directory. When you create a
new workspace layer, it is populated with the README
file and the
conf
directory only.
The following example creates a new workspace layer in your current working directory and by default names the workspace layer “workspace”:
$ devtool create-workspace
You can create a workspace layer anywhere by supplying a pathname with the command. The following command creates a new workspace layer named “new-workspace”:
$ devtool create-workspace /home/scottrif/new-workspace
7.17 Get the Status of the Recipes in Your Workspace
Use the devtool status
command to list the recipes currently in your
workspace. Information includes the paths to their respective external
source trees.
The devtool status
command has no command-line options:
$ devtool status
Here is sample output after using
devtool add
to create and add the mtr_0.86.bb
recipe to the workspace
directory:
$ devtool status
mtr:/home/scottrif/poky/build/workspace/sources/mtr (/home/scottrif/poky/build/workspace/recipes/mtr/mtr_0.86.bb)
$
7.18 Search for Available Target Recipes
Use the devtool search
command to search for available target
recipes. The command matches the recipe name, package name, description,
and installed files. The command displays the recipe name as a result of
a match.
When you use the devtool search
command, you must supply a keyword.
The command uses the keyword when searching for a match.
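For example, to search for recipes related to a hypothetical keyword such as sqlite:
$ devtool search sqlite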
8 OpenEmbedded Kickstart (.wks) Reference
8.1 Introduction
The current Wic implementation supports only the basic kickstart
partitioning commands: partition
(or part
for short) and
bootloader.
Note
Future updates will implement more commands and options. If you use anything that is not specifically supported, results can be unpredictable.
This chapter provides a reference on the available kickstart commands. The information lists the commands, their syntax, and meanings. Kickstart commands are based on the Fedora kickstart versions but with modifications to reflect Wic capabilities. You can see the original documentation for those commands at the following link: https://pykickstart.readthedocs.io/en/latest/kickstart-docs.html
8.2 Command: part or partition
Either of these commands creates a partition on the system and uses the following syntax:
part [mntpoint]
partition [mntpoint]
If you do not provide mntpoint, Wic creates a partition but does not mount it.
The mntpoint
is where the partition is mounted and must be in one of
the following forms:
/path: For example, “/”, “/usr”, or “/home”
swap: The created partition is used as swap space
Specifying a mntpoint causes the partition to automatically be mounted.
Wic achieves this by adding entries to the filesystem table (fstab)
during image generation. In order for Wic to generate a valid fstab, you
must also provide one of the --ondrive, --ondisk, or --use-uuid partition options as part of the command.
Note
The mount program must understand the PARTUUID syntax you use with
--use-uuid
and non-root mountpoint, including swap. The default
configuration of BusyBox in OpenEmbedded supports this, but this may
be disabled in custom configurations.
Here is an example that uses “/” as the mountpoint. The command uses
--ondisk
to force the partition onto the sdb
disk:
part / --source rootfs --ondisk sdb --fstype=ext3 --label platform --align 1024
Here is a list that describes other supported options you can use with
the part
and partition
commands:
--size: The minimum partition size. Specify as an integer value optionally followed by one of the units “k” / “K” for kibibyte, “M” for mebibyte and “G” for gibibyte. The default unit if none is given is “M”. You do not need this option if you use --source.
--fixed-size: The exact partition size. Specify as an integer value optionally followed by one of the units “k” / “K” for kibibyte, “M” for mebibyte and “G” for gibibyte. The default unit if none is given is “M”. Cannot be specified together with --size. An error occurs when assembling the disk image if the partition data is larger than --fixed-size.
--source: This option is a Wic-specific option that names the source of the data that populates the partition. The most common value for this option is “rootfs”, but you can use any value that maps to a valid source plugin. For information on the source plugins, see the “Using the Wic Plugin Interface” section in the Yocto Project Development Tasks Manual.
If you use --source rootfs, Wic creates a partition as large as needed and fills it with the contents of the root filesystem pointed to by the -r command-line option or the equivalent root filesystem derived from the -e command-line option. The filesystem type used to create the partition is driven by the value of the --fstype option specified for the partition. See the entry on --fstype that follows for more information.
If you use --source plugin-name, Wic creates a partition as large as needed and fills it with the contents of the partition that is generated by the specified plugin name using the data pointed to by the -r command-line option or the equivalent root filesystem derived from the -e command-line option. Exactly what those contents are and the filesystem type used depend on the given plugin implementation.
If you do not use the --source option, the wic command creates an empty partition. Consequently, you must use the --size option to specify the size of the empty partition.
--ondisk or --ondrive: Forces the partition to be created on a particular disk.
--fstype: Sets the file system type for the partition. Valid values are: btrfs, erofs, ext2, ext3, ext4, squashfs, swap, and vfat.
--fsoptions: Specifies a free-form string of options to be used when mounting the filesystem. This string is copied into the /etc/fstab file of the installed system and should be enclosed in quotes. If not specified, the default string is “defaults”.
--label label: Specifies the label to give to the filesystem to be made on the partition. If the given label is already in use by another filesystem, a new label is created for the partition.
--active: Marks the partition as active.
--align (in KBytes): This option is a Wic-specific option that says to start partitions on boundaries given x KBytes.
--offset: This option is a Wic-specific option that says to place a partition at exactly the specified offset. If the partition cannot be placed at the specified offset, the image build will fail. Specify as an integer value optionally followed by one of the units “s” / “S” for 512 byte sector, “k” / “K” for kibibyte, “M” for mebibyte and “G” for gibibyte. The default unit if none is given is “k”.
--no-table: This option is a Wic-specific option. Using the option reserves space for the partition and causes it to become populated. However, the partition is not added to the partition table.
--exclude-path: This option is a Wic-specific option that excludes the given relative path from the resulting image. This option is only effective with the rootfs source plugin.
--extra-space: This option is a Wic-specific option that adds extra space after the space filled by the content of the partition. The final size can exceed the size specified by the --size option. The default value is 10M. Specify as an integer value optionally followed by one of the units “k” / “K” for kibibyte, “M” for mebibyte and “G” for gibibyte. The default unit if none is given is “M”.
--overhead-factor: This option is a Wic-specific option that multiplies the size of the partition by the option’s value. You must supply a value greater than or equal to “1”. The default value is “1.3”.
--part-name: This option is a Wic-specific option that specifies a name for GPT partitions.
--part-type: This option is a Wic-specific option that specifies the partition type globally unique identifier (GUID) for GPT partitions. You can find the list of partition type GUIDs at https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs.
--use-uuid: This option is a Wic-specific option that causes Wic to generate a random GUID for the partition. The generated identifier is used in the bootloader configuration to specify the root partition.
--uuid: This option is a Wic-specific option that specifies the partition UUID.
--fsuuid: This option is a Wic-specific option that specifies the filesystem UUID. You can generate or modify WKS_FILE with this option if a preconfigured filesystem UUID is added to the kernel command line in the bootloader configuration before you run Wic.
--system-id: This option is a Wic-specific option that specifies the partition system ID, which is a one byte long, hexadecimal parameter with or without the 0x prefix.
--mkfs-extraopts: This option specifies additional options to pass to the mkfs utility. Some default options for certain filesystems do not take effect. See Wic’s help on kickstart (i.e. wic help kickstart).
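Putting a few of these options together, here is a sketch of kickstart lines for a root and a swap partition; the disk name, sizes, and labels are illustrative only:
part / --source rootfs --ondisk sda --fstype=ext4 --label root --align 1024 --extra-space 512M
part swap --ondisk sda --size 256M --fstype=swap --label swap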
8.3 Command: bootloader
This command specifies how the bootloader should be configured and supports the following options:
Note
Bootloader functionality and boot partitions are implemented by the various source plugins that implement bootloader functionality. The bootloader command essentially provides a means of modifying bootloader configuration.
--append: Specifies kernel parameters. These parameters will be added to the syslinux APPEND or grub kernel command line.
--configfile: Specifies a user-defined configuration file for the bootloader. You can provide a full pathname for the file or a file located in the canned-wks folder. This option overrides all other bootloader options.
--ptable: Specifies the partition table format. Valid values are: msdos, gpt.
--timeout: Specifies the number of seconds before the bootloader times out and boots the default option.
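Here is a sketch of a bootloader line combining these options; the timeout and kernel parameters are illustrative only:
bootloader --ptable gpt --timeout=5 --append="rootwait console=ttyS0,115200 console=tty0"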
9 QA Error and Warning Messages
9.1 Introduction
When building a recipe, the OpenEmbedded build system performs various QA checks on the output to ensure that common issues are detected and reported. Sometimes when you create a new recipe to build new software, it will build with no problems. When this is not the case, or when you have QA issues building any software, it could take a little time to resolve them.
While it is tempting to ignore a QA message or even to disable QA checks, it is best to try and resolve any reported QA issues. This chapter provides a list of the QA messages and brief explanations of the issues you could encounter so that you can properly resolve problems.
The next section provides a list of all QA error and warning messages based on a default configuration. Each entry provides the message or error form along with an explanation.
Note
At the end of each message, the name of the associated QA test (as listed in the “insane” section) appears within square brackets.
As mentioned, this list of error and warning messages is for QA checks only. The list does not cover all possible build errors or warnings you could encounter.
Because some QA checks are disabled by default, this list does not include all possible QA check errors and warnings.
9.2 Errors and Warnings
<packagename>: <path> is using libexec please relocate to <libexecdir> [libexec]
The specified package contains files in /usr/libexec when the distro configuration uses a different path for <libexecdir>. By default, <libexecdir> is $prefix/libexec. However, this default can be changed (e.g. ${libdir}).
package <packagename> contains bad RPATH <rpath> in file <file> [rpaths]
The specified binary produced by the recipe contains dynamic library load paths (rpaths) that contain build system paths such as TMPDIR, which are incorrect for the target and could potentially be a security issue. Check for bad
-rpath
options being passed to the linker in your do_compile log. Depending on the build system used by the software being built, there might be a configure option to disable rpath usage completely within the build of the software.
<packagename>: <file> contains probably-redundant RPATH <rpath> [useless-rpaths]
The specified binary produced by the recipe contains dynamic library load paths (rpaths) that on a standard system are searched by default by the linker (e.g. /lib and /usr/lib). While these paths will not cause any breakage, they do waste space and are unnecessary. Depending on the build system used by the software being built, there might be a configure option to disable rpath usage completely within the build of the software.
<packagename> requires <files>, but no providers in its RDEPENDS [file-rdeps]
A file-level dependency has been identified from the specified package on the specified files, but there is no explicit corresponding entry in RDEPENDS. If particular files are required at runtime then RDEPENDS should be declared in the recipe to ensure the packages providing them are built.
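For example, if the check reports that scripts in the package require /usr/bin/perl, a hedged fix is to declare the runtime dependency explicitly in the recipe (the exact package providing the file may differ on your system):
RDEPENDS:${PN} += "perl"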
<packagename1> rdepends on <packagename2>, but it isn't a build dependency? [build-deps]
There is a runtime dependency between the two specified packages, but there is nothing explicit within the recipe to enable the OpenEmbedded build system to ensure that dependency is satisfied. This condition is usually triggered by an RDEPENDS value being added at the packaging stage rather than up front, which is usually automatic based on the contents of the package. In most cases, you should change the recipe to add an explicit RDEPENDS for the dependency.
non -dev/-dbg/nativesdk- package contains symlink .so: <packagename> path '<path>' [dev-so]
Symlink .so files are for development only, and should therefore go into the -dev package. This situation might occur if you add *.so* rather than *.so.* to a non-dev package. Change FILES (and possibly PACKAGES) such that the specified .so file goes into an appropriate -dev package.
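As a sketch for a hypothetical library-only recipe providing libfoo, you could limit the main package to the versioned library so the bare .so symlink falls into the -dev package:
FILES:${PN} = "${libdir}/libfoo.so.*"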
non -staticdev package contains static .a library: <packagename> path '<path>' [staticdev]
Static .a library files should go into a -staticdev package. Change FILES (and possibly PACKAGES) such that the specified .a file goes into an appropriate -staticdev package.
<packagename>: found library in wrong location [libdir]
The specified file may have been installed into an incorrect (possibly hardcoded) installation path. For example, this test will catch recipes that install /lib/bar.so when ${base_libdir} is “lib32”. Another example is when recipes install /usr/lib64/foo.so when ${libdir} is “/usr/lib”. False positives occasionally exist. For these cases, add “libdir” to INSANE_SKIP for the package.
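For a confirmed false positive, the skip would look like this in the recipe:
INSANE_SKIP:${PN} += "libdir"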
non debug package contains .debug directory: <packagename> path <path> [debug-files]
The specified package contains a .debug directory, which should not appear in anything but the -dbg package. This situation might occur if you add a path which contains a .debug directory and do not explicitly add the .debug directory to the -dbg package. If this is the case, add the .debug directory explicitly to FILES:${PN}-dbg. See FILES for additional information on FILES.
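For example, for a hypothetical plugin directory the addition might look like this:
FILES:${PN}-dbg += "${libdir}/mypackage/plugins/.debug"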
<packagename> installs files in <path>, but it is expected to be empty [empty-dirs]
The specified package is installing files into a directory that is normally expected to be empty (such as /tmp). These files may be more appropriately installed to a different location, or perhaps alternatively not installed at all, usually by updating the do_install task/function.
Architecture did not match (<file_arch>, expected <machine_arch>) in <file> [arch]
By default, the OpenEmbedded build system checks the Executable and Linkable Format (ELF) type, bit size, and endianness of any binaries to ensure they match the target architecture. This test fails if any binaries do not match the type since there would be an incompatibility. The test could indicate that the wrong compiler or compiler options have been used. Sometimes software, like bootloaders, might need to bypass this check. If the file you receive the error for is firmware that is not intended to be executed within the target operating system or is intended to run on a separate processor within the device, you can add “arch” to INSANE_SKIP for the package. Another option is to check the do_compile log and verify that the compiler options being used are correct.
Bit size did not match (<file_bits>, expected <machine_bits>) in <file> [arch]
By default, the OpenEmbedded build system checks the Executable and Linkable Format (ELF) type, bit size, and endianness of any binaries to ensure they match the target architecture. This test fails if any binaries do not match the type since there would be an incompatibility. The test could indicate that the wrong compiler or compiler options have been used. Sometimes software, like bootloaders, might need to bypass this check. If the file you receive the error for is firmware that is not intended to be executed within the target operating system or is intended to run on a separate processor within the device, you can add “arch” to INSANE_SKIP for the package. Another option is to check the do_compile log and verify that the compiler options being used are correct.
Endianness did not match (<file_endianness>, expected <machine_endianness>) in <file> [arch]
By default, the OpenEmbedded build system checks the Executable and Linkable Format (ELF) type, bit size, and endianness of any binaries to ensure they match the target architecture. This test fails if any binaries do not match the type since there would be an incompatibility. The test could indicate that the wrong compiler or compiler options have been used. Sometimes software, like bootloaders, might need to bypass this check. If the file you receive the error for is firmware that is not intended to be executed within the target operating system or is intended to run on a separate processor within the device, you can add “arch” to INSANE_SKIP for the package. Another option is to check the do_compile log and verify that the compiler options being used are correct.
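If you have confirmed the file is standalone firmware and the check should be skipped, the recipe change is a one-liner:
INSANE_SKIP:${PN} += "arch"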
ELF binary '<file>' has relocations in .text [textrel]
The specified ELF binary contains relocations in its .text sections. This situation can result in a performance impact at runtime. Typically, the way to solve this performance issue is to add “-fPIC” or “-fpic” to the compiler command-line options. For example, given software that reads CFLAGS when you build it, you could add the following to your recipe:
CFLAGS:append = " -fPIC "
For more information on text relocations at runtime, see https://www.akkadia.org/drepper/textrelocs.html.
File '<file>' in package '<package>' doesn't have