
Upgrading the GNU C Library within Red Hat Enterprise Linux

February 17, 2016
Florian Weimer
Related topics: Containers, Developer Tools, Linux
Related products: Developer Tools, Red Hat Enterprise Linux


    Occasionally, there's a need for a new GNU C Library for a given application to run.  For example, some versions of the Google Chrome browser started to warn users on Red Hat Enterprise Linux 7 that future versions of Chrome would not support their operating system. The Chromium source code contained a version check, flagging all versions of the GNU C Library (glibc) older than 2.19 as obsolete. This check has since been relaxed to 2.17 (the version in Red Hat Enterprise Linux 7), but it is still worth discussing what we can do to support application binaries in Red Hat Enterprise Linux which require a newer glibc version to run.
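    To make the discussion concrete: such a check typically compares the version string reported by the running glibc against a minimum. The following is a minimal sketch of a check of this kind, written against glibc's gnu_get_libc_version() interface; it is not Chromium's actual code, and the 2.19 threshold is simply the value mentioned above.

        /* version-check.c: minimal sketch of a runtime glibc version check.
         * Illustrative only, not Chromium's actual implementation. */
        #include <gnu/libc-version.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            int major = 0, minor = 0;
            /* gnu_get_libc_version() returns a string such as "2.17". */
            if (sscanf(gnu_get_libc_version(), "%d.%d", &major, &minor) != 2)
                return EXIT_FAILURE;
            if (major < 2 || (major == 2 && minor < 19))
                fprintf(stderr, "warning: glibc %d.%d is older than 2.19\n",
                        major, minor);
            return EXIT_SUCCESS;
        }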

    Distribution-specific binaries

    Before discussing the feasibility of glibc upgrades, it is worth noting that there is a disconnect between how GNU/Linux distributions build the applications they ship as part of the distribution, and how independent software vendors (ISVs) build their application binaries.

    The current GNU/Linux development model strongly encourages that programs are recompiled for each distribution on which they are run. This model is not popular with independent software vendors who ship pre-compiled binaries: usually, they only want to produce one binary for each architecture, not one binary per architecture and distribution. It is not completely clear why this is so: you generally need two builds today (for i386 and x86_64), and even a single binary would still need testing on all supported GNU/Linux distributions because of differences in system libraries and deployed kernel versions, so there is little reduction on the QA side.

    However, cross-distribution execution of programs (that is, running programs built on Red Hat Enterprise Linux on Debian, or vice versa) is currently a reality. One consequence is that solutions to the glibc upgrade problem which require recompilation or relinking (such as Red Hat Developer Toolset) do not address the core issue.

    Rebasing to a new upstream version

    Conceptually, the simplest approach is to completely replace the glibc package with one which is based on a more recent glibc release from the GNU project. This way, the system offers all glibc functionality present in the newer release. This is not impossible to do, but it is still a fairly disruptive change:

    • glibc provides only limited support for static linking. Most programs are dynamically linked, so a glibc update directly affects them. This is true both for operating system components and software not provided by Red Hat.
    • Even statically linked binaries can break during glibc upgrades if they use the Name Service Switch (NSS). Static linking of glibc is not supported on Red Hat Enterprise Linux, but the potential breakage is nevertheless a reason to minimize changes in this area.
    • Static libraries depend on the exact glibc version used to compile their object files. All object files linked statically into an application must have been compiled against the same versions of the glibc headers.  Usually, mixing different builds does work, but there are occasional exceptions.
    • New glibc versions sometimes remove problematic programming interfaces. Old binaries continue to run, but recompiling them from source could fail. This could be disruptive to customer software development.
    • glibc does not provide absolute bug-for-bug compatibility. Applications sometimes rely on unspecified behavior that can change during a glibc update, such as the exact return value of the strcmp function or the salt bytes accepted by the crypt function. We sometimes make such changes within minor releases of Red Hat Enterprise Linux if there are significant performance improvements or compliance issues, but we generally try to minimize them.
    • With the large number of changes in a new glibc release, the risk of regressions increases.
    • Plug-in frameworks pose additional challenges. Plug-in libraries can use new functionality and new symbols provided by the new glibc version, but these symbols may also be defined by applications for a different purpose. This triggers ELF symbol interposition, resulting in compatibility problems: the library assumes that the symbols are implemented by glibc and expects their documented behavior, but the definitions in the application may behave completely differently, and due to the way ELF symbol resolution works, those definitions are preferred over the ones provided by glibc. getline and snprintf are examples where this could happen because they were added to POSIX relatively late. (A minimal sketch of this situation follows the list.)
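    The interposition problem can be reproduced with a few lines of C. In this hypothetical sketch (the file names, the plugin_run entry point, and the choice of getline are illustrative, not taken from any real plug-in framework), the application exports a private function that happens to be called getline, and a plug-in that expects glibc's getline ends up calling the application's definition instead:

        /* plugin.c -- built as a shared object, expects POSIX getline().
         * Build (sketch): gcc -shared -fPIC -o plugin.so plugin.c */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <stdlib.h>

        void plugin_run(void)
        {
            char *line = NULL;
            size_t n = 0;
            /* The plug-in expects glibc's getline(), but the dynamic linker
             * binds this call to the definition exported by the executable. */
            if (getline(&line, &n, stdin) == -1)
                fprintf(stderr, "plugin: getline failed\n");
            free(line);
        }

        /* app.c -- the application defines its own getline() for an unrelated
         * purpose and exports it to plug-ins via -rdynamic.
         * Build (sketch): gcc -rdynamic -o app app.c -ldl */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>
        #include <sys/types.h>

        /* Same prototype as POSIX getline(), completely different behavior. */
        ssize_t getline(char **lineptr, size_t *n, FILE *stream)
        {
            (void) lineptr; (void) n; (void) stream;
            return -1;   /* always "fails", surprising the plug-in */
        }

        int main(void)
        {
            void *handle = dlopen("./plugin.so", RTLD_NOW);
            if (handle == NULL)
                return 1;
            void (*run)(void) = (void (*)(void)) dlsym(handle, "plugin_run");
            if (run != NULL)
                run();
            dlclose(handle);
            return 0;
        }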

    Taken together, these challenges rule out rebasing to a new upstream version.

    Backporting new APIs

    We could extract the implementation of specific functionality from newer upstream glibc versions and incorporate it along with the older code in the Red Hat Enterprise Linux glibc version.  We constantly do this for important bug fixes and performance enhancements. But adding new externally visible interfaces is different. The problem related to dynamic linking and plug-in frameworks still exists.

    For RPM-based distributions, there is an additional challenge: RPM extracts dependency information from symbols and their versions. For example, glibc 2.17 added a function symbol secure_getenv@GLIBC_2.17, and let's assume we want to backport that to Red Hat Enterprise Linux 6 (which is based on glibc 2.12). RPM extracts the “GLIBC_2.17” version from this symbol and derives the capability “libc.so.6(GLIBC_2.17)(64bit)” from that. The glibc package provides this capability. For an application which references the secure_getenv function, RPM will require the libc.so.6(GLIBC_2.17)(64bit) capability. For consistency with upstream and binaries compiled on other distributions, we have to backport the secure_getenv symbol under the same symbol version. But this hypothetical new glibc version cannot provide the capability libc.so.6(GLIBC_2.17)(64bit) because in version 2.17, glibc added additional functions to libc.so.6. These additional functions are not part of the backport, but their presence is implied by the capability string. As a result, the RPM dependency information would become misleading and unreliable. There is no fundamental technical reason why RPM dependencies have to be generated in this way, but it is the way the dependency generator currently works. (For example, Debian uses symbols files to automatically relax package dependencies to the oldest possible version.)
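    To illustrate what the dependency generator sees, consider a small program that uses this interface (the program itself is a hypothetical example; secure_getenv and its GLIBC_2.17 symbol version are real). When it is linked on a system with glibc 2.17 or later, the binary records the versioned reference secure_getenv@GLIBC_2.17, and RPM turns that into a requirement on the libc.so.6(GLIBC_2.17)(64bit) capability:

        /* Sketch: a program using secure_getenv(), which glibc added in 2.17. */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* Unlike getenv(), secure_getenv() returns NULL when the process
             * runs with elevated privileges (for example, set-user-ID). */
            const char *home = secure_getenv("HOME");
            printf("HOME = %s\n", home != NULL ? home : "(unavailable)");
            return 0;
        }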

    However, Chrome just checks the glibc version, without examining which functionality is actually needed. Mere backporting of interfaces will not make the system glibc acceptable to software which performs such explicit version checks.

    glibc as a software collection

    Software Collections are a way to replace system components with updated versions without altering the core system environment. They are designed explicitly for parallel installation. The new components are injected through shell environment variables, namely PATH (for commands) and LD_LIBRARY_PATH (for dynamic libraries). This approach does not work for glibc because every binary hard-codes the path to the ELF interpreter (which is conceptually similar to the script interpreter specified on the “#!” line at the beginning of a script). On x86_64 systems, the ELF interpreter name is /lib64/ld-linux-x86-64.so.2. The interpreter name is specified as part of the platform ABI, and it does not change from glibc release to glibc release. However, the interpreter is an integral part of glibc and has to be upgraded in lock-step with the rest of glibc (notably, libc.so.6). Without additional measures, the kernel will always use the system ELF interpreter to load programs, and this interpreter is unlikely to be able to load a glibc version provided as a software collection and stored in a directory referenced in the LD_LIBRARY_PATH environment variable. A potential approach might be to replace the named loader with a proxy loader that inspects application metadata to decide which real loader to use instead of the default one. The downside is that unmarked applications cannot use this method. There is currently no support for a proxy loader, nor any definition of the metadata that would drive the selection.
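    The hard-coded interpreter path can be observed by reading the PT_INTERP program header of any dynamically linked executable. The following sketch does this with the structures from <elf.h>; it assumes a 64-bit ELF file for brevity:

        /* print-interp.c: print the hard-coded ELF interpreter (PT_INTERP)
         * of a 64-bit executable, e.g. ./print-interp /bin/true */
        #include <elf.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char **argv)
        {
            if (argc != 2)
                return EXIT_FAILURE;
            FILE *f = fopen(argv[1], "rb");
            if (f == NULL)
                return EXIT_FAILURE;

            Elf64_Ehdr ehdr;
            if (fread(&ehdr, sizeof(ehdr), 1, f) != 1
                || memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0)
                return EXIT_FAILURE;

            for (unsigned i = 0; i < ehdr.e_phnum; ++i) {
                Elf64_Phdr phdr;
                if (fseek(f, (long) (ehdr.e_phoff + (Elf64_Off) i * ehdr.e_phentsize),
                          SEEK_SET) != 0
                    || fread(&phdr, sizeof(phdr), 1, f) != 1)
                    return EXIT_FAILURE;
                if (phdr.p_type == PT_INTERP && phdr.p_filesz < 4096) {
                    char interp[4096];
                    if (fseek(f, (long) phdr.p_offset, SEEK_SET) != 0
                        || fread(interp, 1, phdr.p_filesz, f) != phdr.p_filesz)
                        return EXIT_FAILURE;
                    /* The interpreter path is stored NUL-terminated. */
                    printf("%s\n", interp);
                }
            }
            fclose(f);
            return EXIT_SUCCESS;
        }

    For a dynamically linked binary on an x86_64 system, this prints /lib64/ld-linux-x86-64.so.2.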

    With a change to the application build process, a different ELF interpreter could be used (this time under /opt, like the rest of software collections). But this requires relinking application binaries (or at least patching them in place, which can be difficult if the new interpreter name is longer), or adding wrapper scripts. This means that this approach again does not address the core issue.

    Without patching binaries, the last remaining option is to tell the kernel to rewrite the ELF interpreter path to something else (similar to the proxy loader option). There is currently no direct support for that, but it is possible to use a mount namespace, a bind mount on the /lib64 directory (to preserve access to other system libraries), and a file bind mount over /lib64/ld-linux-x86-64.so.2, to replace the ELF interpreter. This peculiar file system view will be inherited by child processes (including system commands), and they will run with a newer glibc version as well. This notion is similar in nature to containers, but without further namespace separation, and at this point one might simply want to use containers and all the tooling that comes with them.
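    As a rough sketch of that last idea, the following program creates a private mount namespace, bind-mounts a newer loader over the ABI path, and then runs the requested program in the modified view. It must run with CAP_SYS_ADMIN (for example, as root), and the /opt/newglibc path is purely a hypothetical location for a newer glibc build:

        /* run-with-new-ld.c: sketch of replacing the ELF interpreter through
         * a mount namespace and a file bind mount. Requires CAP_SYS_ADMIN.
         * The /opt/newglibc path below is a hypothetical example. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <sys/mount.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
                return 1;
            }
            /* New, private mount namespace for this process and its children. */
            if (unshare(CLONE_NEWNS) != 0) {
                perror("unshare");
                return 1;
            }
            /* Keep the mount changes from propagating to the parent namespace. */
            if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
                perror("mount (make private)");
                return 1;
            }
            /* File bind mount: the newer loader now appears under the ABI path. */
            if (mount("/opt/newglibc/ld-linux-x86-64.so.2",
                      "/lib64/ld-linux-x86-64.so.2", NULL, MS_BIND, NULL) != 0) {
                perror("mount (bind)");
                return 1;
            }
            /* Everything exec'ed from here on, including child processes,
             * is loaded through the replacement interpreter. */
            execvp(argv[1], &argv[1]);
            perror("execvp");
            return 1;
        }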

    All these approaches require several changes to glibc itself, to alter paths and disentangle them from the system glibc installation. These changes may cause compatibility problems with some applications.

    Would the namespace or proxy loader approach work with web browsers? Browsers are somewhat special: they try very hard to protect users from malicious content. For this, they apply various forms of sandboxing technology. Some of these sandboxes (such as the Google Chrome sandbox for Flash) require kernel assistance. Historically, requesting fewer privileges from the kernel itself required increased privileges beyond what a regular user has by default, which is why these sandboxes rely on SUID binaries. But SUID binaries do not work with LD_LIBRARY_PATH for security reasons, so the traditional software collections approach fails here.

    glibc as part of Red Hat Developer Toolset

    Red Hat Developer Toolset is a solution for using current developer tools on previously released versions of Red Hat Enterprise Linux. For example, this allows compiling C++11 applications on Red Hat Enterprise Linux 6, and the resulting binaries will run on Red Hat Enterprise Linux 6 and 7. At run time, Developer Toolset components are not required, due to the static linking of newer language support libraries (software collections used to build the binaries can still introduce a run-time dependency on those software collections).

    Current versions of Developer Toolset do not support newer glibc interfaces. Adding Developer Toolset support for completely new glibc versions is challenging because glibc does not fully support static linking. A technically more feasible approach would add only the additional APIs (similar to the backporting described above), link their implementations statically, and otherwise rely on the system glibc version (which mirrors what Developer Toolset already does for the C++ run-time library, for example). This is particularly attractive for system call wrappers added in newer glibc versions because most of them do not depend on other parts of glibc.
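    As an illustration of why such wrappers are good candidates, here is a hypothetical, statically linkable wrapper for the getrandom system call, which did not yet have a glibc wrapper when this article was written; it depends only on the generic syscall() interface and errno, not on glibc internals (the my_getrandom name is made up for this sketch):

        /* Sketch: a self-contained wrapper for the getrandom system call, the
         * kind of function that could be linked statically into an application
         * while the rest of the program keeps using the system glibc.
         * Requires Linux 3.17 or later; my_getrandom is a made-up name. */
        #define _GNU_SOURCE
        #include <errno.h>
        #include <stdio.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        static ssize_t my_getrandom(void *buf, size_t buflen, unsigned int flags)
        {
        #ifdef SYS_getrandom
            return syscall(SYS_getrandom, buf, buflen, flags);
        #else
            errno = ENOSYS;   /* kernel headers too old to know the syscall number */
            return -1;
        #endif
        }

        int main(void)
        {
            unsigned char buf[16];
            if (my_getrandom(buf, sizeof(buf), 0) != (ssize_t) sizeof(buf)) {
                perror("getrandom");
                return 1;
            }
            for (size_t i = 0; i < sizeof(buf); i++)
                printf("%02x", buf[i]);
            putchar('\n');
            return 0;
        }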

    The Developer Toolset approach still requires recompilation, so it does not provide a way to run foreign binaries compiled for other distributions. In other words, Google would have to use Red Hat Developer Toolset to compile Chrome.

    Using containers or virtualization

    This approach uses a separate operating system image, perhaps even the original environment that was used to build the application that introduced the dependency on the newer glibc version. The challenge here is how this can be supported: a Red Hat Enterprise Linux user has support for Red Hat Enterprise Linux, and an ISV provides support for their application, but not for the operating system itself. This means there is no support coverage for the operating system environment inside the container (or the virtualized operating system), as long as that environment is not a newer version of Red Hat Enterprise Linux itself.

    Currently, this approach works (within the limits of the container and virtualization solution). But as mentioned above, browsers are special. They increasingly try to protect content from nosy users. This means that even open-source browsers have components which cannot be rebuilt without losing access to certain online content (such as video streaming). Browsers increasingly use cryptography to verify the integrity of program code, and may even start to probe the environment for the presence of virtualization or containers (to prevent users from capturing videos, for example). Preventing cheating in online games is another area where browsers could begin probing the environment and restricting functionality when not running directly on top of a host operating system. This means that containers and virtualization may not be a long-term option for running web browsers. All of this assumes containers have complete media integration, which at the time of writing is not entirely the case.

    Conclusion

    We looked at various ways in which we could add newer versions of the GNU C Library to existing versions of Red Hat Enterprise Linux. Rebasing to a new upstream version is too risky. An approach similar to software collections will not work with all independently compiled binaries.

    For software developers who build their applications on Red Hat Enterprise Linux and who need access to additional glibc functionality such as new system call wrappers, the selective static linking approach used in Red Hat Developer Toolset is the most straightforward to implement. A proxy loader that enforces software-collection isolation policies is another approach. If we can change the way RPM generates dependencies for dynamic shared objects and address the plug-in issue, backports of new functions might be feasible as well.

    For executing existing binaries that were originally compiled on other GNU/Linux distributions against a newer glibc version, there is currently no fully supported solution. From a practical point of view, running a container or virtual machine with the other operating system will work in many cases, but there will be a gap in support.

    Last updated: November 1, 2023
