
Monitor Node.js applications on Red Hat OpenShift with Prometheus

March 22, 2021
Alexandros Alykiotis
Related topics: Containers, Kubernetes, Node.js, Observability
Related products: Red Hat build of Node.js, Red Hat OpenShift Container Platform


    A great thing about Node.js is how well it performs inside a container. With the shift to containerized deployments and environments comes extra complexity. One such complexity is observability: knowing what's going on within your application, how it's using resources, and when resource use falls outside the expected norms.

    Prometheus is a tool that developers can use to increase observability. It is an installable service that gathers instrumentation metrics from your applications and stores them as time-series data. Prometheus is advanced and battle-tested, and a great option for Node.js applications running inside of a container.

    Default and custom instrumentation

    For your application to feed metrics to Prometheus, it must expose a metrics endpoint. For a Node.js application, the best way to expose the metrics endpoint is to use the prom-client module available from the Node Package Manager (NPM) registry. The prom-client module exposes all of the default metrics recommended by Prometheus.

    The defaults include metrics such as process_cpu_seconds_total and process_heap_bytes. In addition to exposing default metrics, prom-client allows developers to define their own metrics, as we'll do in this article.
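    For context, the metrics endpoint serves these values as plain text in the Prometheus exposition format. An abbreviated, illustrative sample of the default metrics (the numbers here are made up) looks something like this:

    # HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
    # TYPE process_cpu_seconds_total counter
    process_cpu_seconds_total 1.52
    # HELP process_heap_bytes Process heap size in bytes.
    # TYPE process_heap_bytes gauge
    process_heap_bytes 48865280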

    A simple Express.js app

    Let’s start by creating a simple Express.js application. In this application, a service endpoint at /api/greeting accepts GET requests and returns a greeting as JSON. The following commands will get your project started:

    $ mkdir my-app && cd my-app
    
    $ npm init -y
    
    $ npm i express body-parser prom-client
    

    This sequence of commands should create a package.json file and install all of the application dependencies. Next, open the package.json file in a text editor and add the following to the scripts section:

    "start": "node app.js"
    

    Application source code

    The following code is a fairly simple Express.js application. Create a new file in your text editor called app.js and paste the following into it:

    'use strict';
    const express = require('express');
    const bodyParser = require('body-parser');
    
    // Use the prom-client module to expose our metrics to Prometheus
    const client = require('prom-client');
    
    // enable prom-client to expose default application metrics
    const collectDefaultMetrics = client.collectDefaultMetrics;
    
    // define a custom prefix string for application metrics
    collectDefaultMetrics({ prefix: 'my_app:' });
    
    const histogram = new client.Histogram({
      name: 'http_request_duration_seconds',
      help: 'Duration of HTTP requests in seconds histogram',
      labelNames: ['method', 'handler', 'code'],
      buckets: [0.1, 5, 15, 50, 100, 500],
    });
    
    const app = express();
    const port = process.argv[2] || 8080;
    
    let failureCounter = 0;
    
    app.use(bodyParser.json());
    app.use(bodyParser.urlencoded({ extended: true }));
    
    app.get('/api/greeting', async (req, res) => {
      const end = histogram.startTimer();
      const name = req.query?.name || 'World';
    
      try {
        const result = await somethingThatCouldFail(`Hello, ${name}`);
        res.send({ message: result });
      } catch (err) {
        res.status(500).send({ error: err.toString() });
      }
    
      res.on('finish', () =>
        end({
          method: req.method,
          handler: new URL(req.url, `http://${req.hostname}`).pathname,
          code: res.statusCode,
        })
      );
    });
    
    // expose our metrics at the default URL for Prometheus
    app.get('/metrics', async (req, res) => {
      res.set('Content-Type', client.register.contentType);
      res.send(await client.register.metrics());
    });
    
    app.listen(port, () => console.log(`Express app listening on port ${port}!`));
    
    function somethingThatCouldFail(echo) {
      if (Date.now() % 5 === 0) {
        return Promise.reject(`Random failure ${++failureCounter}`);
      } else {
        return Promise.resolve(echo);
      }
    }
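
    Before deploying, you can sanity-check the application locally. Start it with npm start and hit both endpoints; note that roughly one request in five is designed to fail, so you will occasionally see the 500 response instead:

    $ npm start

    $ curl http://localhost:8080/api/greeting
    {"message":"Hello, World"}

    $ curl http://localhost:8080/metrics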
    

    Deploy the application

    You can use the following command to deploy the application to Red Hat OpenShift:

    $ npx nodeshift --expose
    

    This command creates all the OpenShift objects that your application needs in order to be deployed. After the deployment succeeds, you will be able to visit your application.
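
    If you want to see what was created, or look up the application's URL, you can query the cluster. This assumes nodeshift named the objects my-app after the package name, which is its default behavior:

    $ oc get pods

    $ oc get route my-app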

    Verify the application

    This application exposes two endpoints: /api/greeting to get the greeting message and /metrics to get the Prometheus metrics. First, you'll see the JSON greeting produced by visiting the greeting URL:

    $ curl http://my-app-nodeshift.apps.ci-ln-5sqydqb-f76d1.origin-ci-int-gce.dev.openshift.com/api/greeting
    

    If everything goes well, you'll get a successful response like this one:

    {"message":"Hello, World"}
    

    Now, get your Prometheus application metrics using:

    $ curl ${your-openshift-application-url}/metrics
    

    You should be able to view output like what's shown in Figure 1.

    Prometheus metrics for a Node.js application.
    Figure 1: Prometheus metrics for the Node.js application.

    Configuring Prometheus

    As of version 4.6, OpenShift comes with a built-in Prometheus instance. To use it, you need to configure the monitoring stack and enable metrics for user-defined projects on your cluster, which requires an account with cluster administrator privileges.

    Create a cluster monitoring config map

    To configure the core Red Hat OpenShift Container Platform monitoring components, you must create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project. Create a YAML file called cluster-monitoring-config.yaml and paste in the following:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true
    

    Then, apply the file to your OpenShift cluster:

    $ oc apply -f cluster-monitoring-config.yaml
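
    Once the change is applied, OpenShift starts the user workload monitoring components. You can check that they came up by listing the pods in the openshift-user-workload-monitoring project (pod names vary by cluster version):

    $ oc -n openshift-user-workload-monitoring get pods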
    

    You also need to grant user permissions to configure monitoring for user-defined projects. Run the following command, replacing user and namespace with the appropriate values:

    $ oc policy add-role-to-user monitoring-edit user -n namespace
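
    For example, assuming the application lives in a project named nodeshift (as in the ServiceMonitor below) and the account is called developer (a hypothetical username), the command would be:

    $ oc policy add-role-to-user monitoring-edit developer -n nodeshift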
    

    Create a service monitor

    The last thing to do is deploy a service monitor for your application. Deploying the service monitor allows Prometheus to scrape your application's /metrics endpoint regularly to get the latest metrics. Create a file called service-monitor.yaml and paste in the following:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: nodeshift-monitor
      name: nodeshift-monitor
      namespace: nodeshift
    spec:
      endpoints:
        - interval: 30s
          port: http
          scheme: http
      selector:
        matchLabels:
          project: my-app
    

    Then, deploy this file to OpenShift:

    $ oc apply -f service-monitor.yaml
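
    Note that the ServiceMonitor above only selects Services that carry the project: my-app label and expose a port named http, which the Service generated by nodeshift should satisfy. If you ever create the Service yourself, a minimal sketch of one that would be scraped looks like this (illustrative only, not necessarily identical to what nodeshift generates):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
      namespace: nodeshift
      labels:
        project: my-app       # must match the ServiceMonitor's selector
    spec:
      selector:
        app: my-app           # assumes the pods carry an app: my-app label
      ports:
        - name: http          # must match the port name in the ServiceMonitor
          port: 8080
          targetPort: 8080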
    

    The whole OpenShift monitoring stack should now be configured properly.

    The Prometheus dashboard

    As of OpenShift 4.6, the Prometheus dashboard is integrated into the OpenShift web console. To access it, go to your project and choose the Monitoring item in the left navigation, as shown in Figure 2.

    CPU usage in the Prometheus dashboard.
    Figure 2: Prometheus monitoring in the OpenShift dashboard.

    To query the Prometheus metrics using PromQL, go to the second tab, called Metrics. You can query and graph any of the metrics your application provides. For example, Figure 3 graphs the size of the heap.

    A heap graph in the Prometheus dashboard.
    Figure 3: A heap graph in Prometheus.
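
    To reproduce a graph like this one yourself, you can query one of the prefixed default metrics. For example, prom-client's heap usage gauge picks up the my_app: prefix we configured earlier (the metric name here is assumed from prom-client's defaults):

    my_app:nodejs_heap_size_used_bytes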

    Testing the application

    Next, let's use the Apache Bench tool to add load to our application. We'll hit our API endpoint 10,000 times, with 100 concurrent requests at a time:

    $ ab -n 10000 -c 100 http://my-app-nodeshift.apps.ci-ln-5sqydqb-f76d1.origin-ci-int-gce.dev.openshift.com/api/greeting
    

    After generating this load, we can go back to the main Prometheus dashboard screen and construct a simple query to see how the service performed. We'll use our custom http_request_duration_seconds metric to measure the average request duration during the last five minutes. Type this query into the textbox:

    rate(http_request_duration_seconds_sum[5m])/rate(http_request_duration_seconds_count[5m])

    Running the query produces the graph shown in Figure 4.

    Performance monitoring with Prometheus.
    Figure 4: Results from a custom query.

    We get two lines of output because we have two types of responses: the successful one (200) and the server error (500). We can also see that as the load increases, so does the time required to complete HTTP requests.
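
    A few other standard PromQL queries are worth trying against the same histogram. The query shapes below are generic; they only assume the metric and label names defined in app.js:

    # Approximate 95th-percentile request duration over the last five minutes
    histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))

    # Request rate, broken down by handler and response code
    sum(rate(http_request_duration_seconds_count[5m])) by (handler, code)

    # Fraction of requests that returned a 500
    sum(rate(http_request_duration_seconds_count{code="500"}[5m])) / sum(rate(http_request_duration_seconds_count[5m]))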

    Conclusion

    This article has been a quick introduction to monitoring Node.js applications with Prometheus. You'll want to do much more for a production application, including setting up alerts and adding custom metrics to support RED (rate, errors, duration) monitoring. But I'll leave those options for another article. Hopefully, this was enough to get you started and ready to learn more.

    To learn more about what Red Hat is up to on the Node.js front, check out our new Node.js landing page.

    Last updated: September 27, 2024
