Experimenting with Email generation and summarization with Node.js and Large Language Models

November 8, 2024
Lucas Holmquist
Related topics: Artificial intelligence, Node.js
Related products: Red Hat build of Node.js

Welcome back to this ongoing series of posts about using Large Language Models (LLMs) with Node.js. In the previous post we took a look at creating and using an LLM chatbot with Node.js. We showed it in the context of a fictitious claims management system for an insurance company.

This next post takes a look at adding a feature that generates an email response summarizing a claim, as well as making sure the data returned from our LLM is structured as JSON.

    Summarization Prompting

As a small refresher, the example application is a mock claims management system that an insurance company might use to service its claims. The previous post added a chatbot that the claims adjuster could ask questions about the information in the claim summaries.

This post adds a new feature that directs our LLM to write an email response that the adjuster could then send to the insured regarding their claim.

Instructions for running this part of the example are in its own feature branch.

The UI has a new side tab called Email Generate, which is just a text area for entering the original customer correspondence.

[Image: Parasol Insurance application email generator]

When that is submitted, a new prompt (which we will look at in a moment) sends the text to our LLM to produce a JSON response that we can send back to the front end. The front end expects the JSON object to have both a subject and a message property. The final response in the UI could look something like this:

[Image: Parasol Insurance application email generator result]

The prompt that we use to generate the response is fairly simple, so let's take a look at it:

    const prompt = ChatPromptTemplate.fromTemplate(`
          You are a helpful, respectful and honest assistant named Parasol Assistant.
          You work for Parasol Insurance
          Formatting Instructions: {format_instructions}
          User Input: {input}
      `);

The prompt is pretty simple. In this example we are using the basic fromTemplate method (in the previous post we used a slightly more advanced method). The first two lines are also very simple, instructing the LLM that it should think of itself as someone who works for our company.

The next line, which we will cover in the next section, is where we tell our LLM that we want a more structured response, such as JSON.

The final line is a placeholder for the input coming from the form on the UI.

    Formatting Instructions

In the chatbot example, you might remember that our response was accessed through the content property of an object that LangChain.js creates. While this might be OK to use, we can actually instruct the LLM to respond in a more structured way.

LangChain.js provides a few different output parsers for developers to use. This is nice because in your own application the AI portion might need to pass its response off to another service, such as a REST endpoint, and having the LLM format the response in that service's format can reduce code.
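As a quick illustration of what one of these parsers looks like (this one is not the parser the example app uses), LangChain.js also ships a comma-separated list parser. A minimal sketch, assuming the classic langchain/output_parsers entry point:

    import { CommaSeparatedListOutputParser } from 'langchain/output_parsers';

    // Tells the model to answer like "a, b, c" and parses that into ['a', 'b', 'c'].
    const listParser = new CommaSeparatedListOutputParser();
    console.log(listParser.getFormatInstructions());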

The format that our front end expects is a JSON object with two properties, subject and message. Something like this:

{
  "subject": "",
  "message": ""
}

For this, we use the LangChain.js class StructuredOutputParser. It provides two methods for defining our object schema, and for this example we are using the one that allows a more advanced definition: fromZodSchema.

Just as a quick aside: Zod, besides sharing a name with a Superman villain, is a popular TypeScript-first schema declaration and validation library.
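To show what that means on its own, here is a tiny example of Zod validating a plain object, independent of LangChain.js (the schema and values are illustrative only):

    import { z } from 'zod';

    const emailSchema = z.object({
      subject: z.string(),
      message: z.string()
    });

    // parse() returns the value if it matches the schema and throws otherwise.
    emailSchema.parse({ subject: 'Hello', message: 'Claim received.' });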

Below is the code used to define our object's schema:

// Set up the output parser using the Zod schema.
// (Imports added for completeness; entry points can vary by LangChain.js version.)
import { StructuredOutputParser } from 'langchain/output_parsers';
import { z } from 'zod';

const outputParser = StructuredOutputParser.fromZodSchema(
    z.object({
      subject: z.string().describe('Subject of your response, suitable to use as an email subject line.'),
      message: z.string().describe('Response text that summarizes the information they gave, and asks for any other missing information needed from Parasol.')
    })
  );

Here we have our two properties; we describe what type they should be (in this case, strings), and we also describe what they are. For example, we define our message property as response text that summarizes the information the customer gave and asks for any other missing information needed by Parasol. I'd like to call out the "asks for any other missing information" part: in the earlier screenshot showing the result, the response asked for a number of items, so that part worked!
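Those descriptions do real work: they are folded into the prompt through the parser's format instructions. If you are curious what the LLM actually sees, you can log them. The exact wording varies by LangChain.js version, but it boils down to "respond with JSON matching this schema" plus a JSON Schema derived from the Zod definition:

    // Inspect the formatting instructions that will be injected into the prompt.
    console.log(outputParser.getFormatInstructions());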

    Piecing it Together

It is now time to piece everything together. Similar to our chatbot, we create a chain; the only difference this time is that we add another pipe call to the end that includes the output parser we just created:

    const chain = prompt.pipe(model).pipe(outputParser);
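Note that the model referenced in that chain is a chat model object whose setup is not shown in this post. A minimal sketch of what it might look like, assuming LangChain.js's ChatOpenAI class pointed at an OpenAI-compatible endpoint (the model name, base URL, and API key below are placeholders, not the example app's actual configuration):

    import { ChatOpenAI } from '@langchain/openai';

    // Placeholder setup -- the example app may point at a locally hosted model;
    // the model name, base URL, and API key here are assumptions.
    const model = new ChatOpenAI({
      model: 'mistral',
      temperature: 0.9,
      apiKey: 'not-needed-for-local-servers',
      configuration: {
        baseURL: 'http://localhost:11434/v1'
      }
    });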

Finally, we can invoke this chain to get the response. In the prompt we left placeholders for both the user's input and the formatting instructions, which need to be passed to the chain. That looks something like this:

    return await chain.invoke({
        input: userResponse,
        format_instructions: outputParser.getFormatInstructions()
      });

     

In this example, the invoke method is used to get the response instead of the streaming approach used in the chatbot example from the previous post. Once this is invoked, the LLM should provide a result similar to this console.log output:

    {
      subject: 'Claim Regarding Vehicle Accident - Dominic Toretto',
      message: 'Dear Mr. Toretto,\n' +
        '\n' +
        'Thank you for reaching out to Parasol Insurance regarding your recent vehicle accident. We appreciate your detailed account of the circumstances surrounding the incident.\n' +
        '\n' +
        'To expedite your claim process, please provide us with the following information:\n' +
        '\n' +
        '1. Date and time of the accident\n' +
        '2. Police report number (if any)\n' +
        '3. Contact information for witnesses (if any)\n' +
        '4. Photos or videos of the damaged vehicle\n' +
        '5. Your policy number and any other relevant policy details\n' +
        '\n' +
        'Once we have this information, we can proceed with processing your claim promptly. If you have any questions or concerns, please do not hesitate to contact us.\n' +
        '\n' +
        'Looking forward to working with you.\n' +
        '\n' +
        'Best regards,\n' +
        'Parasol Insurance Team'
    }
    

    This is then passed back to the UI and properly formatted on screen.
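To make the round trip concrete, here is a hedged sketch of how the chain might be exposed to the front end, assuming the chain and outputParser from the earlier snippets are in scope. The route path and request shape are assumptions, not necessarily the example app's actual API:

    import express from 'express';

    const app = express();
    app.use(express.json());

    // Hypothetical endpoint: receives the correspondence from the text area
    // and returns the { subject, message } object the front end expects.
    app.post('/api/generate-email', async (req, res) => {
      const result = await chain.invoke({
        input: req.body.text,
        format_instructions: outputParser.getFormatInstructions()
      });
      res.json(result);
    });

    app.listen(8080);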

    Conclusion

While this post was a bit shorter than the first, it shows how simple returning structured output from an LLM can be.

The next post in the series will take a look at how to give your LLM a little more context when answering questions, using the Retrieval-Augmented Generation (RAG) concept.

As always, if you want to learn more about what the Red Hat Node.js team is up to, check these out:

    https://developers.redhat.com/topics/nodejs

    https://developers.redhat.com/topics/nodejs/ai

    https://github.com/nodeshift/nodejs-reference-architecture

    https://developers.redhat.com/e-books/developers-guide-nodejs-reference-architecture
