How does Applied AI use AI?

Automatically generates complex business workflows.

Project Overview

Deployed a 671B-parameter LLM and a knowledge graph to generate optimized business process workflows for BPO applications, achieving sub-5-second response times.

Layman's Explanation

Imagine a brilliant project manager who has memorized every single process your company uses. You describe a complex goal, and in seconds, they hand you a complete, optimized, step-by-step plan to achieve it.

Details

Applied AI developed a "Large Work Model" (LWM) to automate complex business process outsourcing (BPO) workflows, but lacked the MLOps infrastructure to deploy it. They partnered with Revela to productionize the research, which centers on a massive 671-billion-parameter DeepSeek R1 model. The goal was to generate entire, optimized business workflows from a simple user prompt in under five seconds.

The solution involved a multi-stage pipeline. First, a user query is processed to determine intent. Then, fine-tuned BGE-EN-ICL embedding models generate context-aware vector representations for four distinct semantic roles: workflow intention, input, output, and process. These embeddings are used to perform semantic search against a "Work Knowledge Graph," which stores domain-specific procedural knowledge. This graph was implemented using a hybrid strategy of Neo4j for persistent storage and Memgraph for high-speed, in-memory queries.
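
To make the retrieval stage concrete, here is a minimal Python sketch of it: one embedding per semantic role, each used for a vector search against the graph. The index name (`process_embeddings`), node properties, Bolt URI, and the off-the-shelf BGE checkpoint are illustrative assumptions, not Applied AI's actual schema or fine-tuned models.

```python
# Minimal sketch of the retrieval stage, assuming a Neo4j vector index named
# "process_embeddings" over nodes with name/description properties, and an
# off-the-shelf BGE checkpoint in place of the fine-tuned BGE-EN-ICL models.
# All identifiers here are illustrative, not Applied AI's actual schema.
from neo4j import GraphDatabase
from sentence_transformers import SentenceTransformer

SEMANTIC_ROLES = ["intention", "input", "output", "process"]

embedder = SentenceTransformer("BAAI/bge-large-en-v1.5")  # stand-in embedding model
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def retrieve_graph_context(query: str, k: int = 5) -> dict:
    """Embed the query once per semantic role and run a vector search per role."""
    context = {}
    for role in SEMANTIC_ROLES:
        # The production system uses a separately fine-tuned embedder per role;
        # a role-specific instruction prefix loosely approximates that here.
        vector = embedder.encode(f"Represent the workflow {role}: {query}").tolist()
        records, _, _ = driver.execute_query(
            """
            CALL db.index.vector.queryNodes('process_embeddings', $k, $vector)
            YIELD node, score
            RETURN node.name AS name, node.description AS description, score
            """,
            k=k,
            vector=vector,
        )
        context[role] = [r.data() for r in records]
    return context
```

A call such as retrieve_graph_context("Onboard a new vendor and set up invoicing") would then return, per semantic role, the graph components most relevant to that goal.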

The retrieved graph context is then fed to the fine-tuned DeepSeek R1 model, which generates a structured workflow. An optimization service refines this workflow for cost and time. The entire system was deployed on a secure, air-gapped, multi-cluster Kubernetes architecture using high-performance H100 GPUs for training and inference. This infrastructure supports 50-100 inferences per pipeline run while meeting the strict sub-5-second latency requirement, successfully transforming a theoretical AI concept into a scalable, enterprise-grade product.
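
As a rough illustration of the generation and optimization steps, the sketch below assumes the fine-tuned DeepSeek R1 model is served behind an OpenAI-compatible endpoint (as vLLM-style servers expose). The endpoint URL, model id, and output schema are placeholders, and the final cost/time totals are a trivial stand-in for the real optimization service.

```python
# Rough sketch of the generation + optimization steps, assuming the fine-tuned
# DeepSeek R1 model sits behind an OpenAI-compatible endpoint (as vLLM-style
# servers expose). The URL, model id, and output schema are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://llm-gateway.internal/v1", api_key="unused")

def generate_workflow(user_goal: str, graph_context: dict) -> dict:
    """Ask the model for a structured workflow grounded in retrieved graph context."""
    prompt = (
        "Using only the processes below, return an optimized workflow as JSON with "
        'the shape {"steps": [{"name", "inputs", "outputs", "est_cost", "est_minutes"}]}.\n\n'
        f"Goal: {user_goal}\n\nKnown processes:\n{json.dumps(graph_context, indent=2)}"
    )
    response = client.chat.completions.create(
        model="deepseek-r1-lwm",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    content = response.choices[0].message.content
    # R1-style models may emit a reasoning preamble; keep only the JSON object.
    workflow = json.loads(content[content.find("{"):])
    # Stand-in for the optimization service: attach simple cost/time totals that
    # a real optimizer would minimize by re-planning or substituting steps.
    workflow["total_cost"] = sum(s.get("est_cost", 0) for s in workflow["steps"])
    workflow["total_minutes"] = sum(s.get("est_minutes", 0) for s in workflow["steps"])
    return workflow
```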

Analogy

It's like a GPS for business tasks. Instead of just finding the fastest route from A to B, it designs the entire road trip, including booking hotels, planning scenic stops, and optimizing for gas mileage, all based on your simple request to 'go to the beach'.

Other Machine Learning Techniques Used

  • Natural Language Processing: used for intent detection on user queries and by the BGE embedding models to understand semantic roles.
  • Transfer Learning: the 671B-parameter DeepSeek R1 model was fine-tuned on proprietary data, and the BGE models were fine-tuned for specific semantic tasks.
  • Reinforcement Learning: the underlying DeepSeek R1 model was trained with reinforcement learning (GRPO) to develop its advanced reasoning capabilities.
  • Embedding-based Retrieval: fine-tuned BGE models converted queries into vectors for semantic search and retrieval of components from the knowledge graph.
  • Graph-based Machine Learning: a knowledge graph stored structured procedural knowledge, enabling multi-hop reasoning and context grounding for the LLM (see the sketch after this list).
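
The graph-based item above can be made concrete with a small Cypher query run through the Neo4j Python driver. This is a hedged sketch under assumed labels and relationship types (Artifact, Process, CONSUMED_BY, PRODUCES), which are not described in the source: it finds the shortest chain of processes that transforms an available input into a desired output, the kind of multi-hop lookup that grounds the LLM.

```python
# Hedged sketch of multi-hop reasoning over the Work Knowledge Graph: find the
# shortest chain of processes that turns an available artifact into a desired
# one. The Artifact/Process labels and CONSUMED_BY/PRODUCES relationships are
# assumptions for illustration; the real schema is not described in the source.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def find_process_chain(start_artifact: str, target_artifact: str):
    """Return the names of processes along the shortest input-to-output path."""
    records, _, _ = driver.execute_query(
        """
        MATCH path = (a:Artifact {name: $start})-[:CONSUMED_BY|PRODUCES*1..8]->
                     (b:Artifact {name: $target})
        RETURN [n IN nodes(path) WHERE n:Process | n.name] AS chain
        ORDER BY length(path) ASC
        LIMIT 1
        """,
        start=start_artifact,
        target=target_artifact,
    )
    return records[0]["chain"] if records else None
```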

Novelty: 5/5

Novelty Justification

The project is at the frontier of applied AI, successfully productionizing a state-of-the-art 671B-parameter model with a complex knowledge graph architecture to meet strict, real-time enterprise latency requirements.

