Deploy the IBM i MCP Server on Red Hat OpenShift using Kustomize with source-to-image (S2I) builds. New image builds automatically trigger redeployment.

Overview

The OpenShift deployment uses:
  • Source-to-Image (S2I) builds from the GitHub repository
  • Kustomize for configuration management
  • ImageStreamTag triggers for automatic redeployment on new builds
  • OpenShift Routes for TLS-terminated external access
The manifests deploy the IBM i MCP Server alongside optional components including MCP Context Forge Gateway and agent infrastructure.

Prerequisites

1. OpenShift Cluster Access

You need access to an OpenShift cluster with permissions to create BuildConfigs, Deployments, Services, and Routes.
# Verify CLI access
oc whoami
oc project
2. Install Required Tools
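You will need the oc CLI and kustomize on your workstation. As a quick sanity check (assuming both are already on your PATH), verify the installed versions:

```shell
# Verify the OpenShift CLI
oc version --client

# Verify kustomize (oc also bundles a copy: `oc kustomize --help`)
kustomize version
```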

3. Enable Internal Image Registry

The S2I builds require the OpenShift internal image registry. Follow the Red Hat documentation to enable it if not already active.
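On OpenShift 4.x the internal registry is controlled by the image registry operator. A minimal sketch of enabling it (this assumes cluster-admin rights and that registry storage is already configured; see the Red Hat documentation for storage setup):

```shell
# Set the registry operator to Managed so the internal registry runs
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"managementState":"Managed"}}'

# Confirm the registry pods come up
oc get pods -n openshift-image-registry
```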
4. Prepare Environment Files

You will need .env files with IBM i credentials and server configuration for the deployment.

Deployment Steps

1. Clone the Repository

git clone https://github.com/IBM/ibmi-mcp-server.git
cd ibmi-mcp-server/deployment/openshift/apps/openshift
2. Prepare Configuration Files

Copy required configuration files into the deployment directories:
# Get env file for the MCP server
cp ../../../../.env.example ./ibmi-mcp-server/.env
# Edit with your IBM i credentials
vi ./ibmi-mcp-server/.env

# Copy SQL tools and secrets directories
cp -r ../../../../tools ./ibmi-mcp-server/
cp -r ../../../../secrets ./ibmi-mcp-server/
Edit the .env file with your actual IBM i credentials before deploying. The file should contain DB2i_HOST, DB2i_USER, DB2i_PASS, and other required variables.
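As a sketch, a minimal .env with the variables named above might look like the following (all values are placeholders, and your deployment may require additional variables beyond these three):

```shell
# Write a placeholder .env (illustrative values only -- substitute real credentials)
cat > /tmp/ibmi-mcp.env <<'EOF'
DB2i_HOST=ibmi.example.com
DB2i_USER=mcpuser
DB2i_PASS=change-me
EOF

# Sanity-check that the required variables are present before deploying
for var in DB2i_HOST DB2i_USER DB2i_PASS; do
  grep -q "^${var}=" /tmp/ibmi-mcp.env || echo "missing: ${var}"
done
```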
3. Set Your Namespace

Update the root kustomization.yaml with your OpenShift namespace: replace <NAMESPACE_PLACEHOLDER> with your actual namespace, then switch to it:
oc project your-namespace
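The relevant field in the root kustomization.yaml looks roughly like this (the resource list shown is illustrative, not the complete file):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: your-namespace   # replaces <NAMESPACE_PLACEHOLDER>
resources:
  - ibmi-mcp-server
```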
4. Deploy with Kustomize

kustomize build . | oc apply -f -
This creates all required resources: BuildConfig, ImageStream, Deployment, Service, and Route.
5. Monitor the Build

Watch the S2I build progress:
oc logs -f bc/ibmi-mcp-server
The build clones the repository, runs the Dockerfile multi-stage build, and pushes the resulting image to the internal registry.
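The build's output is wired to an ImageStreamTag, which is what later drives redeployment. A simplified sketch of the relevant BuildConfig fields (names and Git URL taken from this repository; the exact manifest contents may differ):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: ibmi-mcp-server
spec:
  source:
    type: Git
    git:
      uri: https://github.com/IBM/ibmi-mcp-server.git
  strategy:
    type: Docker            # multi-stage Dockerfile build
  output:
    to:
      kind: ImageStreamTag
      name: ibmi-mcp-server:latest
```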
6. Verify Deployment

# Check pod status
oc get pods

# Get the external URL
echo "https://$(oc get route ibmi-mcp-server -o jsonpath='{.spec.host}')"
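Once the pod is Running, a quick reachability check against the Route can be done with curl (no particular endpoint is assumed here; any HTTP status line back confirms TLS termination and routing work):

```shell
# Fetch just the response status line from the route's host
URL="https://$(oc get route ibmi-mcp-server -o jsonpath='{.spec.host}')"
curl -skI "$URL" | head -n 1
```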

What Gets Deployed

The Kustomize manifests create the following OpenShift resources:

IBM i MCP Server

  • BuildConfig: S2I build from the GitHub repository
  • ImageStream: Tracks built images, triggers redeployment
  • Deployment: Runs the MCP server pod
  • Service: Internal cluster networking (port 3010)
  • Route: External TLS-terminated access
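On OpenShift, a plain apps/v1 Deployment is rolled out on new images via the image.openshift.io/triggers annotation. A sketch of that wiring (the container name here is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ibmi-mcp-server
  annotations:
    image.openshift.io/triggers: |-
      [{"from": {"kind": "ImageStreamTag", "name": "ibmi-mcp-server:latest"},
        "fieldPath": "spec.template.spec.containers[?(@.name==\"ibmi-mcp-server\")].image"}]
```

When the ImageStreamTag is updated by a completed build, OpenShift patches the referenced container image, which triggers a new rollout.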

Optional Components

The full Kustomize overlay can also deploy:
  • MCP Context Forge: Gateway with tool federation, auth, admin UI
  • Agent UI: Web interface for interacting with agents
  • Agent OS API: Backend API for agent infrastructure
  • pgvector: PostgreSQL with vector extension for embeddings
Edit the root kustomization.yaml to select which components to deploy. You can deploy only the IBM i MCP Server if you don’t need the full agent infrastructure.

Triggering Rebuilds

Automatic

Redeployment is triggered automatically whenever the ImageStreamTag is updated, so every completed build rolls out without further action. After pushing a code change, start a build:
oc start-build ibmi-mcp-server

From Local Source

Build directly from your local working directory:
oc start-build ibmi-mcp-server --from-dir=.
This uploads your local source code to OpenShift for building, which is useful for testing changes before pushing to Git.

From Remote Repository

# Trigger a new build from the configured Git source
oc start-build ibmi-mcp-server

Manifest Structure

The deployment manifests are organized under deployment/openshift/apps/openshift/:
deployment/openshift/apps/openshift/
├── kustomization.yaml                    # Root Kustomize config
├── ibmi-mcp-server/
│   ├── kustomization.yaml
│   ├── ibmi-mcp-server-buildconfig.yaml  # S2I build configuration
│   ├── ibmi-mcp-server-imagestream.yaml  # Image tracking
│   ├── ibmi-mcp-server-deployment.yaml   # Pod specification
│   ├── ibmi-mcp-server-service.yaml      # Internal networking
│   └── ibmi-mcp-server-route.yaml        # External access
├── mcpgateway/                           # MCP Context Forge (optional)
└── ibmi-agent-infra/                     # Agent infrastructure (optional)

Troubleshooting

If the build fails, check the build logs for detailed error information:
oc logs -f bc/ibmi-mcp-server
Common causes:
  • Missing Dockerfile in the repository
  • Node.js dependency installation failures
  • Insufficient build resources (memory/CPU limits)
If the pod crashes or fails to start, check the pod logs:
oc logs $(oc get pods -l app=ibmi-mcp-server -o name | head -1)
Verify environment variables are set correctly:
oc get deployment ibmi-mcp-server -o jsonpath='{.spec.template.spec.containers[0].env}'
If the server is not reachable externally, verify the route exists and has an assigned host:
oc get route ibmi-mcp-server
Check that the service is targeting the correct port:
oc get svc ibmi-mcp-server
If persistent volume claims are pending:
oc get pvc
oc describe pvc <pvc-name>
Ensure your cluster has available storage classes with sufficient capacity.
Verify that your OpenShift cluster has network access to the IBM i system. In restricted environments, you may need:
  • Firewall rules allowing egress to IBM i on port 8076
  • Network policies permitting the MCP server pod to reach external hosts
  • DNS resolution for the IBM i hostname
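If NetworkPolicies restrict egress in your namespace, an allow rule along these lines may be needed (a sketch only; the IBM i address shown is an example, and the pod label matches the one used in the troubleshooting commands above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ibmi-egress
spec:
  podSelector:
    matchLabels:
      app: ibmi-mcp-server
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # IBM i host (example address)
      ports:
        - protocol: TCP
          port: 8076
```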