Saturday, May 21, 2022

ChatOps: Managing Kubernetes Deployments in Webex


This is the third post in a series about writing ChatOps services on top of the Webex API.  In the first post, we built a Webex Bot that received message events from a group room and printed the event JSON out to the console.  In the second, we added security to that Bot, first adding an encrypted authentication header to Webex events, and subsequently adding a simple list of authorized users to the event handler.  We also added user feedback by posting messages back to the room where the event was raised.

In this post, we'll build on what was done in the first two posts and start to apply real-world use cases to our Bot.  The goal here will be to manage Deployments in a Kubernetes cluster using commands entered into a Webex room.  Not only is this a fun problem to solve, but it also provides wider visibility into the goings-on of an ops team, since they can scale a Deployment or push out a new container version in the public view of a Webex room.  You can find the completed code for this post on GitHub.

This post assumes that you've completed the steps listed in the first two blog posts.  You can find the code from the second post here.  Also, importantly, be sure to read the first post to learn how to make your local development environment publicly accessible so that Webex Webhook events can reach your API.  Make sure that your tunnel is up and running and that Webhook events can flow through to your API successfully before proceeding to the next section.  In this case, I've set up a new Bot called Kubernetes Deployment Manager, but you can use your existing Bot if you like.  From here on out, this post assumes that you've taken these steps and have a successful end-to-end data flow.

Architecture

Let’s check out what we’re going to construct:

Architecture Diagram

Building on top of our existing Bot, we're going to create two new services: MessageIngestion and Kubernetes.  The latter will take a configuration object that gives it access to our Kubernetes cluster, and will be responsible for sending requests to the K8s control plane.  Our Index Router will continue to act as a controller, orchestrating data flows between services.  And our WebexNotification service that we built in the second post will continue to be responsible for sending messages back to the user in Webex.

Our Kubernetes Resources

In this section, we'll set up a simple Deployment in Kubernetes, as well as a Service Account that we can leverage to communicate with the Kubernetes API using the NodeJS SDK.  Feel free to skip this part if you already have these resources created.

This section also assumes that you have a Kubernetes cluster up and running, and that both you and your Bot have network access to interact with its API.  There are plenty of resources online for getting a Kubernetes cluster set up and getting kubectl installed, both of which are beyond the scope of this blog post.

Our Test Deployment

To keep things simple, I'm going to use Nginx as my deployment container – an easily-accessible image that doesn't have any dependencies to get up and running.  If you have a Deployment of your own that you'd like to use instead, feel free to replace what I've listed here with that.

# in resources/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
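If you're following along, you can apply this file and confirm that both replicas come up before moving on – assuming your kubeconfig already points at the cluster:

$ kubectl apply -f resources/nginx-deployment.yaml
$ kubectl get deployment nginx-deployment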

Our Service Account and Role

The next step is to make sure our Bot code has a way of interacting with the Kubernetes API.  We can do that by creating a Service Account (SA) that our Bot will assume as its identity when calling the Kubernetes API, and ensuring it has proper access with a Kubernetes Role.

First, let’s arrange an SA that may work together with the Kubernetes API:

# in resources/sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: chatops-bot

Now we’ll create a Function in our Kubernetes cluster that can have entry to just about the whole lot within the default Namespace.  In a real-world software, you’ll doubtless wish to take a extra restrictive method, solely offering the permissions that enable your Bot to do what you propose; however wide-open entry will work for a easy demo:

# in resources/role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: chatops-admin
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

Lastly, we’ll bind the Function to our SA utilizing a RoleBinding useful resource:

# in resources/rb.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: chatops-admin-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: chatops-bot
  apiGroup: ""
roleRef:
  kind: Role
  name: chatops-admin
  apiGroup: ""

Apply these using kubectl:

$ kubectl apply -f resources/sa.yaml
$ kubectl apply -f resources/role.yaml
$ kubectl apply -f resources/rb.yaml

Once your SA is created, fetching its information will show you the name of the Secret in which its Token is stored.

Screenshot of the Service Account's describe output

Fetching information about that Secret will print out the Token string in the console.  Be careful with this Token, as it's your SA's secret, used to access the Kubernetes API!

The secret token value
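If you prefer the command line to digging through describe output, the same information can be pulled with kubectl.  The Secret name below is a placeholder – substitute the one from your own describe output.  (Note that clusters running Kubernetes 1.24 or later may no longer auto-generate a token Secret for an SA; in that case, kubectl create token chatops-bot will mint one.)

$ kubectl describe serviceaccount chatops-bot
$ kubectl get secret chatops-bot-token-xxxxx -o jsonpath='{.data.token}' | base64 --decode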

Configuring the Kubernetes SDK

Since we’re writing a NodeJS Bot on this weblog publish, we’ll use the JavaScript Kubernetes SDK for calling our Kubernetes API.  You’ll discover, for those who take a look at the examples within the Readme, that the SDK expects to have the ability to pull from an area kubectl configuration file (which, for instance, is saved on a Mac at ~/.kube/config).  Whereas that may work for native growth, that’s not supreme for Twelve Issue growth, the place we sometimes go in our configurations as setting variables.  To get round this, we will go in a pair of configuration objects that mimic the contents of our native Kubernetes config file and might use these configuration objects to imagine the identification of our newly created service account.

Let’s add some setting variables to the AppConfig class that we created within the earlier publish:

// in config/AppConfig.js
// inside the constructor block
// after the previous environment variables

// whatever you'd like to call this cluster; any string will do
this.clusterName = process.env['CLUSTER_NAME'];
// the base URL of your cluster, where the API can be reached
this.clusterUrl = process.env['CLUSTER_URL'];
// the CA cert set up for your cluster, if applicable
this.clusterCert = process.env['CLUSTER_CERT'];
// the SA name from above - chatops-bot
this.kubernetesUsername = process.env['KUBERNETES_USERNAME'];
// the token value referenced in the screenshot above
this.kubernetesToken = process.env['KUBERNETES_TOKEN'];

// the rest of the file is unchanged…
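For local development, these values might be exported into your shell before starting the app.  Every value below is a placeholder – substitute your own cluster details and the SA token from earlier:

# placeholders only - substitute your own values
export CLUSTER_NAME="chatops-demo"
export CLUSTER_URL="https://203.0.113.10:6443"
export CLUSTER_CERT="LS0tLS1CRUdJTi..."
export KUBERNETES_USERNAME="chatops-bot"
export KUBERNETES_TOKEN="eyJhbGciOi..."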

These five configuration properties will let us pass values into the Kubernetes SDK and configure a local client.  To do that, we'll create a new service called KubernetesService, which we'll use to communicate with our K8s cluster:

// in services/Kubernetes.js

import {KubeConfig, AppsV1Api, PatchUtils} from '@kubernetes/client-node';

export class KubernetesService {
    constructor(appConfig) {
        this.appClient = this._initAppClient(appConfig);
        this.requestOptions = { "headers": { "Content-type":
            PatchUtils.PATCH_FORMAT_JSON_PATCH } };
    }

    _initAppClient(appConfig) { /* we'll fill this in soon */ }

    async takeAction(k8sCommand) { /* we'll fill this in later */ }
}

The set of imports at the top gives us the objects and methods that we'll need from the Kubernetes SDK to get up and running.  The requestOptions property set in the constructor will be used when we send updates to the K8s API.

Now, let’s populate the contents of the _initAppClient technique in order that we will have an occasion of the SDK prepared to make use of in our class:

// inside the KubernetesService class
_initAppClient(appConfig) {
    // build objects from the env vars we pulled in
    const cluster = {
        name: appConfig.clusterName,
        server: appConfig.clusterUrl,
        caData: appConfig.clusterCert
    };
    const user = {
        name: appConfig.kubernetesUsername,
        token: appConfig.kubernetesToken,
    };
    // create a new config factory object
    const kc = new KubeConfig();
    // pass in our cluster and user objects
    kc.loadFromClusterAndUser(cluster, user);
    // return the client created by the factory object
    return kc.makeApiClient(AppsV1Api);
}

Simple enough.  At this point, we have a Kubernetes API client ready to use, stored in a class property so that public methods can leverage it in their internal logic.  Let's move on to wiring this into our route handler.
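Before we do, if you'd like to confirm that the client and credentials actually work, a quick standalone sanity check might look something like this – illustrative only, not part of the Bot's code path, and assuming the environment variables above are set:

// sanity-check.js - an illustrative one-off script, not part of the Bot
import {AppConfig} from './config/AppConfig.js';
import {KubernetesService} from './services/Kubernetes.js';

const svc = new KubernetesService(new AppConfig());
// list Deployments in the default Namespace to prove the SA token works
const res = await svc.appClient.listNamespacedDeployment('default');
console.log(res.body.items.map(d => d.metadata.name)); // e.g. [ 'nginx-deployment' ]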

Message Ingestion and Validation

In a previous post, we took a look at the full JSON payload that Webex sends to our Bot when a new message event is raised.  It's worth looking at it again, since it indicates what we need to do in our next step:

Message event body

If you look through this JSON, you'll notice that nowhere does it list the actual content of the message that was sent; it simply provides event data.  However, we can use the data.id field to call the Webex API and fetch that content, so that we can take action on it.  To do so, we'll create a new service called MessageIngestion, which will be responsible for pulling in messages and validating their content.

Fetching Message Content

We’ll begin with a quite simple constructor that pulls within the AppConfig to construct out its properties, one easy technique that calls a few stubbed-out non-public strategies:

// in services/MessageIngestion.js
export class MessageIngestion {
    constructor(appConfig) {
        this.botToken = appConfig.botToken;
    }

    async determineCommand(event) {
        const message = await this._fetchMessage(event);
        return this._interpret(message);
    }

    async _fetchMessage(event) { /* we'll fill this in next */ }

    _interpret(rawMessageText) { /* we'll talk about this below */ }
}

We’ve bought a superb begin, so now it’s time to put in writing our code for fetching the uncooked message textual content.  We’ll name the identical /messages endpoint that we used to create messages within the earlier weblog publish, however on this case, we’ll fetch a particular message by its ID:

// in services/MessageIngestion.js
// inside the MessageIngestion class

// note we're using fetch, which requires NodeJS 17.5 or higher, and a runtime flag
// see the previous post for more info
async _fetchMessage(event) {
    const res = await fetch("https://webexapis.com/v1/messages/" + event.data.id, {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${this.botToken}`
        },
        method: "GET"
    });
    const messageData = await res.json();
    if (!messageData.text) {
        throw new Error("Could not fetch message content.");
    }
    return messageData.text;
}

If you console.log the messageData output from this fetch request, it will look something like this:

The messageData object

As you’ll be able to see, the message content material takes two types – first in plain textual content (identified with a purple arrow), and second in an HTML block.  For our functions, as you’ll be able to see from the code block above, we’ll use the plain textual content content material that doesn’t embody any formatting.

Message Analysis and Validation

This is a complex topic to say the least, and its complexities are beyond the scope of this blog post.  There are a lot of ways to analyze the content of a message to determine user intent.  You could explore natural language processing (NLP), for which Cisco offers an open-source Python library called MindMeld.  Or you could leverage off-the-shelf software like Amazon Lex.

In my code, I took the simple approach of static string analysis, with some rigid rules around the expected format of the message, e.g.:

<tagged-bot-name> scale <name-of-deployment> to <number-of-instances>

It’s not probably the most user-friendly method, but it surely will get the job accomplished for a weblog publish.

I’ve two intents accessible in my codebase – scaling a Deployment and updating a Deployment with a brand new picture tag.  A swap assertion runs evaluation on the message textual content to find out which of the actions is meant, and a default case throws an error that will probably be dealt with within the index route handler.  Each have their very own validation logic, which provides as much as over sixty strains of string manipulation, so I received’t record all of it right here.  For those who’re considering studying via or leveraging my string manipulation code, it may be discovered on GitHub.

Analysis Output

The happy-path output of the _interpret method is a new data transfer object (DTO), created in a new file:

// in dto/KubernetesCommand.js
export class KubernetesCommand {
    constructor(props = {}) {
        this.type = props.type;
        this.deploymentName = props.deploymentName;
        this.imageTag = props.imageTag;
        this.scaleTarget = props.scaleTarget;
    }
}

This standardizes the expected format of the analysis output, which can be anticipated by the various command handlers that we'll add to our Kubernetes service.
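As promised above, here's a minimal sketch of what the scale branch of _interpret might look like using this DTO.  The regex and error message are illustrative – the repo's actual validation logic is longer and stricter:

// inside the MessageIngestion class - an illustrative sketch, not the repo code
// assumes: import {KubernetesCommand} from '../dto/KubernetesCommand.js';
_interpret(rawMessageText) {
    // the plain text starts with the Bot's display name, which can contain
    // spaces, so anchor on the action keyword instead of splitting on tokens
    const scaleMatch = rawMessageText.match(/\bscale\s+(\S+)\s+to\s+(\d+)\b/i);
    if (scaleMatch) {
        return new KubernetesCommand({
            type: "scale",
            deploymentName: scaleMatch[1],
            scaleTarget: parseInt(scaleMatch[2], 10)
        });
    }
    throw new Error("Could not determine a supported command from the message.");
}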

Sending Commands to Kubernetes

For simplicity’s sake, we’ll concentrate on the scaling workflow as an alternative of the 2 I’ve bought coded.  Suffice it to say, that is on no account scratching the floor of what’s potential along with your Bot’s interactions with the Kubernetes API.

Creating a Webex Notification DTO

The first thing we'll do is craft the shared DTO that will contain the output of our Kubernetes command methods.  This will be passed into the WebexNotification service that we built in our last blog post, and will standardize the expected fields for the methods in that service.  It's a very simple class:

// in dto/Notification.js
export class Notification {
    constructor(props = {}) {
        this.success = props.success;
        this.message = props.message;
    }
}

This is the object we'll build when we return the results of our interactions with the Kubernetes SDK.

Handling Commands

Previously in this post, we stubbed out the public takeAction method in the KubernetesService.  This is where we'll determine which action is being requested, and then pass it to internal private methods.  Since we're only looking at the scale action in this post, this implementation has two paths.  The code on GitHub has more.

// in services/Kubernetes.js
// inside the KubernetesService class
async takeAction(k8sCommand) {
    let result;
    switch (k8sCommand.type) {
        case "scale":
            result = await this._updateDeploymentScale(k8sCommand);
            break;
        default:
            throw new Error(`The action type ${k8sCommand.type} that was determined by the system is not supported.`);
    }
    return result;
}

Very straightforward – if a recognized command type is identified (in this case, just "scale"), an internal method is called and the results are returned.  If not, an error is thrown.

Implementing our internal _updateDeploymentScale method requires very little code.  However, it leverages the K8s SDK, which, to say the least, isn't very intuitive.  The data payload that we create consists of an operation (op) that we'll perform on a Deployment configuration property (path), with a new value (value).  The SDK's patchNamespacedDeployment method is documented in the Typedocs linked from the SDK repo.  Here's my implementation:

// in services/Kubernetes.js
// inside the KubernetesService class
async _updateDeploymentScale(k8sCommand) {
    // craft a JSON Patch body with an updated replica count
    const patch = [
        {
            "op": "replace",
            "path": "/spec/replicas",
            "value": k8sCommand.scaleTarget
        }
    ];
    // call the K8s API with a PATCH request
    const res = await this.appClient.patchNamespacedDeployment(
        k8sCommand.deploymentName, "default", patch,
        undefined, undefined, undefined, undefined, this.requestOptions);
    // validate the response and return a Notification object to the caller
    return this._validateScaleResponse(k8sCommand, res.body);
}

The method called on the last line of that code block is responsible for crafting our response output:

// in services/Kubernetes.js
// inside the KubernetesService class
// (Notification is imported at the top of the file from '../dto/Notification.js')
_validateScaleResponse(k8sCommand, template) {
    if (template.spec.replicas === k8sCommand.scaleTarget) {
        return new Notification({
            success: true,
            message: `Successfully scaled to ${k8sCommand.scaleTarget} instances on the ${k8sCommand.deploymentName} deployment`
        });
    } else {
        return new Notification({
            success: false,
            message: `The Kubernetes API returned a replica count of ${template.spec.replicas}, which does not match the desired ${k8sCommand.scaleTarget}`
        });
    }
}
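For reference, the image-update intent mentioned earlier follows the exact same JSON Patch pattern, just with a different path.  Here's a hedged sketch, assuming the DTO's imageTag field carries a full image reference – the repo's actual handler may differ:

// inside the KubernetesService class - an illustrative sketch, not the repo code
async _updateDeploymentImage(k8sCommand) {
    // JSON Patch targeting the first container's image field
    const patch = [
        {
            "op": "replace",
            "path": "/spec/template/spec/containers/0/image",
            // assumes imageTag holds a full image reference, e.g. "nginx:1.21"
            "value": k8sCommand.imageTag
        }
    ];
    const res = await this.appClient.patchNamespacedDeployment(
        k8sCommand.deploymentName, "default", patch,
        undefined, undefined, undefined, undefined, this.requestOptions);
    return new Notification({
        success: true,
        message: `Updated ${k8sCommand.deploymentName} to image ${k8sCommand.imageTag}`
    });
}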

Updating the Webex Notification Service

We’re virtually on the finish!  We nonetheless have one service that must be up to date.  In our final weblog publish, we created a quite simple technique that despatched a message to the Webex room the place the Bot was known as, primarily based on a easy success or failure flag.  Now that we’ve constructed a extra complicated Bot, we want extra complicated consumer suggestions.

There are only two methods that we need to cover here.  They could easily be compacted into one, but I prefer to keep them separate for granularity.

The public method that our route handler will call is sendNotification, which we'll refactor as follows:

// in services/WebexNotifications.js
// inside the WebexNotifications class
// notice that we're now taking the original event
// and the Notification object
async sendNotification(event, notification) {
    let message = `<@personEmail:${event.data.personEmail}>`;
    if (!notification.success) {
        message += ` Oh no! Something went wrong! ${notification.message}`;
    } else {
        message += ` Well done! ${notification.message}`;
    }
    const req = this._buildRequest(event, message); // a new private method, defined below
    const res = await fetch(req);
    return res.json();
}

Lastly, we’ll construct the non-public _buildRequest technique, which returns a Request object that may be despatched to the fetch name within the technique above:

// in services/WebexNotifications.js
// inside the WebexNotifications class
_buildRequest(event, message) {
    return new Request("https://webexapis.com/v1/messages/", {
        headers: this._setHeaders(),
        method: "POST",
        body: JSON.stringify({
            roomId: event.data.roomId,
            markdown: message
        })
    });
}
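One note: the _setHeaders helper referenced above came from the previous post.  Its rough shape is below – assuming, as in that post, that the class stored the Bot token from AppConfig; your version may differ slightly:

// inside the WebexNotifications class - rough shape of the existing helper
_setHeaders() {
    return {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${this.botToken}`
    };
}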

Tying Everything Together in the Route Handler

In previous posts, we used simple route-handler logic in routes/index.js that first logged out the event data, and then went on to respond to a Webex user depending on their access.  We'll now take a different approach, which is to wire in our services.  We'll start by pulling in the services we've created so far, keeping in mind that all of this takes place after the auth/authz middleware checks are run.  Here is the full code of the refactored route handler, with changes taking place in the import statements, initializations, and handler logic.

// revised routes/index.js
import express from 'express';
import {AppConfig} from '../config/AppConfig.js';
import {WebexNotifications} from '../services/WebexNotifications.js';
// ADD OUR NEW SERVICES AND TYPES
import {MessageIngestion} from "../services/MessageIngestion.js";
import {KubernetesService} from '../services/Kubernetes.js';
import {Notification} from "../dto/Notification.js";

const router = express.Router();
const config = new AppConfig();
const webex = new WebexNotifications(config);
// INSTANTIATE THE NEW SERVICES
const ingestion = new MessageIngestion(config);
const k8s = new KubernetesService(config);

// Our refactored route handler
router.post('/', async function(req, res) {
  const event = req.body;
  try {
    // message ingestion and analysis
    const command = await ingestion.determineCommand(event);
    // take action based on the command
    const notification = await k8s.takeAction(command);
    // respond to the user
    const wbxOutput = await webex.sendNotification(event, notification);
    res.statusCode = 200;
    res.send(wbxOutput);
  } catch (e) {
    // respond to the user
    await webex.sendNotification(event, new Notification({success: false, message: e.message}));
    res.statusCode = 500;
    res.end('Something went terribly wrong!');
  }
});

export default router;

Testing It Out!

If your service is publicly accessible, or if it's running locally and your tunnel is exposing it to the internet, go ahead and send a message to your Bot to try it out – for example, @Kubernetes Deployment Manager scale nginx-deployment to 3.  Remember that our test Deployment was called nginx-deployment, and we started with two instances.  Let's scale to three:

Successful scale to 3 instances

That takes care of the happy path.  Now let's see what happens if our command fails validation:

Failing validation

Success!  From here, the possibilities are limitless.  Feel free to share your experiences leveraging ChatOps for managing your Kubernetes deployments in the comments section below.

Follow Cisco Learning & Certifications

Twitter, Facebook, LinkedIn, and Instagram.
