vRA 7.0 and ServiceNow Integration

With the release of vRealize Automation 7.0 come some fantastic goodies, including the new Event Broker system for integration and extensibility. Although I thought our previous methods with vRA 6.x and vRO were great, Event Broker just makes life so much easier. Let me explain…

You no longer have to write monster wrapper workflows for a given lifecycle stub. If you wanted to integrate with 3 or 4 systems from a single stub in vRA 6.x, you would have written a master workflow called, for example, MachineProvisioned, and inside that workflow you would have called several sub-workflows such as Active Directory, Infoblox, ServiceNow, etc. Something like this:

oldvCOworkflow


Now with Event Broker, you can run an unlimited (please don’t do that) number of workflows at any provisioning phase you like, and you can do so in any order using priorities and subscriptions. It’s incredible.

In this article, I’m going to walk you through a classic use case for doing CMDB inserts and updates with vRA 7 and vRO 7 using the new Event Broker system.

Let’s begin with the vRO (Orchestrator) Configuration setup.

First you will want to grab my updated vRO package here:

com.vmware.set.national.snow.VRA7

You should already be familiar with how to import a package. Once the package is imported, you should see a SNOW folder with the updated workflows:

packageimport

Once that is completed, you must add the SOAP host for your ServiceNow instance and the CMDB table you are using. For example, I’m using the most common table, like this:

https://YourSNOWIP/cmdb_ci_vmware_instance.do?WSDL

Run the workflow for “Add a SOAP host” and provide the WSDL, proxy, and login information/type:

addSOAPHost

It will probably ask you to import your ServiceNow certificate; do so and choose yes.

Once that is completed, you should be able to browse your SOAP host from the Inventory tab in vRO. If you can see the methods, you are golden:

soapMethods
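
If you want to sanity-check the connection outside of the packaged workflows, here is a minimal vRO JavaScript sketch of calling the table’s insert method through the SOAP plug-in scripting objects. Treat it as an illustration only: the soapHost input, the field names, and the exact plug-in method names are my assumptions, so verify them against the plug-in API explorer and your table before relying on it.

  // Minimal sketch: insert a test CI. Assumes a SOAP:Host input named soapHost
  // pointed at cmdb_ci_vmware_instance.do?WSDL; field names are illustrative only.
  var operation = soapHost.getOperation("insert");
  var request = operation.createSOAPRequest();
  request.setInParameter("name", "my-test-vm");           // hypothetical CI name
  request.setInParameter("correlation_id", "vra-test");   // hypothetical correlation value
  var response = operation.invoke(request);
  var sysId = response.getOutParameter("sys_id");         // ServiceNow returns the new record's sys_id
  System.log("Inserted CI with sys_id: " + sysId);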

Now there is one final step in vRO. You must update the configuration item for the SOAP host you just added. Do this from the Configuration tab in Design mode. Under Attributes, choose your SOAP host, then save and close:

soapConfigItem
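
Presumably the packaged workflows pick up the SOAP host from this configuration element at run time rather than asking for it on every run, which is why this step matters. As a rough sketch of how that kind of lookup works in vRO JavaScript (the category path, element name, and attribute key below are assumptions; use whatever the imported package actually defines):

  // Sketch: reading a SOAP host from a vRO configuration element.
  var category = Server.getConfigurationElementCategoryWithPath("SNOW");    // hypothetical category path
  var soapHost = null;
  for each (var element in category.configurationElements) {
      if (element.name === "ServiceNow") {                                  // hypothetical element name
          soapHost = element.getAttributeWithKey("soapHost").value;         // hypothetical attribute key
      }
  }
  if (soapHost != null) {
      System.log("Workflows will use SOAP host: " + soapHost.name);
  } else {
      System.warn("Configuration element or SOAP host attribute not found");
  }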


Now we will configure vRA and the Event Broker

We will first set up a Property Group (formerly a Build Profile). Yes, it’s true, some things have moved around and changed names in vRA 7.

Log in to vRA 7 with appropriate permissions, head over to the Administration tab, and choose Property Dictionary.

Let’s create a new Property Group and add 2 custom properties called:

Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.BuildingMachine

Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.Disposing

Make the value * for each.

propertyGroup


Yes I know, things have changed from 6.x. It’s good, I mean it’s really good. These properties tell vRA to send all of the custom properties (the * value) to vRO for the BuildingMachine and Disposing states. You also have the option to send only specific properties rather than dumping everything. And you have a ton more lifecycle states to choose from than the old BuildingMachine and MachineProvisioned stubs. I’ll get a post out soon covering basic extensibility concepts.
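
To make that concrete, the subscribed vRO workflow receives everything in a single Properties payload. Here is a rough sketch of pulling the custom properties out of it, assuming an input parameter named payload and the machine provisioning payload layout; the key names are from memory, so verify them against a real run before building on this:

  // Sketch: reading vRA custom properties from the Event Broker payload.
  var lifecycleState = payload.get("lifecycleState");     // Properties: state, phase, event
  var machine = payload.get("machine");                   // Properties: id, name, properties, ...
  var machineProperties = machine.get("properties");      // the custom properties vRA sent over
  System.log("State: " + lifecycleState.get("state") + " / " + lifecycleState.get("phase"));
  System.log("Machine: " + machine.get("name"));
  for each (var key in machineProperties.keys) {
      System.log(key + " = " + machineProperties.get(key));
  }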

Don’t forget to add your Property Group to your blueprint. Head over to the Design tab (SAY WHAT???)  to edit your blueprint and add the Property Group to it.

Now it’s time for the fun stuff. Let’s setup the Event Broker subscriptions!!!

Ok, head back to the Administration tab and choose Events and Subscriptions. We are going to add 2 subscriptions: one for adding a CI and one for updating that CI in our CMDB.

For Event Topic choose Machine Provisioning and hit Next.

Select Run Based on Conditions

Hit the drop-down and select All of the following and then hit the + twice to add 2 more expressions. We are setting up 3 conditions for our subscription to run:

subscription-1

For each expression, set up the following:

  1. Data/Lifecycle State/Lifecycle State name/Equals/Constant/VMPSMasterWorkflow32.BuildingMachine
  2. Data/Lifecycle State/State phase/Equals/Constant/PRE
  3. Data/Machine/Machine type/Equals/Constant/Virtual Machine

subscription-2

Click next and browse to the workflow for:

master_cmdb_insert_vRA7

subscription-3

Under Details, you must select Blocking (this allows the sys_id to be written back to the vRA VM as a custom property) and enter a timeout value. I’m using 5 minutes, and I left the priority at 10 (the default).

subscription-4
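
For reference, the write-back itself happens because the workflow returns a Properties output that vRA applies to the machine, which is the reason the subscription must be blocking. As far as I recall, the expected output parameter name on the machine provisioning topic is virtualMachineAddOrUpdateProperties, but treat the sketch below as an assumption and check the packaged master workflow for the real name:

  // Sketch: returning the ServiceNow sys_id to vRA as a custom property.
  // Assumes sysId holds the value from the CMDB insert and that the workflow
  // declares an output parameter named virtualMachineAddOrUpdateProperties.
  var virtualMachineAddOrUpdateProperties = new Properties();
  virtualMachineAddOrUpdateProperties.put("sys_id", sysId);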

Now make sure you select the subscription and choose Publish!!!

Ok, now we need to set up a subscription for the CMDB update when we destroy a machine.

Add a new subscription and set it up exactly like the one we just built, except for 2 changes (the first two conditions below):

  1. Data/Lifecycle State/Lifecycle State name/Equals/Constant/VMPSMasterWorkflow32.Disposing
  2. Data/Lifecycle State/State phase/Equals/Constant/POST
  3. Data/Machine/Machine type/Equals/Constant/Virtual Machine

Click next and select the update (not insert) master workflow:

master_cmdb_update_vRA7

subscription-5

You do NOT need to choose Blocking or select a timeout. This is because we are not updating any custom properties; we are simply updating the CI directly in ServiceNow.
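
Conceptually, the update master presumably keys off that sys_id custom property and flips the CI’s status in ServiceNow. A rough sketch of what such an update looks like, with the caveat that the install_status field and the value 7 for “Retired” are ServiceNow defaults I am assuming here; the packaged workflow may use a different field or value:

  // Sketch: marking the CI retired via the table's update method.
  var operation = soapHost.getOperation("update");
  var request = operation.createSOAPRequest();
  request.setInParameter("sys_id", sysId);            // the custom property written at build time
  request.setInParameter("install_status", "7");      // assumed: 7 = Retired in the default choice list
  operation.invoke(request);
  System.log("Marked CI " + sysId + " as retired in ServiceNow");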

Provision a machine from vRA. When it completes, check that there is a new custom property called sys_id in the machine details.

Make sure there is a new CI in your ServiceNow CMDB for this VM.

Destroy the machine in vRA and make sure the CI is marked as retired in ServiceNow.

Note that you can now unsubscribe from events in vRA to enable/disable integrations. You can run as many workflows as you like using subscriptions.

subscription-6

Enjoy. Let me know if you have questions.



25 Comments

  1. Hello,

    What about the unprovision phase to delete the CI? How can we do it?

    My customers have also asked me how to integrate more deeply with SNOW, meaning taking care of all the required workflows: creating CIs for the complete design (not only the compute), creating relationships between components, and also updating each CI as you did, but only after the complete component has been provisioned.

    In order to do this we need many more lifecycle action workflows on each component. How do we deal with this today?

    1. Sure, you can delete the CI instead of marking it as expired as I did. As you can see in my screenshot, once you add your SOAP host, all of the methods are exposed, including deleteRecord. You will need to modify the workflow, or write your own that performs this method instead of marking it as expired (see the quick sketch at the end of this reply).

      With respect to your second comment, sure, all of that can be done, including opening change cases and setting up relationships between various CIs and change tickets as well. I’ve done that for other companies, and you can make it as simple or as complex as you like. You must have some experience with both ServiceNow and vRO, but it shouldn’t be too difficult.
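
      A minimal vRO JavaScript sketch of calling that method (assuming the same soapHost and a sysId variable; the plug-in method names here are from memory, so treat it as a starting point rather than a drop-in):

      var operation = soapHost.getOperation("deleteRecord");  // direct web service delete
      var request = operation.createSOAPRequest();
      request.setInParameter("sys_id", sysId);                // the sys_id written back at build time
      operation.invoke(request);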

  2. For some reason we are getting an error with the VCAC host display name being blank. We saw this in VCAC 6.2 but cannot remember the fix:

    tblab12: sendEBSMessage15(workflow=1aa79abe-b9d6-4652-8a74-81d187a43a66) Error in state VMPSMasterWorkflow32.BuildingMachine phase PRE event (queue = 14379143-0d6d-4711-8509-956d73103316):
    Extensibility consumer error(10010) – TypeError: Cannot read property “displayName” from null (Workflow:master_cmdb_insert / Display inputs (item1)#10198)

    1. Hmm, are you able to browse your vCAC host from the Inventory view in vRO? Do you see entities such as catalog and reservations when you browse from vRO? In other words, is the connection set up properly between vRA and vRO?

  3. Have you tried to provision more than one VM per blueprint? We are seeing an issue when deploying a multi-machine blueprint on decommissioning. When we decommission, only one of the 2 VMs is marked as “retired”; the other remains with a status of “On”.

    A second note: when we test and uncheck the “property group” from the blueprint, it still halfway tries to call the workflow despite the fact that we unchecked it, and it fails with errors.
    Wondering why, when we uncheck the property group, it still tries to call those properties.

  4. The property group does NOT trigger the workflow to run. The property group only provides the custom property values when the workflow runs from Event Broker via the subscription. In 7.x, this is a big change. Workflows are triggered via an Event Subscription, so in your case, yes, the workflow will still run even if you uncheck the property group. And of course, it will fail, because the custom properties will not pass in all the values since the group is unchecked from the blueprint.

    Regarding multiple machines, no, I haven’t tested this yet. I’ll try to do it this week.

  5. I’ll troubleshoot the multi-machine blueprint and get back to you once I get back into vCO.
    As per the Event Subscription, I understand now. Thank you. Just need to figure out how to “uncheck” like we used to do with build profiles. I need to “unsubscribe” a few test workloads.

    1. Yeah, np. Yes, the new “uncheck” is simply unsubscribing in Events. The main difference is that it now affects many more things, not just a single blueprint.

      By the way, a peer of mine had the same issue yesterday with multiple machines. His workaround was to set up a dependency in the blueprint between the two machines, and it worked! Might not be ideal, but a good workaround at the moment.

  6. So we found a workaround to the dependency issue listed above.
    We added a clause to test the application of the workflows by adding a condition for “Blueprint Name Contains SNOW”.

    We then changed the Master Update to have
    Run on all conditions:
    State = Deprovision
    Phase = Post
    Blueprint name contains “SNOW”
    And Any of the following:
    MachineType = Virtual Machine
    MachineType = Multi-Machine Service

    Since the workloads now work in parallel rather than serially, the provisioning and deprovisioning time is cut in half.

    Now this is test only. Potentially on our multi-machine BPs we would want dependencies. We are just working our way around the process. Also, we would want the CMDB update to apply to all workloads, but our condition for “Blueprint Name Contains SNOW” allows us to test and figure out how the application of properties changed from build profiles in 6.2.

    This test allows us to apply workflows based on BP name and not across the board.

  7. Any experiences using ServiceNow as a front-end catalog for vRA provisioning? I’ve only seen two blogs stating how to do this (ServiceNow calling Orchestrator workflows that use the vRA plugin). My question is: how does this change the design of vRA (other than eliminating approvals and using SNOW for the approvals)?

    1. Hi Dave. Yes, this does come up from time to time. You have 2 ways to accomplish this. The first, as you mentioned, is having SNOW send a REST call into vRO, which in turn invokes a catalog request to vRA via the vRA/vRO plugin. I have found that with versions 6.x of vRA, this was the easiest approach, as you didn’t have to have an in-depth understanding of the vRA API; vRO would handle that for you. The vRA API has improved in 7.0, so now it is probably worth exploring having SNOW make the REST calls directly to vRA and bypass vRO altogether. That being said, I haven’t gone through this exercise yet.

      I agree with your statement. Approvals would typically go through SNOW in this case. One other thing to consider is Day 2 actions. Where would these be triggered? If you want these to occur in SNOW, then you have to ensure the actions are written in such a way as to trigger them via the REST API in vRA. You wouldn’t want to handle these out of band by going directly to vCenter, as that may cause the systems to become out of sync. Also, you have to consider custom Day 2 actions. These would have to be re-written if triggered via SNOW, and that might be a lot of work.

      This use case isn’t always black and white in my opinion, but I have seen both used in many organizations. Some prefer to have all the infrastructure requests/management happen in their CMP (vRA in this case), and others want the one-stop shopping ITIL approach via ServiceNow.

      One thing I can tell you is that we are exploring making SNOW/VRA an OOTB integration in a future release. It’s all conceptual at this point, but more to come.

      Thanks

      1. Thanks for your in-depth response. I’m mostly interested in the differences between the design where vRA is the portal (what I’m used to deploying), vs SNOW being the front end and having vRA just do the provisioning (and Day2 actions). I’m sure Day2 would come via the same request method (SNOW). We’ll look into the vRA API.

        I’m assuming it’s going to be relatively simple/straightforward, as all of the pre-processing/approvals are done already, and vRA just has to figure out what reservations/resources are available to provision into. Am I off base, or is there anything else to be concerned with?

        1. Ah, I think I understand your question now. Are you doing the approvals in SNOW? If so, then once it is approved, you simply formulate the REST message and send that via the API to vRA. You would NOT have approvals turned on in vRA, so it’s simply taking the request and processing it. The key is that the user who is making the REST call must have an appropriate entitlement to the catalog item in order for the request to be processed. Are you planning to use a single service account that makes all the requests, or do you plan to make the requests on behalf of the user in SNOW? Both would probably work, but in the end, you need to consider how users will interact with their workflows after provisioning. If it’s done through SNOW only (as the de-facto front end) then it probably is easier to use a service account.

          Also, to your point, once the request comes into vRA, it must respect all of the rules that are set up in terms of reservations and capacity limits. If a reservation is full on the vRA side, it will of course fail, but also, you must consider your blueprint min/max values as well. If you have a 4 vCPU max on a blueprint, and the request comes in for 8, it will of course fail.

    1. Hello. I’m not sure what you are asking. The Master CMDB Insert and Update workflows automatically bring in the custom properties from vRA and that populates the payload, which in turn does a CMDB insert.

      If you want to run a test, then run the “cmdb_insert” or “cmdb_update” workflows just above the masters. Those provide text fields for ip_address, CPU etc. and you can simply fill in some values to test your integration.

      Let me know if this makes sense.

  8. Hello,

    Thanks for the comments.

    Currently I am having some issues with creating the Property Group in vRA 7.x via a workflow in vRO 7.x. I would like to design a workflow in vRO, and from that workflow I need to create a Property Group in vRA and also assign that Property Group to an existing composite blueprint in vRA. I can do this manually from vRA, but I am looking for some API/actions so that I can do the same thing from a vRO workflow.

    Basically here is my use case:

    I need to request a catalog item, and once the catalog item is requested and the machine has been provisioned, vRA can trigger my custom workflow (on the vRO side). I followed the link below, but it is not working since I am using vRA 7.0.

    http://xtravirt.com/using-vmware-automation-to-address-a-virtual-machine-provisioning-challenge/blog

    Thanks,
    Koushik

    1. Yep, I was actually on that team at VMware, helping to scope/build that integration. My blog was before the official application was released. Thanks
