How to Build an AI Advisor for Industrial Operations Using XMPro
🌟 Unlock the Full Potential of AI Advisors for Industrial Operations with XMPro
Welcome to this comprehensive webinar on configuring AI advisors using XMPro’s innovative data stream technology. As industries pivot towards more automated and intelligent systems, understanding how to effectively implement AI advisors is crucial for staying competitive. This session provides a detailed walkthrough of the XMPro platform, demonstrating the power of AI in enhancing strategic decision-making.
🔍 What You Will Learn:
AI Assistants vs. AI Advisors: Explore the distinction between AI assistants and advisors and the advantages each brings to your operational workflows.
Configuring AI Advisors with XMPro: Learn the step-by-step process of integrating advanced AI capabilities into your operations using XMPro's robust framework.
Boosting Strategic Insights: Discover how AI advisors offer more than just efficiency—they provide complex, high-level decision support that transforms how you approach business challenges.
🎯 Who Should Watch:
Industry Leaders: Operational Managers, CIOs, and CEOs looking to integrate AI to streamline decision processes and enhance operational efficiencies.
Technical Experts: IT Professionals, System Integrators, and Technologists interested in the specifics of AI implementation in industrial settings.
Innovators and Thought Leaders: Individuals driving digital transformation and interested in leveraging cutting-edge AI to solve real-world problems.
💡 Key Takeaways:
Actionable Insights: Gain a clear understanding of how to apply AI within XMPro to drive business outcomes.
Future of Operations: Learn about the shift towards decision automation and how XMPro supports this evolution with cutting-edge AI solutions.
Enhanced Decision-Making: See how XMPro’s AI advisors go beyond simple task execution to provide strategic, data-driven insights that are critical in today’s fast-paced market.
🚀 Join the AI Revolution in Industrial Operations
Embrace the future of operations with XMPro’s AI advisors. Whether you're looking to enhance operational efficiency, streamline decision-making processes, or adopt a more strategic approach to business challenges, this webinar is your gateway to understanding and applying generative AI in a meaningful way.
🔗 Stay Ahead of the Curve:
Subscribe to our channel for more insightful webinars and tutorials. Visit our website to explore more about XMPro and how our solutions are revolutionizing industries.
Connect with us on social media to join the conversation about the future of AI in industrial operations.
👍 Like, Subscribe, and Comment Below:
What challenges do you face in implementing AI in your operations? How can XMPro help you overcome these challenges? Share your thoughts and questions in the comments!
#XMPro #AIIntegration #GenerativeAI #DigitalTransformation #IndustrialAI #OperationalExcellence #StrategicDecisionMaking #AIWebinar
Transcript
Good morning, everybody, and thank you for joining me today. In this webinar we're going to run through configuring an AI advisor using data streams inside XMPro. Before we jump in, it will be good to get an understanding of the terminology I'll be talking to: what's the difference between an assistant and an advisor? Those are the key things I'm looking to run through.
To do that, I'm going to break it down from an aspect perspective, touching on both the assistant and the advisor side of things. Breaking it down by aspect makes the two easier to compare: what is the focus of an assistant, what is the focus of an advisor, what are their roles, their approach, some of the benefits, problem solving, how you interact from a user perspective, and the value proposition. An assistant's focus is generally tasks and operations. Its decision-support capabilities are immediate and practical, its expertise level is broad, general knowledge, and user interaction is generally direct or command based. Its main value proposition — why you want an assistant — is efficiency and convenience.
When we transition to an advisor, you'll notice some of these aspects start changing. The focus of an advisor is to give you expertise and strategy; its decision support is high level and complex. One of the key benefits of an advisor is strategic insight, so we're moving away from just increasing productivity with an assistant to the insight side of things with an advisor. User interaction is much more consultative — not so direct or command based, but more interactive — and the value proposition, right at the bottom, is informed decision making. So an assistant is there for efficiency and convenience; with an advisor we're looking for informed decision making. Keep these in the back of your mind as we run through and configure the different pieces of an AI advisor inside XMPro.
If I break this down into where it fits with what we do at XMPro, there are essentially three main areas of decisions that get made. You generally start with decision support: that's your dashboards, and condition monitoring typically falls in here. Decision augmentation is where you're reaching out for prescriptive recommendations, augmented information from an AI perspective or from other systems, closed-loop feedback, etc. The last piece is decision automation, where the human is not so much in the loop as on the loop, keeping an eye on the automation that's happening. The current shift is that everyone is moving towards decision augmentation, and we see the future shift moving towards decision automation as well.
So we're generally evolving from an informant to a performant state. From an XMPro perspective we cover the whole spectrum — decision support, augmentation, all the way to decision automation. What we're going to focus on today, though, is just the decision augmentation piece, this piece in the middle over here, and how we start bringing AI capability — more specifically the AI advisor pieces — into this as well.
How we see AI working inside the operational landscape is that it's changing the landscape but also helping it move. What do I mean by that? Currently you have automation — deterministic workflow tasks — and you've got AI assistants. With ChatGPT over the last year and continuing into this year, everyone is very familiar with having a conversation, getting a response, and doing something with that response; that's the AI assistant, the more freestyle chat side of things. Eventually — if you remember the prior visual — as you move towards the more autonomous side on the far right, you start moving into the area of generative agents: first a goal-based, single-objective agent; then group goals, where an agent looks after specific things and a supervisor role helps direct the agents; and finally autonomous goal seeking — self-organizing agents that don't necessarily need a directed supervisory role but can self-organize amongst themselves and work out what they should be doing and how they should be interacting with the different systems. You'll see on the far left that as you increase your automation and AI capabilities, you go up and start touching the different outcome areas on the right — from task-based generative AI all the way up to self-creating at the top.
How do we translate this into what we do for industrial operations? At the bottom I've still got the same items — automation, assistant, and I'll come to the advisor, agent, and multi-agent pieces. I'll build this up: these are the different areas you can interact with inside XMPro. Generic algorithms and bring-your-own-models, if you want to go towards what we'd call traditional AI — machine learning models and algorithms. If you want to use notebooks, we've done a prior webinar on creating an end-to-end example using the XMPro notebook — how to go from one area all the way through to the other and make use of the notebooks to do that — so I encourage you to go and watch it if that's of interest. Then: how do you start bringing in the large language model pieces down here? This is where we start transitioning from just an AI assistant into the AI advisor pieces. When we start talking about an advisor versus an assistant, the next thing that comes in is RAG — retrieval-augmented generation — on top of that: how can the advisors be a lot smarter, how can they be trained as experts in their area so they can advise you correctly? A key thing here is also bringing your own LLM: it's one thing to talk to ChatGPT or Anthropic's Claude, but how can you actually control the model you want to use? And on the far right you'll see some of the agentic pieces. We're not going to go into those today, but that gives you the full picture of all the different bits and pieces in XMPro that you can utilize for your AI journey around industrial operations.
Before we jump into the advisor, let me just touch on the AI assistant. From an AI assistant perspective, there are a few places in XMPro where you can use an assistant. The first is recommendations. The AI assistant you see here is currently in beta; it lets you have a discussion and conversation with the assistant around the recommendations, which allows you to get insights into the data as it was triggered. You can ask it various questions, it can group and summarize, it can present information as a graph or as a table — there are a lot of options in there — but it is just an assistant: it assists you to find the information you're looking for. The second area is inside data streams. Data streams are what sits underneath everything you're looking at here — there are data streams behind the recommendations you saw before, so the AI assistant has a data stream sitting behind it. The data stream you see here uses an OpenAI assistant agent, which lets you interact with and actually talk to an assistant you have defined inside OpenAI's playground. There you can configure a specific set of prompts, create a specific set of tools it has access to, and then call it. The challenge is that you don't have much control over fine-tuning the prompt outside of that environment — you have to use what was published, which means if you don't have access you can't adjust the prompt, the model, or any of the parameters. If that is your use case and you don't need to, then by all means drag the component in from the toolbox and use it to talk to the OpenAI assistant in the cloud.
When we get to the advisor, there are a few elements we configure to bring this to life, so to speak. I'll run through them here and then take you into an example. The first is the result: where do you see the AI advisor pieces, where can an end user interact with them, and how do they visualize and see the data? The second is how you change, define, and edit the prompts. You want fine-grained control: which model are you running, what are your prompts, how can you change and shape them? As we go on the AI journey and interact with large language models, prompting is one of the key elements for making sure you get relevant information out, rather than incorrect information and hallucinations. The other benefit of creating an advisor is that you can create what we call experts in specific areas: an expert in rotating equipment, an expert in quality, an expert in safety. By narrowing their scope and their task focus, you help minimize a lot of the hallucinations they can have. If you keep it too generic and too broad, they will start hallucinating — just like we do — and the results won't be what you're expecting. The third element is that we're using recommendations: we created a generic recommendation, tied to the data streams, that lets me push results and make them available to the app screen you saw first. And last, but by no means least, the most important piece: how do I actually create this? Data streams underpin everything we're looking at — that's where I get my data, run my models, and present results to the end users on the other side. So let's jump into an example and configure it. The first piece of the puzzle is to look at the data stream, and then I'll work backwards from there.
Looking at this particular data stream, I'll draw your attention right down here to the AI piece and what it's actually doing. When this stream runs, data comes in from an endpoint. This endpoint can be anything — there are a lot of different listeners in the library, so all you have to do is drag one on. If you don't want to use MQTT, you can use OPC UA or a historian. You'll see we're also reading the prompt details from a data source — in this case SQL. Can it come out of a graph database? Yes. Out of another system? Yes. If you want to store your prompts in flat CSV files and read those in, you can do that as well; the key piece is having a generic mechanism to get your prompt details in. The stream here also uses recommendations: if I've already triggered a recommendation for this particular asset, do I want to re-trigger one, or even pass the information to the model and have it run? No, I don't. The key thing to remember is that you don't want to push every single reading through to a model at, say, one-second intervals, because models take time to run. If you run the model in the cloud and want per-second calls, you can achieve that, but it's going to be costly; if you run the model locally, you can achieve that too, but it's going to be costly in infrastructure. The reality of the use case is working out under what condition you actually want to pass information to an advisor and get a response back. So what you'll see here is a filter that says: if there's already a recommendation for this asset that was triggered by this AI advisor, we don't pass anything to the model — we just ignore it, because a recommendation has already been triggered. If a recommendation has not been triggered, then we pass it to the model; it evaluates what we give it, gives us its information, and we then run and update the recommendation.
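The gating described above — only call the model when a threshold is breached and no recommendation is already open for the asset — can be sketched as follows. This is an illustrative sketch, not XMPro's actual filter agent; the function, field names, and threshold values are assumptions.

```python
# Illustrative gating logic: only pass a reading to the LLM when it is out
# of range AND the asset has no recommendation already triggered by the
# AI advisor. Field names and thresholds are hypothetical.

def should_call_model(reading: dict, open_recommendations: set,
                      low: float = 20.0, high: float = 80.0) -> bool:
    """Return True only when the value breaches the band and no
    recommendation is already open for this asset."""
    out_of_range = not (low <= reading["value"] <= high)
    already_open = reading["asset_id"] in open_recommendations
    return out_of_range and not already_open

open_recs = {"pump-07"}
print(should_call_model({"asset_id": "pump-01", "value": 92.4}, open_recs))  # True
print(should_call_model({"asset_id": "pump-07", "value": 92.4}, open_recs))  # False: already open
print(should_call_model({"asset_id": "pump-02", "value": 55.0}, open_recs))  # False: in range
```

The same shape of check is what keeps the stream from re-triggering the model while an earlier recommendation is still unresolved.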
If I double-click any of these, there's nothing weirdly complicated about the actual configuration — these are the exact same data streams that those of you who have used them in the past are familiar with. All we've got access to here is a new set of agents on the machine learning side of things: you'll see here an Azure OpenAI agent, and if I scroll a bit further down, an Ollama agent and the OpenAI assistant. They're simply new agents in the library that you use in the same way you're used to. If I double-click the SQL agent I've got here, you can see we're reading the prompt; if I double-click this one, it's actually pointing to a local model. You have a few different options when it comes to configuring an AI advisor. As I mentioned earlier, I can push this to an AI assistant in OpenAI that you've trained as an expert, get the results back, add all the other recommendation pieces around that, and get an AI advisor coming out of it. The challenge with talking to the cloud models is the cost around tokens: however many tokens you send, there's a cost to compute and return the result. The other option is to run models locally — this one is actually running locally in the lab I have set up here — and I can pass in the URL and the model dynamically, as well as the system prompt and the user prompts. If I want to swap something out, it's quick and easy to do, and if I want to adjust, for instance, the model temperature, I can adjust that as well.
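To make the "pass the URL, model, temperature, and prompts in dynamically" idea concrete, here is a minimal sketch of the request body an Ollama-style local endpoint (`/api/chat`) expects. The base URL, model name, and prompt text are illustrative; this is not XMPro's agent code, just the payload shape such an agent would assemble.

```python
# Build the URL and JSON body for a local Ollama /api/chat call, with the
# model, temperature, and system/user prompts all supplied at call time.
import json

def build_chat_request(base_url: str, model: str, temperature: float,
                       system_prompt: str, user_content: str) -> tuple[str, bytes]:
    """Assemble the endpoint URL and JSON payload; nothing is sent here."""
    body = {
        "model": model,
        "stream": False,
        "options": {"temperature": temperature},
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},  # telemetry, not a human
        ],
    }
    return f"{base_url}/api/chat", json.dumps(body).encode("utf-8")

url, body = build_chat_request(
    "http://localhost:11434", "llama2", 0.2,
    "You are an expert in pump efficiency and optimization.",
    "Asset pump-01 vibration 9.2 mm/s, threshold 7.1 mm/s.")
print(url)  # http://localhost:11434/api/chat
```

Because everything is a parameter, swapping the model or tuning the temperature is just a data change, which is exactly why the stream can read these values from a prompt table rather than hard-coding them.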
When the stream is running, the telemetry details coming in here send our information through. What you'll notice is that I have a pre-filter: I'm only looking for assets that have gone above or below certain thresholds I've defined. I don't want to use the LLM to filter out noise — I want to filter the noise out before I get to the question of whether to pass this to a large language model to evaluate further. Otherwise you're just wasting resources down here, passing it noise and hoping it figures out some of the pieces. Can you do that? You can — but again, you're going to have to take into account token costs if this goes to the cloud, or compute costs from an infrastructure perspective if you're running a local one.
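The pre-filter idea can be sketched like this: weed out in-range readings before anything reaches the language model, so you only pay token or compute costs for readings that actually breached a threshold. The metric names and bands are illustrative assumptions, not values from the webinar's lab.

```python
# Drop in-range "noise" before the LLM stage; only breaches survive.
# Threshold bands per metric are hypothetical examples.
THRESHOLDS = {"temperature": (10.0, 75.0), "vibration": (0.0, 7.1)}

def prefilter(readings: list) -> list:
    """Keep only readings outside their configured band."""
    keep = []
    for r in readings:
        low, high = THRESHOLDS[r["metric"]]
        if not (low <= r["value"] <= high):
            keep.append(r)
    return keep

batch = [
    {"asset": "pump-01", "metric": "vibration", "value": 3.0},    # in range: dropped
    {"asset": "pump-01", "metric": "vibration", "value": 9.2},    # breach: kept
    {"asset": "pump-02", "metric": "temperature", "value": 81.0}, # breach: kept
]
print(len(prefilter(batch)))  # 2
```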
So that's what sits underneath the hood: data comes in, we fetch the prompts, and we use the recommendations to help weed things out before we run the actual model itself. Where do I actually see the result? If I look at this particular app — the same app we've used and been looking at in the past — on the right you'll see the recommendations. If I click the top recommendation, it opens a recommendation that was triggered by the AI advisor: the result of the data stream I'm looking at here. In the middle you'll see "AI response" — this is what the large language model gave me when it ran through this particular data stream and triggered a recommendation. So again, this is how an end user can interact with it. Going back one more step: I've got a data stream, and the data stream triggers a recommendation. What does that recommendation look like? Exactly what you're used to. We have our recommendations, and you'll see they are configured and set against a particular data stream — exactly the same data stream we've been looking at — and we've created a generic recommendation for it. You still have access to all the detail and values, so I can be much more specific in the headline and description with the data passing through the data stream. It's exactly the same behavior you're used to when configuring recommendations for other use cases; we're just using it from a generic perspective to create an AI advisor that I can use in the apps themselves.
Back in the app, this is a recommendation; if I go back one, it takes me to the application itself, with access to all my other recommendations. For those familiar with recommendations, this may look a little different: it's a recommendation with a different template applied to it. You can still use the prior recommendation layout if that's where you want to put all of this, but you don't have to — you can adjust the visuals to other types of visualizations for the same set of data. So here I still have my event data as you're used to; scrolling a bit further down, I still have my analytics, and my notes coming in here as well. Across the top you can mark it as a false positive or resolve the recommendation. Exactly the same capability, presented a little differently. This is the recommendation that was triggered, and back in the stream, the triggered recommendation gets read in down here — which is what stops me from talking to the model too many times.
Let's have a look at the prompt over here, though. You'll see it's coming from SQL, and there's an actual application you can configure on top of that as well; if you already have a prompt library and just want to read that in, we can do that too. This one is pretty simple and straightforward: which model am I looking to run, what's my model temperature, what is my system context, and what is my user context. The system context defines that you are an expert in something — for this example it's rotating equipment, so it's an expert in pump efficiency and optimization. As I mentioned earlier, you can create an expert from a safety perspective or a quality perspective; maybe there's some static equipment, or different types of rotating equipment. You can also create an expert in the process flow — taking a manufacturing example, an expert in the actual flow all the way from the start of manufacturing to the end. User content is what you're used to when creating your own conversations with your GPTs: which pieces do we actually want to plug in when running this through the stream? When we say user content here, what we actually mean in this instance is that the user content comes from whatever source of data it is — there's no actual human sitting and talking to this; your machines are talking to it in this example. Can it read from a human? Yes, it can: if you want to expand the AI assistant I mentioned earlier with the recommendations and have a human ask similar questions, you can do that as well.
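Putting the two context pieces together: the system context declares the expert, and the "user" message is rendered from whatever the data stream delivers. A small sketch of that assembly — the wording and field names are illustrative, not XMPro's stored prompts.

```python
# Pair an expert system context with user content generated from telemetry.
# There is no human in this conversation; the "user" turn is machine data.
SYSTEM_CONTEXT = (
    "You are an AI assistant with expertise in pump efficiency and "
    "optimization. Recommend corrective actions and state what should "
    "be considered before acting."
)

def user_content_from_telemetry(event: dict) -> str:
    """Render the 'user' message from a stream event (hypothetical schema)."""
    return (f"Asset {event['asset']}: {event['metric']} is "
            f"{event['value']} (limit {event['limit']}). "
            "Advise on likely causes and next steps.")

msg = user_content_from_telemetry(
    {"asset": "pump-01", "metric": "vibration", "value": 9.2, "limit": 7.1})
print(msg)
```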
So we're getting the prompt details in, and if you look at how they've been defined: you are an AI assistant with an expertise, here is the task you're looking to do, and here is what you should be considering. Depending on which model I want to talk to, I may want to change and tune my system prompts, and I might want to change and tune my user prompts. I can do that quite easily over here, save the changes, and at whatever interval we've defined — if I go back in here and have a look, every five minutes — it will get the new prompt. If I want to change that and say, you know what, I'm actually trying to split-test some prompts, I can change it to every 5 seconds, which means if I save my new prompt details away, I just need to wait 5 seconds, the new prompts load in, and I can then evaluate them. I don't need to stop and start a stream; I don't need to stop, start, and publish anything — I can use an interface like this just to update and change what I'm looking at. The benefit of running a local offline repository like Ollama is that I could have one prompt using a Llama model, and you'll see I have another prompt here using SQLCoder as the model, etc. So I can have different models running for different types of assets, for different use cases, all running within the same structure I've got defined.
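The interval-based prompt reload can be sketched as a small cache: the stream re-reads the prompt record once per interval, so an edit takes effect without stopping or republishing anything. A dict stands in for the SQL prompt table here; the class and record names are illustrative, not XMPro internals.

```python
# Hot-reload prompts on an interval: edits to the store are picked up
# the next time the interval elapses, with no stream restart.
import time

PROMPT_TABLE = {"pump-advisor": {"model": "llama2", "temperature": 0.2,
                                 "system": "You are a pump expert."}}

class PromptCache:
    def __init__(self, key: str, interval_s: float):
        self.key, self.interval_s = key, interval_s
        self._cached, self._loaded_at = None, 0.0

    def get(self) -> dict:
        """Return the cached prompt, re-reading the store once per interval."""
        now = time.monotonic()
        if self._cached is None or now - self._loaded_at >= self.interval_s:
            self._cached = dict(PROMPT_TABLE[self.key])
            self._loaded_at = now
        return self._cached

cache = PromptCache("pump-advisor", interval_s=5.0)
print(cache.get()["model"])  # llama2
PROMPT_TABLE["pump-advisor"]["model"] = "sqlcoder"
print(cache.get()["model"])  # still llama2 until the interval elapses
```

Shortening the interval to a few seconds is what makes the split-testing workflow described above practical.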
So that's the prompt sitting behind it, being fed into the data stream; this is the recommendation we've configured and set up that I can interact with; and for an end user, this is how they can consume and react to the responses coming in. One key thing here: it's always good to make sure you have a mechanism to give feedback to the model as well — was this a good response or a bad response? — so that the next time it runs through, it can take that into account.
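A feedback mechanism can be as simple as a logged record per response, which later runs (or a human reviewer) can consult. The schema below is an assumption for illustration, not an XMPro feature.

```python
# Capture whether each AI response was helpful, so feedback can be fed
# back into later runs or prompt revisions. Schema is hypothetical.
from datetime import datetime, timezone

feedback_log: list = []

def record_feedback(recommendation_id: str, helpful: bool, note: str = "") -> dict:
    """Append a timestamped feedback entry for one AI response."""
    entry = {"recommendation_id": recommendation_id, "helpful": helpful,
             "note": note, "at": datetime.now(timezone.utc).isoformat()}
    feedback_log.append(entry)
    return entry

record_feedback("rec-1042", helpful=False, note="Suggested wrong lubricant.")
print(len(feedback_log))  # 1
```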
If I'm looking at this and I actually want to have a conversation with it, can I do that? Yes, you can. You'll see on the right that I can start having a conversation, and it can take into account this response, the event data — and my horrible spelling as well. The information in the middle we got through the data stream triggering that recommendation, and that came off a local model; over here, as this is running, it's actually talking to ChatGPT — I think this one is running a GPT-4 model — and I can have conversations with it over different data. The prompts you can move around, the data you can move around; all we're doing here is putting the different pieces together around a use case that we call an AI advisor, on the data that is coming in. In some of the information you're looking at here you can also see citations coming in — so again, it's grounded in where the information comes from, so that I can trust the data.
So what did we just go through from an AI advisor perspective? There are a few different elements to consider. The first is the visual: where can I consume the data? Can we pass this AI response to other systems? Yes, we can — this is just an example inside XMPro; we can pass it to any downstream system that requires it, needs it, or wants it. The next piece is the actual prompts themselves: how do I create the prompts, change prompts, maybe do some prompt chaining, and tune the prompts for the situation I'm creating or want to help manage? One step further is the generic recommendation: here we're just using our recommendation engine to create a generic recommendation for the AI side. Can you be more specific and create recommendations for specific asset classes? Yes, you can. Can you hook this into existing data streams you may have defined? You can do that as well — this does not have to stand on its own; it fits in with everything you've used and configured up to now. And the last is the data stream: what sits behind all of this, actually consuming the data, bringing the prompts in, reading recommendations, passing it to the model, getting the result, and then passing that on to the recommendations so they can be triggered as well.
The last slide I'll leave you with: what are all the different pieces when it comes to the AI side of things for industrial operations? There are generic algorithms — you can bring your own model if you want to do just simple regression, etc. There is notebook capability, and as I mentioned earlier there's a previous webinar that will walk you through that end to end. Then we start getting into the large language models: there are agents for, for instance, Ollama to run models locally, and agents for OpenAI, and that library is expanding as more models become available. On the far right, when we start dealing with the advisor side of things: how do we do that on top of XMPro recommendations, and how do we also start doing it on top of your own documentation — performing RAG, which again is retrieval-augmented generation, on top of your own data, so we can use your own data, your own manuals, etc., as part of that advisory capability.
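The RAG idea over your own manuals can be sketched in miniature: retrieve the most relevant passage, then prepend it to the prompt so the answer is grounded in (and can cite) your own documents. Real systems use embeddings and a vector store; simple keyword overlap stands in here, and the manual snippets are invented for illustration.

```python
# Toy retrieval-augmented generation: pick the best-matching manual
# passage and splice it, with its source, into the prompt.
MANUALS = {
    "pump-maintenance.pdf": "Replace the mechanical seal when vibration exceeds 7.1 mm/s.",
    "safety-handbook.pdf": "Lock out and tag out the drive before opening the casing.",
}

def retrieve(question: str) -> tuple[str, str]:
    """Pick the passage sharing the most words with the question."""
    q = set(question.lower().split())
    best = max(MANUALS, key=lambda doc: len(q & set(MANUALS[doc].lower().split())))
    return best, MANUALS[best]

def augmented_prompt(question: str) -> str:
    """Prepend the retrieved passage, citing its source document."""
    source, passage = retrieve(question)
    return f"Context [{source}]: {passage}\n\nQuestion: {question}"

print(augmented_prompt("What should I do when vibration exceeds the limit?"))
```

The cited source in the context line is what makes the citation behavior shown in the demo possible: the model is told where its grounding came from.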
And then bring your own model: maybe you've defined your own model and you don't want to use one of the available ones out there — you can bring that in as well. Some of the future work we're working on right now is around the agent side of things: how do you create a generative agent, a directive agent, and move towards multi-agent systems where they are self-organizing. And with that, I thank you for your time today, thank you for listening to today's webinar — I hope you have a great rest of the day, and we'll see you again on one of our future webinars. Thank you.