Reliability Gang Podcast

The Future of Reliability - Why AI Can't Replace Maintenance Experience

Will Bower & Will Crane

The reliability landscape is rapidly transforming, but are organizations truly ready for this evolution? In this thought-provoking episode, we reunite after our individual "side quests" working with different customers to discover we've both encountered the same concerning pattern across manufacturing facilities.

While technology advances at breakneck speed—from pixelated camera phones to sophisticated AI systems in just a few decades—the cultural foundations of reliability and maintenance strategies are evolving at a glacial pace. This creates a dangerous disconnect: companies eagerly investing in IoT sensors, machine learning algorithms, and data historians while neglecting fundamental reliability practices that would make these investments worthwhile.

We share a sobering case study of a critical cooling tower failure that has cost over £100,000 in emergency measures, despite previous recommendations for a £2,000 monitoring solution. This exemplifies how organizations often pursue technological silver bullets while overlooking basic maintenance fundamentals: the cooling tower motor, which requires regular lubrication, had never been lubricated at all, a simple practice that could have prevented catastrophic failure.

The episode explores why AI and machine learning belong at the top of the reliability maturity assessment, not as shortcuts to excellence. While these technologies show tremendous promise, they require clean historical data, established maintenance practices, and experienced professionals who understand failure modes to interpret their findings. Without these foundations, organizations risk drowning in information while suffocating for wisdom.

Whether you're a maintenance professional navigating digital transformation or a leader making technology investment decisions, this episode offers crucial perspective on balancing innovation with reliability fundamentals. The future belongs to those who master both.


Speaker 1:

Hello, Reliability Gang, welcome back to another episode of the Reliability Gang podcast. Hope everyone's keeping really well. It's been a long time since we've actually been in here together, but we're in the office today with a day to strategize and go through a lot of things. And do you know what? It's really nice sometimes: we haven't done a podcast in a while, but we've been out in the field, helping customers, right in the thick of it. You've been on your side quest, I've been on my own quest, and I don't think we've actually been together for a long time. So we're going to use this as an opportunity to come together, talk about some of the projects we've been working on, especially with data and the future, and try to mold our experiences together and hash out a really good podcast from it.

Speaker 2:

Do you know what I mean? The last few days that we've been able to have together, which has been a bit of a rarity, have given us a chance to reflect on some of our individual, like you say, side quests. And actually, the things that you've been dealing with with your customers and I've been dealing with with mine...

Speaker 1:

...they're kind of, I wouldn't say problems, but they're very similar. Yeah, exactly, 100%. And I think as well, when we look at the way that maintenance and reliability has evolved, we're very much into trying to be effective. And how do you be effective? Well, you've got to have a strategy, you've got to be able to understand the data, understand what you're looking at, and build a strategy that caters to all of that in the most effective way. And we understand as well that big data is coming. We've got sensors, we've got AI models now, we've got technology changing increasingly rapidly. And I remember the first days when we had our camera phones, not even smart ones.

Speaker 2:

They weren't even smart then. They weren't even clever.

Speaker 1:

But I remember the first phone that came out with the camera, and I remember when all of our mates were taking photos and it was just pixelated. You couldn't even make out what the photo was, to be honest, it was so pixelated, but it was such an innovative thing to be able to take a photo on the phone. And how quickly that evolved. Every single phone model that came out after that was doubling the megapixels. They used to sell it by megapixels, didn't they? That was the selling point of the phone: you've got a 16-megapixel camera, and the next version would come out, and Samsung would be battling it out with their tech. It was Nokia at the time, I think. Obviously things have changed, but that was a tech revolution, and how fast it moved was actually crazy.

Speaker 2:

We're in a very rapid development period at the minute. I even think back to some of the older condition monitoring technology, like when thermal imaging had all the separate battery packs. We weren't too much a part of that; we kind of caught the end of it coming into CM. But even that technology, I don't feel it was moving as rapidly as it is now, even for the guys that were within it. Really, from the phone generation on, technology has just gone ding, ding, ding, ding.

Speaker 1:

It has, yeah, even within the industry that we're in. And the thing is, because that moves so fast, and it always will, because innate human nature is always to innovate and improve, improve, improve. That's what we're driven to do. The funny thing is, on that side of things we feel like we're improving, but on the other side, the culture side, the reasoning behind why we do maintenance and some of the effective practices, it's moving very slowly. So what we've got is this very weird conundrum where you've got the technology and the ability to monitor certain things moving very fast.

Speaker 2:

But business is slacking, trying to keep up with it all. Exactly, yeah. We've definitely seen that across multiple sites, all these things that we've been working on. One of the things we do as part of a reliability piece is a reliability maturity assessment, and usually it's broken down into four stages. At the bottom of the fourth stage there's one section that encompasses IoT and big data and machine learning, but most businesses are barely at stage one or two.

Speaker 1:

I love that maturity assessment, because it is really interesting to see, almost as a data model, where they're at in terms of how ready they are for a program to start. And we are seeing the same thing repeated across multiple companies: jump ahead, jump ahead. Exactly.

Speaker 1:

And we spoke about the implementation of condition monitoring in general, because it's quite easy to implement. It's quite easy to get us to come in, do some testing and find the problems and issues. But generally what we find is that people will do that as a bit of a box-tick, to say that they're doing something, and then what they actually find is that their maintenance systems are nowhere near ready or supportive enough to accommodate the information.

Speaker 1:

I mean, one of the particular sites that we've been struggling a little bit with wants to go down the route of a full PM plan as well. But what they don't understand is that even with problems identified by the PMs, you still have to do something with that, as an action, to improve the maintenance and reliability of your plant, and they're not looking at eliminating the root causes.

Speaker 1:

Yeah, there's no defect elimination there. It's a case of, oh, just find problems, I want you to find things. And in that whole culture change process it's really difficult, because we're trying to offer them a better way of doing things. But we know that if they don't take this advice, from a reliability-centered maintenance point of view, what we're doing is actually not going to add any value. The business and those within it have to want to change. Yeah, I know.

Speaker 1:

I feel like we are battling with a few customers like that at the moment, and I think consistency and resilience are key in this game. Even for us, it's something we've had to have massively for Maintain to be where it is now. We've had to be so patient and have faith that these results and this philosophy work. We know it works. We know it's not easy, but we also have to have faith that the customer at some point is going to get to a turning point and think, we can't carry on the way we are. And I think we've got a lot of customers other than just the ones that are a bit more challenging.

Speaker 2:

We equally have a lot of customers that we have been doing good condition monitoring for for three, four, five years, who have now gone, "Reliability? Oh, you can help us with that?"

Speaker 1:

We'll do that as well. But to be fair, we've only really been in a position to offer this philosophy and this way of working recently. I think we've always had an RCM-centered mindset, but I don't think we necessarily knew how to deliver it until we did ARP.

Speaker 2:

You know, there was no plan until then, and that's what ARP, the framework that Mobius developed, gave us: all of these tools and techniques that encompass reliability. No matter where a business is on its journey, from no reliability practices to maybe some, it gives a framework on how to go about achieving it and to ensure it's sustainable. The people are brought onto the journey; there's an awareness and a buy-in, because you can't just do it in isolation, and a lot of businesses fail because you've got one guy on site...

Speaker 1:

...that's like, "I love reliability," and no one else understands it, and it creates resentment in some places as well.

Speaker 2:

It does, yeah, it does.

Speaker 1:

Everyone is against that one particular individual, and we've seen that firsthand. And the intention behind it makes it really discouraging, because the intention of that reliability engineer is actually caring about the business he works in, caring about its future longevity, knowing that if we implement some of these changes, the business can be around for longer, it can produce more effectively, it can earn more money for its shareholders and therefore be more successful. But you almost get the other side completely going against it, almost self-sabotaging the longevity of the plant, which is also crazy. Again, if they don't understand what they're doing in terms of stopping it from happening, how are they ever going to? This is what the culture change is about, and this is why people are so drawn to what we're doing and to the decision-making within the businesses that we operate in. Do you know what I mean?

Speaker 2:

So, one of the things that I think we've both come across quite frequently, and we were kind of having a bit of a debate about it outside, is how we feel the industry is moving. We definitely see a lot of businesses that now want IoT technology, machine learning, AI. They want to see how their business can utilize that. But obviously we go in and go, yeah, but you've got all these red flags over here that you're not worrying about, yet you're going to monitor all of this over here. How do we deal with those sorts of things?

Speaker 1:

It's troublesome. I had a great conversation with a head of engineering at a particular food manufacturer, and I can't really name any names, but it was interesting. He was new to the business; he'd just come in. His viewpoint was complete 4.0 everywhere, 5.0 everywhere, monitoring on everything, smart manufacturing, everything. That was what he wanted, and that's what a lot of these systems are being sold on, what can be achieved. And we can achieve that; we've got the technology to do it now as well. But what was really funny was that before he got involved, the engineering manager of that particular site wanted to put some online systems, just continuous monitoring, on some real critical assets.

Speaker 1:

Continuous monitoring on some real critical compressors and things like that, and that actually got blocked by the business, which is ironic. Because when we were having this conversation, it was a bit like, okay, if we can't even get some monitoring on your most critical assets on one particular site, let alone everything else inside the plant and all this other stuff, how are we ever going to justify your vision? And as head of engineering, how much pull and how much weight do you have to be able to make this decision? Can one person change the whole culture of a whole business? I don't know.

Speaker 2:

I don't think it's possible, personally. And one of the things we talk a lot about in reliability is that whatever PM or condition monitoring technique we choose to do, at whatever frequency, it has to cost less than accepting the actual failure. A lot of businesses are now just looking at sensors that collect data, which we've always found challenging, because there's no actual benefit in just an increase in the frequency of data collection. All this storage of information costs money, and we've seen more businesses wanting to collate all of that to try and apply AI to it.
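To make that rule of thumb concrete, here's a minimal sketch of the comparison being described. All figures are hypothetical, chosen purely for illustration:

```python
# Rule of thumb from the episode: a PM or condition monitoring task is only
# justified if its annual cost is less than the expected annual cost of
# simply accepting the failure. All numbers below are hypothetical.

def annual_task_cost(cost_per_task: float, tasks_per_year: int) -> float:
    return cost_per_task * tasks_per_year

def annual_cost_of_accepting(failure_cost: float, failures_per_year: float) -> float:
    # Expected yearly cost if you just run the asset to failure.
    return failure_cost * failures_per_year

monitoring = annual_task_cost(cost_per_task=150, tasks_per_year=12)  # monthly route
accepting = annual_cost_of_accepting(failure_cost=100_000, failures_per_year=0.2)  # one failure every five years

print(f"Monitoring: £{monitoring:,.0f}/yr vs accepting failure: £{accepting:,.0f}/yr")
print("Justified" if monitoring < accepting else "Not justified")
```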

Speaker 1:

This is just cost and cost and cost. And this is the thing: where a lot of people think the cost comes from is the hardware. That's not necessarily the case anymore. We know that electronics is becoming stupidly cheap now. Some of the manufacturing of circuit boards and even sensors is extremely cheap compared to what these things used to cost. The cost is in the storage of data. Even AI and ChatGPT, with how many people are using it now, has noticeably increased energy consumption. And that storage is only going to grow, because we're moving to wireless sensors, and a lot of the wireless or continuous monitoring systems now are very open.

Speaker 2:

They can send their data wherever they want. A lot of the bigger businesses are looking into how they can leverage AI and machine learning. Jason Tranter actually talked about this in Barcelona, about AI and how it's moving forward. ChatGPT is very effective at answering questions because it has the whole internet, the World Wide Web, as its data source to pull information from, and eventually we want to be in a place where we can utilize that within a maintenance plant. But it needs lots of data to pull from. It's going to need all of the condition monitoring data, but it also needs all the maintenance history and all the process data. So a lot of these businesses are now looking at historian systems to pull all that information across, and then AI platforms to process it.

Speaker 1:

How far off that are we, though? A lot of companies we go to have great history; you could probably utilise it. But a large percentage don't even have CMMS systems. So say they started now and said, right, we'll log all of our maintenance actions, and they did it properly and had all of that logged, and then they started collecting data. How long would it take for them to really understand it? Because even some reliability metrics, like mean time to failure or mean time before failure, require a lot of history behind them for you to get accurate results. How long would it take a company from scratch to have this information before it could be usefully utilized? I mean, you're talking at least five years, aren't you?
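As a rough illustration of why those metrics need history, here's a minimal sketch with made-up failure dates; all the numbers are hypothetical:

```python
# Minimal sketch: estimating MTBF from a freshly started failure log.
# All dates are hypothetical, purely for illustration.
from datetime import date

failure_dates = [date(2024, 3, 1), date(2024, 11, 12), date(2025, 6, 20)]

# Days between consecutive failures.
gaps_days = [(b - a).days for a, b in zip(failure_dates, failure_dates[1:])]
mtbf_days = sum(gaps_days) / len(gaps_days)
print(f"MTBF estimate: {mtbf_days:.0f} days from only {len(gaps_days)} intervals")

# With only two intervals, one more failure (or one mislogged work order)
# can swing the estimate by months. On reliable assets that fail rarely,
# it takes years of logging before there are enough failure cycles for
# the metric, or any model trained on it, to be trustworthy.
```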

Speaker 2:

In my head, even for a business with lots of information and lots of data, with the overall progress of AI and machine learning, I feel like history tells us that whatever time period we think it's going to take, it's going to be shorter, especially with technology. It was only a few years ago that we were saying, I don't know if AI is going to be ten years away, and now we're already thinking five years away. So the time frame keeps getting smaller, right? Either way, it's why AI and machine learning sits at the end of the reliability maturity assessment: to capitalize on it, you've got to have all the other good data.

Speaker 1:

You've got to have a good CMMS system, and you've got to have good data. You can't just have loads of crap data in there. Do you know what I mean?

Speaker 2:

This is then the argument I keep having in my head. One of the things we spoke about in Barcelona was that for AI to work, even in its low-level form, it's got to be able to look at CMMS data and vibration data and correlate them to produce an output. But we say the data's got to be good. So what stops the AI being able to evaluate whether the data is good?

Speaker 1:

And then comprehend it, and obviously filter and screen out poor data, right?

Speaker 2:

So what stops the AI? Say there's two pumps, and the AI sees that vibration has decreased on this asset but not on the other, and a work order says they changed the motor on the other machine. What's to say the AI can't statistically review that and go, oh, actually, I think they've put the work order on the wrong machine?

Speaker 1:

Very clever. But you say all this, and are we going way too far with all this stuff? Because I think me and you also know that if we do good PMs and good strategies, and we really look at failure mode analysis, we do good FMEAs, we do good criticalities, we keep it effective, we keep it simple and we keep it driven, right? We do all these great things. We understand exactly where to focus our attention.

Speaker 1:

Then we do the right kind of failure mode analysis, then we put the right PMs in place. We do the right CBM practices for those failure modes. We then choose the right monitoring systems for the criticality, and we don't have to go too far into it; if monthly VA is good enough for certain assets, we do that, because we know that's worked for a long time, as long as the culture is right within the business. Do we really need to go to the extent of where all this is going? Do you think we're almost getting sold into this AI revolution, when the value is not actually in that? The value is in the strategies we've actually proven before.

Speaker 2:

I think it is, AI being the buzz, the thing that's going on right now. This is my opinion: if you are a business that has all of your reliability things in place, you've got good maintenance strategies, the business is all on board, you are considered world-class in reliability, then AI, machine learning and everything they offer is the froth on the top of the beer.

Speaker 1:

It just gives you a little bit more.

Speaker 2:

And if you can capitalize on that, the benefits can be huge. But it is the froth on the top.

Speaker 1:

You only get that benefit when you've really done all the strategies. Exactly. Because what I feel a lot of companies are doing is coming into this tech revolution having not done the basics, not done the strategies, not got the culture right, and they're trying to blanket an AI solution or data model on top without really knowing what they're doing.

Speaker 2:

Those businesses that are world-class reliability leaders will be able to do it effectively, and it might take them five years because they're trying to work out the best solution. One of our customers is looking at this right now: how can they do this for the future? They will massively capitalize on it. They're not blanketing sensors on everything, because they know the technology is advancing rapidly and they're waiting. What they're actually thinking of is moving all of the assets onto fixed sensors, because they know that fixed sensor hardware has been stable for many, many years. 100 millivolts per g. We always use it.

Speaker 2:

Yeah, we've installed them, so we know it's very stable. And whether it's an MBX system or whatever, which can do more than just vibration, if that hardware does evolve, the hardware evolution is usually on five, six, seven, eight-year cycles, if not longer. So they're not going to be constantly outdating themselves, but they will benefit from AI. What I see happening, though, is that a lot of businesses look at that reliability maturity assessment and all those things that give good reliability, good maintenance strategies, good onboarding, good condition monitoring, and they see the AI as, "that solves my problem."

Speaker 2:

Yeah, if I do that, I've solved all my problems. And so they're jumping in and going, I'm going to put sensors on everything, it's going to monitor all my machines, I'm going to put it into AI and it's going to tell me everything I need to do, and I'm never going to have a failure; I'm going to be well on top of it. But you and I both know that it doesn't detect all the failures if the data is poor. They're just seeing it as, oh, that's the solution. Some of them see it as a quick fix, I'm just going to do it, and some of them don't realize everything that sits underneath it. I think you have to have the clarity to realize it's not that simple.

Speaker 1:

But the problem is, I feel like there are fewer people who have that clarity than don't. You've got a lot of data companies, a lot of people in IT who are probably very clever but haven't necessarily been in our industry and don't understand how machines work. Look how many years of experience we've had, you at British Sugar, me in the repair centre at ERIKS, for how many years, seeing failures happen. All that experience, you can't just AI that. That experience has been time-served.

Speaker 1:

Yeah, do you know what I mean? And it's interesting, because when we were in Texas we actually had a similar conversation. We were speaking about big data, and how what they want to do is get all those sensors and feed everything into the model. I was kind of debating it, and I said, well, how do you really understand it based off failure modes, though? And the guy kind of looked at me. You need people who have had experience of understanding how things can actually fail for you to really make this effective.

Speaker 1:

How do you build that into your AI models? And it was a question that kind of stunned them a little bit. The reaction was interesting, because these guys we were talking to were very clever, probably far more clever and in this industry longer than me, but they hadn't had that exposure to that side of failure and machines, all the stuff that we've seen and all the mistakes that we've made. How do you learn from seeing a result that didn't go the way you wanted? That whole conversation was based around the idea of big data, but when I asked that question, it did pose a question of, oh crap, there is that to consider.

Speaker 2:

Yeah. And as a real, not plea, but just a point for businesses in these sorts of spaces: look at the money that these businesses are willing to just plow into AI and machine learning, and some of it is scary. If you took that and applied it to reliability... because reliability is about being effective.

Speaker 1:

It's not about being efficient. It's about saying, how can I utilize all these tools to be as effective as possible with the least amount of money and resource? That's what reliability is really about, right? And I just feel we're at a point where everyone's heading in this direction, and it does cost a lot of money to have that model without having the real basis and foundation underneath it. This is why these podcasts are so important. I'm not saying it's not the answer in the future; it's going to be amazing when it works.

Speaker 2:

It's going to be huge when it does.

Speaker 1:

And I think we've really seen that, and we've also seen companies that are almost ready for it. They've done all the great things, they've got themselves to a point where it's, where do we get better? Because everyone wants to improve. So when you've done those things, it's like, where's the froth we were talking about? And I do think it costs more money when you're up at the top there.

Speaker 1:

If you want that froth, those little small improvements, it does cost more money than it would cost to get from a lower place to a higher place. But if that money has a good return on investment, of course we're going to do it. And we all want to improve. We don't just want to get to 98%, we want to get to 99. We don't want to stop at 99, we want to get to 100. There's always that little bit of wiggle room, and as human beings, and in continuous improvement, we've always got to think of that. But the problem is we're trying to get to 99 and we're not even at 70, and 70 to 80 is the bit we need to focus on.

Speaker 2:

And if we just do a good strategy, we can spend far less money getting there. With the right effective strategy we can get to where we need to be, and if you don't get that first initial bit of percentage right, everything you're spending, hundreds of thousands of pounds on AI, is not going to have the impact. Yeah, exactly. What we're seeing at the moment in a lot of the bigger companies is that they are being more hesitant about it, more measured in their approach, and they're doing a lot of testing.

Speaker 2:

But all of that testing, using this system and trialing that AI platform or this historian, is license costs. Take one of the customers I was at last week. They've got a small team, the digital lighthouse, looking into all of this, and they're testing out licenses that are a hundred thousand pounds a year in license costs. It all looks really good, but we need to get this bit done first, the basics. Because that same particular customer...

Speaker 1:

You look at the application you were looking at last week. Perfect example, isn't it? You've got a whole team of people, which costs money, and on top of that they're testing these licenses out, which costs all this money. Then, in the actual reality of the plant running, you've got the cooling tower. I'll let you explain the history of this, because it's a perfect example.

Speaker 2:

For us, we're doing the ARP, we've done the training, and they've got a reliability team.

Speaker 2:

We're making some incredibly good movement at the moment, and the whole business is on board, all the way up to the site director. They do have a digital team, and I do think for a business this size it's probably worth looking into. But the overall global business has not got a focus on reliability within the group; we're doing it independently, whereas I think a business this size should have a VP of reliability putting focus on it. Because they don't, what that effectively means for this customer is that they don't actually have any criticality assessment yet. Their most critical asset on site, which would shut the entire site down if they lost it, is their cooling tower. A complete shutdown. And they have no condition monitoring on it whatsoever, even though we said three years ago that we needed to fit sensors on it, and they've been like, yeah, we must have missed that.

Speaker 2:

The cooling tower motor has generated a noise level which resulted in a noise complaint, which meant they've had to put up a plywood scaffold wall at a cost of 22 grand. There's too much risk in changing the motor and damaging the coupling, so they don't want to do that, so they've had to hire in a cooling tower: minimum eight weeks, 11 grand a week. So we're over £100,000 right now on this cooling tower incident. Jesus Christ, right. And there's no real maintenance strategy on it at all. Not just no condition monitoring; there's nothing else in place either. Then we go up and look at it yesterday and find out that it's a lubricatable motor. It's not sealed for life, and they've never lubricated it. Ever.
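For anyone tallying along, here's the arithmetic behind that figure, using exactly the numbers quoted above (the eight weeks is the stated minimum hire, so this is a floor, not a final total):

```python
# Emergency costs as quoted in the episode.
plywood_wall = 22_000        # £, scaffold wall for the noise complaint
hire_per_week = 11_000       # £/week, temporary cooling tower
hire_weeks = 8               # minimum hire period

emergency_total = plywood_wall + hire_per_week * hire_weeks
print(f"Emergency measures so far: £{emergency_total:,}")  # £110,000

# Versus the preventive option mentioned later in the episode:
fixed_sensors = 2_000        # £, four fixed sensors, two per motor
print(f"Roughly {emergency_total // fixed_sensors}x the cost of the sensors")  # 55x
```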

Speaker 1:

It's a 12-pole, and I suppose, with where it sits in the cooling tower, you can't lubricate it.

Speaker 2:

Not without grease lines, yeah. It's a specialist motor, it's 12-pole.

Speaker 1:

It's a 12-pole motor as well.

Speaker 2:

And.

Speaker 1:

I tell you they are not common, yeah.

Speaker 2:

Which is part of the reason it takes so long to get a spare. And then, several years ago, they retrofitted an inverter but didn't follow the change management. Because they didn't have the expertise in the business, they didn't realize the motor wasn't inverter-duty, or what impact that has. But equally, because it's on an inverter...

Speaker 2:

I said to them last week, you've waited so long for this 12-pole motor. You'd need to have looked into the efficiencies of it, but you could have just got a 6-pole and slowed it down, which would have reduced the lead time on the motor by like a third. Obviously there are efficiency considerations, and I'm not a motor expert, but it could have been done. And that cooling tower is one of three super-critical assets within the plant that we've looked at that have no maintenance strategy whatsoever. I said to the two reliability engineers, and the technical manager was in the room at the time, just remember, the way I'm going to say this is a bit nuts, but just remember: your most critical asset on site, and your maintenance strategy is run-to-fail.
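For context on that 6-pole suggestion, the standard synchronous speed relationship (N = 120f/p) shows why the inverter makes the swap plausible. A minimal sketch, assuming a UK 50 Hz supply and ignoring slip and the efficiency caveats mentioned above:

```python
def synchronous_speed_rpm(freq_hz: float, poles: int) -> float:
    """Synchronous speed of an AC motor: N = 120 * f / p."""
    return 120 * freq_hz / poles

print(synchronous_speed_rpm(50, 12))  # 500 rpm: the existing 12-pole at 50 Hz
print(synchronous_speed_rpm(50, 6))   # 1000 rpm: a commoner 6-pole runs twice as fast
print(synchronous_speed_rpm(25, 6))   # 500 rpm: the inverter at 25 Hz recovers the fan speed
```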

Speaker 1:

But this is a perfect example. We've got teams spending hundreds of thousands on AI models and whatever, and you look at the basics, the real basics, stuff that's not difficult to identify and do, and it's not being done. It is being done now, but it's cost them over a hundred thousand. Exactly.

Speaker 1:

But this is the thing: if we'd done that strategy three, four, five years ago, and done it properly... The problem as well is that people don't really see these things as a problem, and this is life in general. A little niggle on your car? I'll just leave it. You hear it, that's the first sign, even for us as people, and it gets worse and worse. It's like a crack in your windscreen. When it happens, you find a tiny little chip, and you think...

Speaker 1:

...it'll be all right, and it probably costs you about 50 quid to go to Autoglass to get it sorted. But over time you forget about it, it gets slightly bigger and bigger, and then you're like, oh, I need to get the whole thing changed. That's the perfect analogy for what we find in reliability all the time. But the thing is, we're not just dealing with a windscreen here. We're dealing with critical assets that have a huge impact if they fail. We have to get ahead of the game. We can't carry on in this industry the way we are, in this reactive way.

Speaker 2:

And that cooling tower is a prime example: to install fixed sensors on it is about two grand. Four sensors, two on each motor, minimal cost. To actually develop the maintenance strategy and do that piece of work is just a bit of time: sitting down, going through the strategy and working out, have we mitigated the risks and put controls in place for these failure modes, to ensure we have a level of control on this asset for the failures that we have?

Speaker 1:

Yeah, and that's...

Speaker 2:

That's kind of doing some FMECA stuff really, yeah, using the techniques. But what I've said to the guys goes bigger than that, and this is where reliability broadens even further; people don't always understand the full concept, which is why we do a lot of training around it. Yes, okay, we've got that particular fix. But now we're looking at, okay, could we move the motor in the future over to sealed-for-life? That would eliminate the potential risk of a contamination issue with the lubricant, or the lubricant running out.

Speaker 1:

We can look at doing that, and standardize the motor, so that when we do it, we have a spare.

Speaker 2:

When we do the job, we're going to be creating the SOPs all at the same time, to make sure that it's all documented for that asset. We're looking at all of the other elements too.

Speaker 2:

Yeah, do you know what I mean? One of the things that has resulted off the back of this, and we're now looking at it on another asset, is that they've had to modify the pipework to allow the temporary cooling tower to be piped in, and that modification is now going to be permanent. We can't prevent every single failure from happening; we can try our hardest, and it's not always cost-justifiable, but in the future, if there was an issue, we can easily pipe a temporary cooling tower back in. And that's something businesses should be looking at in their design stages, when they're evaluating: we've got a major critical asset on site, this cooling tower.

Speaker 2:

It's extremely unlikely that it's going to fail, and we're going to put all the control measures in and monitor the asset, but even then we may miss something. So in those early design stages, when we've got the maximum influence and it's the lowest cost, it would cost hardly anything to install some pipe flanges and a valve that is always shut but would allow a temporary cooling tower to be connected.

Speaker 1:

Yeah, and I think that's the thing with most design stages: at that stage, it doesn't cost a lot to make changes. But the problem sits with businesses as much as manufacturers. If you think about it, a business has a requirement for a cooling tower; it needs a function, right? We need to cool down whatever we need to cool, to create what we need, so we need to find a supplier that can do that. It's down to the business to identify what's required, and also what would make our life easier should any issues happen. And the problem is, mate...

Speaker 1:

...a lot of the time, that's not thought about. We've got a new customer who's just had a whole new dust plant put in, and they've not considered any of this since they brought it in. The problem is, in the specification for the people coming in to supply the equipment, none of this was stated. They just said, we want this. They've not stipulated, we want better access; we want a design where we can actually balance the fan.

Speaker 1:

Exactly. And how many times have we seen that? We've had to cut holes and retrofit certain things just to get to the fans. Some manufacturers are forward-thinking and have that as a standard in their design process, but it's also down to you as the company to ensure the maintainability of these assets is as good as it can be from the start. The problem is, when you have to cut holes in fan housings while you're down because your fan is hideously out of balance, it costs so much time and money in downtime alone, and on top of that you're now making a modification to something that probably shouldn't have needed it in the first place.

Speaker 2:

And then we've got digitalization.

Speaker 1:

Let's put sensors on everything! I know. And I think sometimes the solutions are very simple. I don't see it as complicated; sometimes I look at this and think, this is common sense, right? It generally is; these things are not difficult. But when you look at it on a day-to-day basis, and we've got a lot of customers now and found this as we went, it's like, why did they not do that?

Speaker 2:

Why?

Speaker 1:

did they do this? Why did they not think about that? And I don't know what it is. I don't know whether it's because they're in it and they get busy, and maybe the same analogy applies to our business, right? When we get busy, we miss the obvious things. But that's why, if you can get an outside perspective, if you can be open-minded about where you're really going wrong, you can use data to justify it.

Speaker 2:

We say in the ARP course, it's unknown unknowns: you don't know what you don't know. When we did ARP with the guys I was talking with about this cooling tower, it's not that they're incompetent engineers. No.

Speaker 2:

And they are smart people, but they don't know that there's a better way of doing it. They don't know what this is; they don't know the process. It's the reason we do the podcast, the reason we share on LinkedIn. I had a chat with someone the other day: this isn't a vanity project for you and me, just because we like seeing our faces on there. I hate it. Will watches all the videos; I don't like watching them.

Speaker 1:

You know, I quite enjoy watching the podcast back, but I don't know how Will does it. But the drive for it, the real drive for it, is awareness, because from what we see on a day-to-day basis, we understand that there's still a huge amount of culture to move, and I think if we don't move faster, we're going to have problems in the future.

Speaker 2:

And I say that genuinely, you know.

Speaker 1:

This is not me being pessimistic or unrealistic; this is just our experience, based upon how many sites we've seen.

Speaker 2:

I've said before, like you say, we're trying to shift this massive amount of culture, and it's widespread in the UK. It's not just individual sites, and it's not just the UK; it's probably worldwide. And we have the biggest impact and the most voice on here and on LinkedIn to be able to do that. I think all that's going to happen is that if businesses don't start to adopt reliability and don't start to look at changing their culture, one of their competitors will, and they'll start to fall behind, or they're just going to constantly waste money. Even some of the values we talk about, like the amount of money that's wasted from not laser-aligning things, it's a stupid figure. It's millions upon millions.

Speaker 1:

And it's the small things that make the big differences, right? And you know what, what a perfect way to close the podcast. As food for thought, within this huge information pool we've got: let's not drown in information whilst we're suffocating for wisdom. I think that's a good little saying. Okay, guys, thank you for tuning in. What a great discussion, and we'll definitely be back soon with some more content. Thank you for tuning in.
