Reliability Gang Podcast

FIX THE SYSTEMS NOT THE SYMPTOMS

Will Bower & Will Crane

What if the problem isn’t your data at all but the way you’re trying to use it?

Too many plants are stacked with sensors, dashboards, and alerts, yet they are still fighting the same fires every week. I see it constantly. Technology gets layered on top of a weak maintenance strategy and everyone is surprised when nothing really changes.

In this conversation I speak openly about an uncomfortable truth. Sensors do not fix broken thinking. Structure does. Planning does. Stopping defects before they ever enter the plant does.

We talk honestly about the skills gap and how it quietly drives reactive behaviour. No software will ever replace judgement that is built through training, experience, and time on the tools. From route-based vibration done properly, with strong visual observations, to acceptance testing that catches motor issues most sites never look for, like impedance imbalance, these simple disciplines reduce risk and cost when they are done consistently.

We also have a real conversation about wireless versus wired monitoring. No hype. Just reality. Load variation, mounting quality, battery life, signal stability over time. The message is simple. The method has to fit the failure mode and the business need, not the trend.

To help decisions stick I walk through modelling that turns engineering sense into numbers leadership can actually trust. When you map functional failures, consequences, and detectability you can compare strategies clearly: run to failure, redesign, handheld condition monitoring, or online systems. You can see the cost, the risk, and the expected failures over time.
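As a rough illustration of the kind of modelling described here, the comparison can be sketched in a few lines of Python. Every figure below (costs, failure rates, detection probabilities) is a hypothetical placeholder, not a number from the episode:

```python
# Hypothetical strategy comparison: expected annual cost of each
# maintenance strategy for one asset. All numbers are illustrative.

STRATEGIES = {
    # name: (annual_cost, prob_detect_before_failure)
    "run_to_failure": (0, 0.0),
    "handheld_route_CM": (4_000, 0.80),
    "online_monitoring": (12_000, 0.95),
    "redesign": (30_000 / 10, 0.0),  # capital spread over 10 years
}

FAILURES_PER_YEAR = 0.5          # expected functional failures
UNPLANNED_COST = 25_000          # cost of a failure caught too late
PLANNED_COST = 6_000             # cost when caught early and planned
REDESIGN_FAILURE_REDUCTION = 0.9 # redesign removes 90% of failures

def expected_annual_cost(name):
    strategy_cost, p_detect = STRATEGIES[name]
    failures = FAILURES_PER_YEAR
    if name == "redesign":
        failures *= (1 - REDESIGN_FAILURE_REDUCTION)
    # Detected failures become planned work; the rest stay unplanned.
    failure_cost = failures * (p_detect * PLANNED_COST
                               + (1 - p_detect) * UNPLANNED_COST)
    return strategy_cost + failure_cost

for name in STRATEGIES:
    print(f"{name:20s} £{expected_annual_cost(name):>9,.0f}/yr")
```

With these made-up inputs the redesign wins and continuous monitoring loses on cost, which is exactly the point of the exercise: the answer depends entirely on the numbers you map in, not on which technology is newest.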

What comes through again and again is that reliability improves fastest when we focus on fundamentals: better design, better planning and scheduling, precision alignment, and strong repair standards. We share real stories, including a blower that went red within days despite being monitored, to show why reliability has to be built in from the start.

If you are tired of dashboards that look good but change nothing and you want to focus on the actions that actually improve outcomes this conversation is for you.

Subscribe for more honest conversations about reliability.
Share this with someone who loves tools more than systems.
And leave a review with one change you are committing to this quarter.

Support the show

SPEAKER_01:

Hello, welcome back to another episode of the Reliability Gang Podcast. Happy 2026. How are we keeping? My right-hand man, Will, of course, is with me again. How are we keeping, buddy? Doing very well, thank you very much. Are you rejuvenated, Will, after that much-needed break that you've just taken?

SPEAKER_00:

Definitely. It was very good. It was nice to just chill and relax and spend time with the family. Did everything that we needed to do, so that was nice. And then obviously we had our little day together before we all came back in January, which I think was really where we took stock of last year.

SPEAKER_01:

Oh wow. I mean, yeah, every single year what me and Will do basically is get together and set goals for the year. We set the expectation, we set the standard. I'm a very spiritual person, and goal-setting for me has always been really important: how do we, A, set goals, but B, how do we actually write them down? How do we actually get to where we need to go? Because goals are really difficult to make happen if you don't really know the vision around them and what you're going to do. So we have a whole day where we sit down and we're like, well, what did we do this year that worked really well? What didn't go very well, which is always a few things, let's be honest. And also, where can we improve? But also, what do we want our business to look like? Because we know in this industry, and everybody can agree, that things are changing so fast, when you're talking about technology introductions and the skills shortage, and how do we actually get around a lot of the problems? I think we've really tried to mould our business to the landscape of what is actually around us, and how do we overcome some of these challenges? Isn't that right?

SPEAKER_00:

I mean the business has evolved and changed, from when it was just doing vibration analysis and condition monitoring, to reliability, to the repair centre. We've had to adapt to what I think the needs are right now. We know where the industry is going and we know where technology is going, with AI and wireless sensors and all this exciting stuff, but there's a need right now with a lot of customers, and it's not really that. It's trying to mould the business around delivering what customers need in order to achieve what they need to achieve, yeah.

SPEAKER_01:

And I think that's what inspired this podcast. We had a good chat today, didn't we? And it ties into one of our goals, actually, with me and Will, because our business can be quite chaotic and it can be quite reactive as well. A lot of work comes in, a lot of fan balancing, a lot of "oh, we need this changed, we need to do this". And generally, when we find quite horrific vibration problems, you need to solve them problems fairly quickly. But one thing me and Will said is what we need is structure. We need structure to the podcast, structure to what we're talking about, structure in terms of what we're doing. And it got me thinking, because every really good, consistent CM or maintenance strategy, what does it need? It needs structure. It needs an understanding of what we need to do, what we need to do well and consistently, and how we make sure we're continuously putting them things in when it comes down to maintenance. Because when it comes down to maintenance, it's the consistent small things you do every single day that end up becoming the real reward systems in the future, in terms of that proactive maintenance. And we really thought about how we run our business and how we make sure and ensure that we apply consistent strategies that are gonna pay off in the end, do you know? Because that's where a lot of companies right now are struggling. They haven't got the consistent maintenance actions in place, they haven't necessarily done the strategies right to see, well, where do I focus my attention? And we're all kind of victim to this reactive mentality to a degree, aren't we?

SPEAKER_00:

Yeah, it's very easy, I think. We definitely find it, especially as the business is going along through the days, and even today is an example of that. We're sitting here going, yep, we're gonna do the podcast in the morning, and then other things come up and it's one o'clock. But ultimately, as the business continues to evolve, we had a very successful last year. We really focused on lots of elements within the business and on improving those, and I think this year is a continuation of that, but building upon it even further. And one of the things we know this year is really trying to solidify the identity and the brand that is Maintain Reliability. What do we provide? That's why we've launched the new website, which you guys can go check out.

SPEAKER_01:

Will has worked exceptionally hard on it, guys. It's taken its time, but have a look at it, because I think the way you've laid it out, the aesthetics of it, it's a very beautiful-looking website, and it's very functional. We've really thought about, you know, what do we do? How do we put that into a web-based layout to describe the products we sell, the services we provide, the value that we contribute within the reliability game? It's a beautifully laid-out brand piece from us both. And obviously Will does the implementation, and my vision kind of comes in in terms of we'll put a photo here or a video here, we're gonna talk about these things. But it just really represents us as a brand, I think.

SPEAKER_00:

I think so. It goes back to one of the things that we first used to talk about many years ago, this idea, I think it was from the StoryBrand marketing book that we read, that our customers are Luke Skywalker and we're Yoda. The idea is we're putting our customers on the pedestal. And I think this year and last year it is really that focus on what Maintain Reliability is about. For us, it's about empowering our customers, they are Luke, in trying to improve reliability. Whether that is implementing their own predictive maintenance programme in-house, whether that's utilising our team and their knowledge within predictive maintenance, whether that's the precision alignment stuff that Harry and his team are doing, or maybe a repair that Darren's now working on. The idea this year, and what the new website is trying to emphasise, is that when someone says, well, what is Maintain Reliability? We are a business that goes in and tries to improve your reliability.

SPEAKER_01:

We go in and do the hard thing.

SPEAKER_00:

We're not just a data collection company that puts a report out, we're not just an engineering company that repairs motors or sells you loads of sensors, you know. We don't want to sell you something that isn't going to improve the plant reliability or add value.

SPEAKER_01:

And that's where I think we're trying to aim it, because let's be honest, there is a big skills shortage out there now. We know that. We've been in the game long enough to know that there's not a massive amount of people doing reliability, let alone condition monitoring to the level of understanding failure modes, how we're gonna do it and how we're going to achieve it. So that does mean there's got to be a level of upskilling within the industry, 100%. We recognised that from an early standpoint, you know, becoming trainers through Mobius and things like that.

SPEAKER_00:

That's why, ultimately, when people say, why do you do the podcast? Why do you post on LinkedIn? You know, it's not a vanity project. We do it because we want to raise awareness with more people. It can't just be me and you shouting about reliability, or all the other people that are. And again, this is the reason why we built the TikTok platform and, you know, the in-the-field videos with the Metroids.

SPEAKER_01:

In the later part of the year, exactly. And do you know how many ideas I've got for that? Actually first-person point of view: what does it look like in the field actually capturing this data? What do defects actually look like? And also why the visual, observational side of condition monitoring is so important. Because I do generally think people forget the value of what you can see without the data collector, when you're doing these observations, doing the vibration checks. How many problems do our engineers actually find? A lot more, probably, than they find on the collector. And let's be honest, that is gonna be the reality of vibration analysis unless your plant is all falling down, which I'm not saying some plants aren't. The majority of the time when you're collecting data, you know, 80% of the stuff that you collect is gonna probably be okay. You're only gonna be monitoring the 20%, but when you're doing that 80% data collection, there's lots of other value that you also get. What a lot of people fail to recognise is they think it's a waste of time to collect data when you know the information is coming back green. It's not a waste of time. There's lots of things you can do in terms of value-based observations around that machine. You've noticed that as well, haven't you?

SPEAKER_00:

Yeah, and I think this also goes back to the skills shortage. It's not just about sharing that knowledge with people already within engineering. There's a lot of people that we've had on the training courses that were engineers and are now transitioning to reliability engineers, but it's also those that are coming out of school. There wasn't an "oh Will, do you want to be a reliability engineer?" when I left school. It just didn't exist.

SPEAKER_01:

Well, we didn't really know what it was, did we? And like you said, you had a good example of when you were at British Sugar, where you were doing a lot of the things, you know, the proactive maintenance and the things you needed to do to become a reliable plant, without even really recognising it was reliability.

SPEAKER_00:

Yeah, I don't feel we called it reliability.

SPEAKER_01:

Well, it's not really called that, is it? It's almost like a culture, a way of working. It is a culture thing as well in terms of what you do, and it is also moulded and shaped by where you've actually started. I mean, you were very fortunate that you had a place that did oil sampling, that did vibration analysis, that did a lot of these things that were just normal to you. Yeah, and a lot of the plants we're going to right now don't have any of this. They don't do regular oil sampling, they don't have vibration analysis, they're still very highly reactive.

SPEAKER_00:

And when you look at that one part that comes under the reliability umbrella, which is condition monitoring as a maintenance strategy, it's just one piece of the overall reliability elements. With what's going on at the moment, the pushing of wireless sensors and these technologies, and AI as well, we have to recognise that they all have a place, but this is such a small part. On its own it isn't gonna deliver you any better performance on the plant, it isn't gonna reduce your spend. It's a cost that potentially avoids another cost, and there's a cost to doing it as well, so we've got to measure that cost, and, like you've said, we've got to measure the effectiveness of what we're doing in that time, and we've got to be able to reduce that cost element too, because a lot of the time people are overpaying for condition monitoring and it doesn't give them any more value. And I think that ties exactly into one of our big pushes this year, after the successes towards the latter half of last year. The reason you see us commenting on posts on LinkedIn or sharing this knowledge is because most engineers are taught how to fix things in their apprenticeships. They're not taught how to prevent. We're not teaching reliability, it's not part of the curriculum. And so a lot of the time, the idea of "oh, we'll put sensors on everything" or "you just need to do condition monitoring" isn't enough. But people think they're doing something, because they're not aware of the bigger picture. So off the back of last year's successes, a big focus for us this year is really the delivery of more training, particularly around asset reliability.
We've got a training course in nearly three weeks with 10 people on it, but potentially a couple more just waiting to confirm.

SPEAKER_01:

That's amazing, you know, but it also goes to show the evolution of reliability. Because I remember when we first did the course and we first said, right, let's promote it. We didn't really have any numbers to go on. And obviously since that point we've been shouting about this, we've been screaming about how people need to understand the whole reliability picture and how it all works together, not just the condition monitoring. And now, within one year, we've got 10 people, all customers within our networks, really engaged with reliability, really wanting to take it to the next step. And we've done small pilot projects with a few of them, and we've already managed to save so much money in these small efforts. So you have to think of reliability as that: how much effort can we put in for the maximum output? That's how it needs to be measured.

SPEAKER_00:

You will by far save more money and be more effective by implementing reliability as a whole and working on the different elements that come into reliability than just doing predictive maintenance. Of course.

SPEAKER_01:

Because remember, predictive maintenance is only useful when you've got sound maintenance around it that is able to take that information and data and actually correct the problem. And this is another thing we've got within the UK: the tick box of vibration analysis being done. But how many times do you see these surveys being carried out and the recommendations not acted on? You go round in circles.

SPEAKER_00:

It boils down to a big assumption, again a lack of understanding, unfortunately, and that's why we do what we do: "if I do condition monitoring, I'm covered, my maintenance is covered." But there's an understanding that needs to be had around, well, why did we decide that's the right maintenance strategy? What is the frequency we should be doing it at? We need to understand there's an element of chance that we may not detect the problem. Even if you have got the best vibration analysis programme ever and the best vibration analyst, there is an element of probability that you may miss the defect. You might not test it at the right time, it might randomly fail. I mean, we had it with the blower that we did at one of the biogas sites, where green to red was in like two, three days.

SPEAKER_01:

Yeah, and we actually captured that with the MVX system. And you're right, but still, this is another argument. Even though we caught it and we told the customer there was a huge problem, a high step change, they didn't have enough time to intervene to stop the failure anyway. So great that we knew, but where was the actual value in it? Because it still failed. The customer still took that cost on. Okay, we saw the step change, we can see how quickly these things can deteriorate, which is a great understanding, but it still doesn't save you.

SPEAKER_00:

But why were we even monitoring it in the first place? It had a design issue, it had a flaw. It was the design. And CM is gonna identify the design issue, but it's not gonna cure it. The design problem was there, so this blower was regularly failing, and they had concern for it, and they were looking at monitoring it as a method of being more ahead of the curve, shall we say. But they were monitoring it because they were stuck with the asset's inherent reliability, because the original design did not take into account the understanding of how it was going to work. So, doing what we call design for reliability, RAMS modelling, all these sorts of things. It wasn't designed properly, the reliability wasn't taken into consideration, and so now they're left with this machine that fails regularly, and even if we put the condition monitoring on, they couldn't act in time anyway.

SPEAKER_01:

But that's why reliability is what's needed. We can't just be putting plasters on things. Knowing the information is not solving the root cause of the problems that we're inheriting. We've got to stop the defects entering the plant in the first place. That's where we've got to start, because a lot of people just have their blinkers on: motors coming in, not being tested, not tested for whether they're aligned or not, not tested for any design flaws. We get called out all of the time because there's a compressor that's literally about to blow up, or something's shaking itself completely to pieces. Even when we come in, we might be able to understand what the problem is, but you're too far past the design stage. We're coming in at a late point in the life of a poor design, and we're trying to put a plaster on a broken leg. The design issue has gone too far down the line, where these defects are now becoming huge problems, and you can't do condition monitoring that way. That's not the point of condition monitoring. The point of condition monitoring is to be able to consistently test your assets that are good, to see them gradually go into a state of alarm, and to be able to capture them things early. Not to put a plaster on a broken leg, or on poor designs that have just been chucked into plants. And a lot of the time we are just literally going in to tell the customer what they already know. It's "oh great, you've just given us confirmation that we pretty much need to redesign this". You know what I mean? We're just the experts telling them that they need to do it.

SPEAKER_00:

The challenge, for many of our customers that are on the reliability journey now, is trying to spread the awareness higher up into the organisation, where they can start to understand that. I thought I could hear a phone. I can hear a phone, just not our phones. But on that journey, it's got to be led upwards, because when we're trying to make the right decisions around design, make the right decisions in planning and scheduling, not go and spend thousands of pounds on sensors when they're not needed, we need to make sure that the senior leadership within the business knows and understands that. That MVX system that would have been on that blower, they didn't go for it in the end, but that's a lot of money. Okay, it's gonna predict the issue, but it's not gonna save them the money. They need to take that money, and probably a little bit more, and redesign the asset to suit what needs to be done, and then they can go from there, if it warrants doing. So when we do proper reliability studies and we understand what maintenance needs to be performed, we would probably have reviewed that and gone, well, actually, monitoring it isn't going to reduce our severity or consequence. The only option is a redesign or a mitigation of the strategy. It goes back to the whole thing that, at the minute, if you go to the majority of the plants that we go to, not all of them, some of them are on the journey, most people's maintenance is not developed based on strategy.

SPEAKER_01:

It's not, because when you really look at it, if we can get to the point where we stop defects coming in, that's got to be the first port of call. We've got to stop the defects entering the plants in the first place. But that takes a complete new mindset. You have to look at things so differently, because it's one of the same old sayings: why are we spending money on this now? We don't need to. Why are we spending money now to ensure that we're doing the right testing? Why are we spending money on brand new pieces of equipment to be tested, to make sure they're all good and okay? That type of mindset isn't inherently in maintenance engineers. We generally feel like there's no value there. But nine times out of ten, if we do that testing, we stop the defect coming in. We do the acceptance test, we don't accept what we're given, we accept reliability, and then we don't ever really have to do the condition monitoring anyway, because it never becomes a problem. Then we're looking at doing basic condition monitoring, basic checks, even monthly surveys, on things that are designed correctly, are lubricated correctly, are working properly. And then what we're looking for is old age. We're looking for assets that actually live to their life expectancy. Do you know what I mean? We're using condition monitoring, but we don't need to be continuously monitoring an asset because it's got loads of poor design issues; it all depends on the failure mode. It's backwards. We've got to look at the other way of doing things now. And it's really difficult, for us being a condition monitoring company, to think this way now, because the way we were driven, Will, was find a problem, find more problems, find problems, find problems.
That's what our KPIs were based upon. How many problems can we find? Because that's almost how you had to demonstrate your value: we detected this, we detected that. And we had to start thinking differently. We had to push the envelope to be a different company. Like, why would you want to tell customers about the forever fix if the value they think they've got you in for is to find problems? We're the ones breaking the mould of what we were born to be as CM engineers in this game, evolving into reliability engineers to say there's a better way of doing this.

SPEAKER_00:

And I think as well, there's an understanding that monitoring machines on their condition, predictive maintenance, CBM, whatever you want to call it, is a sound strategy in most cases, because a lot of machines randomly fail. As for how you acquire that data in today's world: yes, there's a skills shortage, and yes, we're trying to train people, but manual data collection is still very effective. Hugely effective. With the way technology is evolving, having more data is, without a doubt, handy. If we have more data, it's very useful. We could probably make wiser decisions, potentially, depending on what we're looking at. I don't disagree with that, 100%. Having more data is very valuable. Now, you have a couple of ways of gathering more data. Send more people more regularly? Okay, that's quite expensive. Or wireless sensors, where you can gather the data more frequently, or fixed sensors. Traditionally, wireless sensors have their limitations: they don't know if the machine's running, they just sample on a time interval, etc. However, the implementation cost has generally been a bit more cost effective. Whereas in a lot of the discussions we've been having with some of the partners we've been working with towards the latter half of the year, it's actually getting very close now, where a wired sensor, piezoelectric, 100 millivolts per g, the standard for many, many years, the cost to monitor per sensor is very similar to wireless, even MEMS-based, which is a lower-cost sensor. So the only negative, shall we say, with monitoring with fixed sensors is that you have to wire the sensor in, versus wireless. But we know that wireless has limitations with number of sensors and so on.
Well, when we consider all the other parameters that we monitor within a plant, flow rates, everything coming into the control system that's giving the operators all this feedback to know that the plant is operating correctly and to the right performance. When we design plants going forward, maybe we should be considering the fact that we want to retrieve all the asset health data too. And if the cost gets to a point that is tangible and achievable, where we can take all that data and process it with machine learning and artificial intelligence, I think we're gonna be using trusted methods of wired sensors. That's how my gut instinct leans.
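The wired-versus-wireless cost convergence Will describes can be sketched as a simple per-point comparison over a monitoring horizon. Every figure here is an illustrative assumption, not vendor pricing, and the cost categories (cabling, channel hardware, battery swaps) are just the obvious ones, not a complete model:

```python
# Hypothetical cost per monitoring point over a 10-year horizon.
# All figures are made-up assumptions for illustration only.

YEARS = 10

def wired_point_cost(sensor=120, cabling_install=250, channel_hw=100):
    # One-off costs: 100 mV/g piezo sensor, cable run, acquisition channel.
    return sensor + cabling_install + channel_hw

def wireless_point_cost(sensor=350, install=50, battery=30,
                        battery_life_yrs=3):
    # Sensor plus quick mount, plus periodic battery replacements.
    replacements = YEARS // battery_life_yrs
    return sensor + install + replacements * battery

print(f"wired:    £{wired_point_cost()}")
print(f"wireless: £{wireless_point_cost()}")
```

With these placeholder numbers the two land within a few percent of each other, which is the shape of the argument in the episode: once per-point costs converge, the deciding factors become data quality, software longevity, and fit to the failure mode rather than hardware price.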

SPEAKER_01:

I agree, and I think as well, when you look at the limitations of how things are processed and made, this is the major issue. Not every single asset on the plant is running 24/7 in the same way. Some plants operate that way, but in the majority that we go to, there's a very high percentage of assets where the model just doesn't fit. It's even difficult to take handheld data on some of this stuff, you know. And not even that: varying loads, varying changes. Once you've got more information with more variation in it, it doesn't give you any more clarity, does it? This is what I'm talking about. More information needs to come with better decision-making. And if it doesn't, where's the value? The majority of equipment, the way it runs, doesn't give us that. It actually gives us more complexity, more to look at, which doesn't give us the clarity. And it goes back to the saying: we're drowning in information and suffocating for wisdom, right? We need information, and I'm not saying that wireless data is poor. I put a post out the other day about BWSC. We've got the FD fan, we've got the ID fan, some of the major equipment, just on some Sparrows. It doesn't mean we don't still take VA readings on them when we're there, though. No, we don't skip them, we've got a whole route to do.

SPEAKER_00:

But as well, BWSC is 24/7, and we've done the tests. We went there, we trialled the Sparrows to show that the load variability wasn't there. We didn't just go, oh, here's a load of Sparrows. We trialled it, we made sure that the loads didn't change that much. We wouldn't go and put them on the stokers.

SPEAKER_01:

No, because the load on the stokers changes, and that data we might have, but it doesn't give us clarity. It's just info that doesn't give us anything, because when we do get to alarm, well, was the load high, or was it a big impact? That's why we need to be by the machine when we're doing the stokers, so that when we're testing, we can be there, we can see the machines move, we know it's not an impact on one of the gears or the sprockets moving quite viciously, we know it's not load variation. We're there as humans to control the data collection, and this is the problem. You can't do that with a sensor. You've just got information coming in.

SPEAKER_00:

We want to be as confident as possible in the recommendations we're putting forward. We don't want to be saying, well, we're not at the machine, so it could be this, can you just check that, or it could be this or that. Was the load changing when that happened? Can you just check back in your process data?

SPEAKER_01:

Like, we don't want to be doing that where we don't have to. And you look at BWSC, perfect example. We've been doing that contract three, four years, and they've got some real big equipment. The main strategy there is route-based monthly VA. We've identified probably about six or seven major problems quite easily with handheld data collection, right? Yep. We've not had, touch wood, I hate to say it, a single thing fail on us there because we haven't had enough information through monthly data collection. And that's even though I know we've got roughly four readings a day on the Sparrows. It's not continuous by any stretch, but we've got regular data coming in on those other sensors, and it hasn't actually given us any more value, really, if I'm being honest. It's nice to see the trends. And I can understand why the client wants it, because they can see a better representation in terms of protecting the asset, but it hasn't actually given us any more information than what we get on the monthly handheld.

SPEAKER_00:

When you're doing an RCM study or whatever on an asset and you determine that it needs an on-condition task, you have to ask yourself, well, what's my P-F interval? How frequently do I need to gather the data? Exactly. And what other technologies might be able to tell me this a bit earlier than what I think?
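The P-F reasoning here can be sketched in a few lines. A common rule of thumb, assumed for illustration rather than stated in the conversation, is that the inspection interval should be no more than the P-F interval divided by the number of chances you want to detect the defect before functional failure:

```python
# Illustrative sketch of P-F interval reasoning (rule-of-thumb assumption,
# not a statement of any specific standard): the P-F interval is the time
# from a detectable Potential failure to Functional failure, and the
# inspection interval must be comfortably shorter than it.
def max_inspection_interval_days(pf_interval_days: int, chances: int = 2) -> int:
    """Longest sensible inspection interval: P-F divided by desired detection chances."""
    return pf_interval_days // chances

# A bearing defect with a ~6-month P-F interval comfortably suits a monthly route:
interval = max_inspection_interval_days(180)  # 90 days, so a 30-day route is ample
```

On this logic, a monthly route already gives several looks at most slow-developing defects, which is the point being made about month-by-month readings usually being enough.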

SPEAKER_01:

There's very little there, though.

SPEAKER_00:

There's very little there, and there aren't many cases where a month-by-month reading isn't enough. So when we do consider things like wireless sensors and even continuous monitoring, unless it's specifically critical and requires it, these are nice to have, not essential.

SPEAKER_01:

And I think what we've got to get right, and don't get me wrong, route-based data collection can be done poorly as well. It comes down to the strategy and what you're doing, and also the engineer that's doing it. Are they being vigilant? Are they looking at things, or are they just taking the data? You've still got to be a good engineer when you're doing it. But what I'm saying is, at this present moment in time that approach is still going to be vital for most industries to get to the next stage. I'm not saying that in 10, 15, 20, 30 years we won't be somewhere else; I don't know what's going to happen in 10, 20 years. But right now, route-based data collection is still the majority of the way that we do our work.

SPEAKER_00:

And when we say it's a nice to have, having more data, we can argue that more data makes us a bit more confident in the diagnostics. But really, the companies we work with that are, shall we say, more interested in having more data want it so they can harvest it and put it into systems to learn. And if your business is in that zone, where you're interested in gathering lots of data, not necessarily because it's the right thing to do but because you want to be leading the journey of AI and machine learning, you can 100% do that. I just personally don't feel that's with wireless sensors, when all other data in a factory is acquired with a wired sensor on the machine or a flow sensor in the pipework, and when wireless technology evolves incredibly rapidly. Stability is what businesses want more than anything. Mars, who we do some work with at one site, we've had lots and lots of discussions on this, and there's reluctance around wireless sensors because of how quickly the software evolves. Whereas a 100 millivolt per g sensor that is wired into a cabinet...

SPEAKER_01:

Well, I'll be honest, mate, we're still testing some of these sensors that were installed ten, twenty years ago. And they still give great, reliable results. So when you're looking at reliability of data, that's another thing: how reliable is that data? How reliable is that signal?

SPEAKER_00:

You know that if you've got a wired sensor and it's been looked after, it's going to give you a good set of data. There's nothing that's going to stop that. If it's wired into a control panel with an MVX or whatever system is monitoring it, these are robust sets of hardware, just like your remote I/O cabinets or control cabinets that are recording process parameters. It is robust hardware. You haven't got to worry about data drop-off, you haven't got to worry about SIM cards, you haven't got to worry about it.

SPEAKER_01:

Or about wireless sensor data and how much you've got to send, and the battery life, and all these other things.

SPEAKER_00:

And I do think that in 20, 30 years we're going to be in a really interesting place, with lots of data on our asset health, maybe we're pulling up graphics or wearing Meta glasses, whatever it looks like, right? But data is data; it's the way that we acquire it. Having more data than once per month is generally a nice to have. It won't make you detect the defect any sooner. But if you're an organisation that is trying to lead in terms of AI and machine learning, you do need to evaluate which method is going to give you the most long-term value from that information.

SPEAKER_01:

100%. And that's where we need to be at, because customers are already spending a packet on having more of what's not going to give them any more value. That money needs to go to reliability, to design, to getting reliability engineers in, to building the strategy. It doesn't need to be wasted in areas that won't give any more value. What we can do at Maintain is really quickly ascertain where that balance is, where the gaps are. You're spending a lot of money over here, but you've got no planner or scheduler; what are we doing? Some of these things are quite obvious for us to see from where we're standing, but it's hard sometimes when you're inside the organisation, because you're in it and you don't see it. What we can do is understand very quickly where that money and that budget are being spent and utilised, and whether the right amounts are going to the right places. Because if your strategy is out of balance in that sense, you're not going to get the results. And the problem is, as time goes on and you don't get the results while spending more money on the wrong things, what you get is kickback against spending money on the right things, and then you're never going to be in the right place either.

SPEAKER_00:

We've had it, you know, where we've gone to a site that's very reactive. You can see they're running around all over the place, and they want a quote for wireless sensors on all of the machines, but they have no planner or scheduler, they have no condition monitoring in place whatsoever, they don't install their bearings with precision, and they don't do any laser alignment, they just do everything with a straight edge and a ruler. But they're willing to spend thousands of pounds of their budget to put sensors on everything, which is going to give them nothing.

SPEAKER_01:

Again, it's the same analogy, isn't it? You're buying loads of plasters to put over wounds that a plaster can't heal. You've got to look at the way you're doing things, and this is a really hard thing. It's the culture, it's understanding what is required and needed, and also a lot of people struggle to spend money on things that aren't tangible. When you buy a sensor, you can see that nice pretty sensor on your machine. Yep. But is it giving you any value, though? You can buy an FMECA study off someone, right? But if you don't implement it, if you don't look at the machines and put maintenance actions behind it, or there's no one to do those things after the study, then what's the point in doing the study? You could get a nice, pretty vibration report put onto your desk, and it might look lovely, with all this information in it. But if you're not going to do anything with it, it's a pretty piece of paper. This is the problem with data. Is it being harvested into something meaningful that's going to improve the way you're doing things, or is it giving you more fluff, more noise? Where's the noise? Where's your clarity? And don't get me wrong, we're big believers in data. We've got loads of online systems on loads of different things. We're not saying it's not the way forward. But what we've got to do is look at it from a realistic point of view: where do we spend our money most effectively? That's what good reliability practice is. It's a bit like your own bills. Where am I spending all my money? Am I being effective where I'm spending it? Am I getting the most out of my gym? I might be spending loads on a gym membership and never going. Well, is that effective?
You might have a Whoop band getting all this information and all this data in. Am I using it? I don't even look at the app. It's the same thing with a condition monitoring system, isn't it? Until you take actions built from that data, if you don't change anything and you don't do anything with it, then it's valueless. Yep. We've got to look at how we make this data valuable. And I'm not saying don't ever put an online system in; it might be really valuable to do that. But we've got to look at the failure modes behind it, the reasons behind it, the strategy behind it, and with a sound strategy we've got to think critically about the way that we're doing reliability.

SPEAKER_00:

Here's another really good example. We've got a customer up north that Tom looks after, that I went up and visited for the new year. It had got to a point where they'd added this, they'd added that, and we were just testing every asset under the sun on that site; every rotary valve doing like 10 RPM, we were just testing everything. I went round with Tom and he said, look, I'm testing all this stuff. I know we shouldn't be doing it, but they've just asked us to do it. So I sat down with the engineering manager in the end, and we had a really good conversation on the subject. I said, look, we're testing everything in your factory under the sun. Tom's using his days to do all of this, and you're not even doing the things that we are telling you to do on the really important stuff. You acknowledge that you're very reactive right now, and you acknowledge that there are elements we could potentially help with, first and foremost reviewing the maintenance strategy on some of your important machines to make sure you're doing the right things. I said to him, you can't justify a reliability engineer right now; we need to demonstrate to the business that you need someone dedicated to that, which he knows. So rather than just using us to test loads of things to give you a report that you don't really look at, because you don't have the time to fix anything because you're so reactive, let's just test the really important things that are going to bite you. The things that when I tell you there's a problem, you go, oh my goodness, there's a problem. And let's use Tom's time to start the reliability journey; use him as your reliability engineer to start demonstrating the value to the business, so you can go and say, look what we achieved with Tom supporting us, I want to go and get a reliability engineer.
They can come on an ARP course with us and we can start mentoring them to deliver what the plant needs. What maintenance should we be doing? How do we design our assets? Exactly.

SPEAKER_01:

What acceptance testing do we actually even have for the things we bring in here?

SPEAKER_00:

Who does our repairs? I mean, what's our repair standard? How many customers, how many factories get their motors repaired and have actually ever visited the site where they get their motors repaired?

SPEAKER_01:

And what metrics are you questioning that you get back against those repair standards? When we built our repair centre, we had to completely reimagine the way that we do repairs. Some of that information, the test results, some people may think isn't valuable, and again it is just data, but it also demonstrates that the repair has been done in a way that is reliable. And that's really important. I mean, how many people ever request that information?

SPEAKER_00:

I mean, you were saying most repair centres don't test the rotors? No. And that's where most of the defects you and Mark have been finding are; a huge amount of the defects we find are in the rotor. I mean, didn't he find problems on two brand-new motors?

SPEAKER_01:

Two brand new motors; they weren't repairs. And what was really ironic about this: they were bought through a carbon fund, so they were IE4 motors. The whole point was that we were going to replace two inefficient motors with two efficient motors. It was part of a buyback scheme, basically a rebate: when you spend X amount of money you get money back, but you need to use it in a carbon fund. Now, I'm not against energy efficiency, but I am against really poor metrics and having the wool pulled over your eyes. When we test these motors, one of the tests that really does identify motor efficiency is how well balanced the motor has been wound in terms of the electromagnetic field through the three phases. The balance between the phases is something called impedance imbalance, right? The more out of balance one of the phases is in terms of how it's been wound, basically one of the electromagnetic fields is slightly out, so the field isn't concentric and that phase pulls a lot more current. It's measured as impedance imbalance: you measure all three phases and how the electromagnetic fields balance against each other. And when we get a high percentage out of balance, it has a huge effect on motor efficiency. Okay. Two of these motors that were supposed to be IE4 had higher than 15% impedance imbalance. They were bought to be efficient, and the old ones we tested, the ones that were supposed to come out, had under three percent. So on these new ones, even though they're IE4 rated, the impedance imbalance, which could be in the rotor or could be in the stator, it depends, it's not always obvious where it sits, has just eradicated the benefit. Yeah, 100%.
Straight away. And this is something that Mark and I are working on right now: repair centres don't have standards in their acceptance testing that include impedance imbalance as a test to warrant acceptance. Okay.
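As a rough illustration of the measurement being described: phase imbalance is commonly expressed as the maximum deviation of one phase reading from the three-phase mean, as a percentage of that mean. The function and readings below are illustrative assumptions, not test data or a specific test-set formula from the episode:

```python
# Illustrative sketch: percentage impedance imbalance across three phases,
# expressed as max deviation from the three-phase mean, as a % of the mean.
def impedance_imbalance(z_a: float, z_b: float, z_c: float) -> float:
    """Return % impedance imbalance for three per-phase impedance readings (ohms)."""
    phases = (z_a, z_b, z_c)
    mean = sum(phases) / 3
    return max(abs(z - mean) for z in phases) / mean * 100

# A well-wound motor: phases within a couple of percent of each other.
good = impedance_imbalance(10.2, 10.0, 10.1)   # ~1%
# The situation described in the episode: one phase well out of balance.
bad = impedance_imbalance(10.0, 10.1, 13.5)    # well over 15%
```

The point of the story is that a single out-of-balance phase drags extra current regardless of the efficiency class stamped on the nameplate, so a test like this belongs in acceptance criteria.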

SPEAKER_00:

Is that also what Howard's lot are trying to put into the standard, aren't they?

SPEAKER_01:

They are, yeah, they're trying to put it in. But it doesn't stop you as a business saying, these are my standards; this is what I want out of my motor. So when we look at failure modes, when we buy a new piece of equipment, we need to look at all the failure modes. When you're looking at the failure modes of the motor, you need to understand the failure modes of the rotor. Do you know what I mean? What tests are we doing to demonstrate that the rotor I've got is good to come into my plant? And the problem is no one's doing these tests. So what we're finding is that a lot of these motors are being accepted in because people are not doing the right tests, just simple basic tests. And five years down the road you've got an ineffective, inefficient motor, then you get a winding defect, and that winding doesn't last even 30% of the motor's life cycle. And when you're talking about a 50-grand motor, if you've got three or four of them on your site that haven't reached the 20 to 30 years they should, and they've only reached five or ten, then it's quite a problem, in my eyes anyway.

SPEAKER_00:

Do you know what I mean? And all these things, I'm just now thinking, that's just one more element, mate, one small example. I was thinking back: Gary came down to Mars with us just before Christmas, and he showed us the power of Isograph; we had a really great day there. If anyone's listening and hasn't seen Isograph before, wow, it's a very powerful tool. If you're going down that reliability journey, looking at reliability-centred maintenance or trying to determine from your machines how best to maintain them, it's a tool we're now looking at getting involved with as well, using Gary as support. One of the things that's always very hard in condition monitoring, and in reliability, is being able to say: this is the cost; this is how much you're wasting; this is what it's costing you to maintain that asset. What Isograph allows us to do is look at the machine, see the ways it functionally fails, look at the failure modes, and put in the consequences, the effects on production and how much they cost. We put your downtime figures in, we can put your labour cost in, we can put spares in if we have to. Then, based on good engineering judgement, or on failure data from the maintenance management system, we apply that to the failure modes and the consequences, and we run what we learn in ARP, which is Monte Carlo simulation. The system will do that automatically and say, well, based on this machine, based on the failure modes, based on all the information you've provided, over the 10-year life of this asset it's going to cost you 1.2 million to maintain it.
Your risk factor for safety is this, your risk factor to operations is this, and these are the potential maintenance strategies you can choose from: run to failure, redesign, and so on.
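The kind of modelling being described can be sketched with a toy Monte Carlo simulation. This is not Isograph's actual method, and every number below is an illustrative assumption: draw random failure counts over the asset's life, cost each run, and average across many runs:

```python
import random

# Toy Monte Carlo sketch of life-cycle cost comparison between strategies.
# All rates and costs are made-up illustrations, not real plant data.
def simulate_strategy(annual_failure_rate, cost_per_failure,
                      annual_maintenance_cost, years=10, runs=10_000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        # At most one failure per year: a crude Bernoulli-per-year approximation.
        failures = sum(1 for _ in range(years) if rng.random() < annual_failure_rate)
        total += failures * cost_per_failure + years * annual_maintenance_cost
    return total / runs  # expected cost over the asset life

# Run to failure: no maintenance spend, but frequent expensive failures.
run_to_failure = simulate_strategy(0.30, 120_000, 0)
# On-condition monitoring: annual spend, far fewer unplanned failures.
on_condition = simulate_strategy(0.05, 120_000, 8_000)
```

With these made-up figures the on-condition strategy comes out far cheaper over ten years, which is exactly the sort of side-by-side comparison leadership can act on.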

SPEAKER_01:

If you do it this way, this is how much you save; if you do it that way, this is how much you save. So you can model it out, especially on your really critical assets. And as part of that whole strategy, especially around motors, we're working with British Gypsum, a big company, and the motors aren't small. You're talking huge combustion fans, rock mills; the motors there are big, and you've got a lot of them as well. So if you think about the functions in that process and how you manufacture, motor reliability is a huge part of it.

SPEAKER_00:

Well, take that as an example with Isograph. You come to the motor and you can say, well, one of the failure modes is the winding's going to go. How are you going to mitigate that? You've got a few options. Are you going to just replace the winding on a set time interval? No. Are you going to redesign it out? Probably not. So you're going to have to do some form of on-condition task, or you're going to run it to failure.

SPEAKER_01:

Yeah. Or you're going to make sure that, first of all, you've got a good one. And this is part of the problem, isn't it? That's the thing.

SPEAKER_00:

That comes into defect elimination and design.

SPEAKER_01:

But that's the thing, though, isn't it? A lot of the maintenance tasks after defect elimination are only there to prolong the life of what we've got.

SPEAKER_00:

Look at it from that point of view. Say it's a good motor, you know that from the design, you're not going to redesign it, you're not going to replace it on a time basis, and you're probably not going to run it to failure. So you need to do some type of condition check on it, and that's a lot of the work that we do. Again, within Isograph we can document all of that. We can say, okay, if the motor did go, it's going to cost this. We're going to do an on-condition task, and we're going to have Will and Mark come once a year to test our motors, and we put in how much it costs for them to come and do that. It can do all the calculations, look at all the numbers, and say, well, if you didn't have Will and Mark come in to test this motor, this is what it would cost you and this is how many failures you'd have. It's a prediction, so it's all based on probabilities. Could you also, though, and this is another interesting thing...

SPEAKER_01:

So say, for example, you do monthly vibration on something, and then you said, oh well, I've got a wireless sensor here. Could it then work out the implication, how much extra you'd save if you were to monitor it more often?

SPEAKER_00:

Yeah, because what you would do is pick vibration as the task. So say you had a vibration task.

SPEAKER_01:

Okay, well, we've got more data, but does it actually save anything more?

SPEAKER_00:

You could be really clever about it. What you could do is say, well, this is the bearing failure mode, and I'm going to do handheld vibration analysis, and there's an option for the cost per month of that. So you put in the monthly cost for your condition monitoring provider, and then it has a detectability score, so you say how likely you are to detect the failure mode; maybe, I don't know, it's 0.8. You could then do the same for a wireless sensor or continuous monitoring. With those, you have the capital cost of the sensors, the cost of installing, and the ongoing cost. So you put that in for both, and then you put in detectability. If we were doing it as a test, we might actually give wireless sensors a lower detectability, because we know that most of the time they get installed on the wrong machines, or in the wrong places, or on the fins of the motor. But if you were doing it in truth, saying no, we're going to do a good install here, it's appropriate to the failure mode, the machine runs consistently, then you might make the detectability a bit higher, because with more data the chances are we detect it better. You could run all those models and look at: what would it cost me to have someone collect the data with handheld data collection across the life of the asset, and how many failures am I likely to incur, versus a wireless system? And you could compare the cost.
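The comparison being walked through here can be sketched as a simple expected-cost model. The detectability scores, costs, and failure counts below are all illustrative assumptions, not figures from any real site or from Isograph:

```python
# Illustrative sketch: expected life-cycle cost of a monitoring strategy =
# monitoring spend + cost of the failures it fails to catch.
def strategy_cost(expected_failures, cost_per_failure,
                  detectability, capital_cost, annual_cost, years=10):
    missed = expected_failures * (1 - detectability)   # failures not caught in time
    monitoring = capital_cost + annual_cost * years    # capital + ongoing spend
    return monitoring + missed * cost_per_failure

# Handheld route: no capital cost, modest annual provider cost, 0.8 detectability.
handheld = strategy_cost(expected_failures=4, cost_per_failure=50_000,
                         detectability=0.8, capital_cost=0, annual_cost=6_000)
# Wireless: capital outlay plus higher ongoing cost for slightly better detectability.
wireless = strategy_cost(expected_failures=4, cost_per_failure=50_000,
                         detectability=0.85, capital_cost=15_000, annual_cost=10_000)
```

With these assumed numbers the handheld route comes out cheaper: the small detectability gain doesn't offset the capital and ongoing cost, which mirrors the conclusion drawn a moment later in the conversation.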

SPEAKER_01:

And that would be interesting, because I think it would also identify: do we need to be spending this much money? When it comes to online systems on critical assets, you've got a case to do that, because you want your most important assets monitored in the best way. But when you're looking at a model that says, right, I want to put hundreds of assets on the same strategy, which a lot of companies are pushing because that's the only way they make money, because they have to think of it as a scaled-up option. If you're a company that just wants to sell sensors, you've got to sell a lot of them to make it worth your while. I think there's a business bias there.

SPEAKER_00:

I think you'd find, if we did some tests, and we can do this and maybe share it later, maybe we'll do a bit of data modelling. I think you'll find that, generally speaking, your risk and your number of failures will reduce slightly by having a continuous monitoring or wireless system, if it's implemented correctly, but your cost of maintenance, because of that ongoing cost, will increase to the point where it's not really worth it.

SPEAKER_01:

Yeah. And that's what we're talking about. We're not saying these systems are not good. I mean, would you prefer to be monitoring on a continuous system? Yeah, let's be honest. We're not going to lie, we're data analysts; we've got a lot of these systems out there. But what we're saying is, it's not just one strategy: put loads of wireless sensors on and nothing else, and believe that that is the way to go, that that is the strategy for all things, and that that is the only way we're going to get over this skills shortage. That's a pile of crap. That type of talk is dangerous to this industry and undermines everybody that has been doing this for years the route-based way, and been scaling the model up, installing sensors aligned to criticalities and strategies. It undermines those people. The reality is, the way we get over the skills shortage is by empowerment, by training, training the right people, making it sustainable within organisations, understanding what vibration analysis really means. Not, oh, we're just going to collect loads of data. What does it mean? How does it have a positive impact on our business and our maintenance strategies? Where are its limitations? These things need to be taught. We can do that very well. We can empower people to do that, bring people into the experience of doing it, going out, collecting data with the observations, with their PMs and so on. And remember, if you want a good, robust maintenance strategy, you need consistency. Consistent good practice with good training is how we get over the skills gap.

SPEAKER_00:

Even with AI and machine learning and all the exciting things coming with generative AI. Funnily enough, AI isn't going to be able to go and change a bearing or laser align a fan anytime soon.

SPEAKER_01:

No, it's not. But not even that: you can still utilise this data for AI even if you're doing route-based collection. You can still extrapolate information that you input into a CMMS to keep building that picture. So it does mean something; you can use that information. Just because it's not being collected by an automatic sensor doesn't mean that data can't be used for AI models.

SPEAKER_00:

The challenge is, I think, that most of the bigger organisations leading on this recognise it; they're trying to bring all their data together in one place, and that is valuable. A lot of the time, the ones who suffer from the push in the wrong direction, shall we say, are the mid-sized customers who are buying into the idea that it can solve all their problems.

SPEAKER_01:

Great, we're just going to spend loads of money on this, and instantly everything will go away. But there has been a lot of misinformation sold out there, a lot of false narratives pushed to suit business models. And I understand why people do it, because there's money to be made. But the reality is, you can't pull the wool over the eyes of people in this industry who've been doing it for many years. They've been there, they've done it.

SPEAKER_00:

I think if you're in a situation where maybe you're being pushed by senior leadership to move down this route, you can go back to them and say, well, before we do this, I need to do some asset reliability training, or you need to come on it too, so that you're equipped with the knowledge to go back to the business and say, I appreciate this sounds really good right now, but we've got a lot of other gaps first.

SPEAKER_01:

We have got so many other things that we need to work on first so this can become valuable, because a lot of the time they're investing money in the wrong area. You spend all that budget on sensors, but you don't have a planner or scheduler, you don't have maintenance teams within your organisation, you haven't got people doing meaningful PMs; you've got loads of data coming in and no one actually doing any proactive maintenance, which is what's going to prevent these failures from happening. So again, it's all about what gaps you've got and where you actually are in your journey, and we can really help you see where that is. Yep.

SPEAKER_00:

I think that clarity goes back to the whole thing we discussed: that's Maintain, isn't it? That is us. How do we share the knowledge? How do we do it on the podcast?

SPEAKER_01:

Empower people, yeah. Keep sharing the knowledge, keep sharing the awareness, keep beating the same drum so people realise what we're about. There's nothing we're actually here to sell; we're just trying to see where your gaps are and help and empower you to go and fill them, so you can have the most effective maintenance strategy you can implement. And when you do that, it's difficult, because you haven't got a business driver, you haven't got a KPI sales metric; we don't need to sell 20 sensors a month, or hundreds of sensors. It's not like that. What we're doing is selling knowledge and empowering people, and it's hard, but we know it's what the industry needs, because if the industry embraces this, embraces the training, embraces the awareness, we can turn things around quite quickly. Look at some of the customers we've been working with over the last two years. Look how amazing those journeys have been. Yep. We're not even fully there yet, but look at the impact we've made by training people, by getting people involved. And look how passionate they are, look how much they love their jobs, because they can see it making a difference.

SPEAKER_00:

You know, I say it to everyone that I do the asset reliability training with: I think we have a very good reliability community in the UK, and that was really demonstrated at Mainstream last year. 100%. And it's a great team of people, because all we want is the best for the plants that we work within. All we want is to maximise their performance. It's just about making sure, as you said, the focus is on the right areas, not the shiny objects.

SPEAKER_01:

Shiny objects in the right areas. I'm not saying the shiny objects don't have value, and I'm not saying we won't introduce some of those systems, because sometimes they are really valuable. But they've got to be done in a way that fills the gaps where the business needs to go first, so they see the maximum value. And that can be a challenge. I'm not saying it's easy, but what we can do is work together as a team, because we don't do it individually, and neither do the sites that we work with. We work together as a team to get to those outcomes, and it's not always overnight. But when we're heading in the right direction and everyone's aligned with the vision and everybody's seeing it, that's when real magic happens, mate. 100%. Definitely. So on that note, thank you to everyone that's tuning in. It's going to be an exciting 2026, to say the least, isn't it, mate? We're going to keep on spreading the awareness. Please go follow the new TikTok page, follow the new socials. We're going to be posting a lot more this year, aren't we? We've set some real good goals for both of us to keep posting and keep spreading this knowledge and awareness, because let's be honest, it's what made Maintain Reliability, hasn't it? So cheers for tuning in, guys. Catch up with you next time.