Dear Desktop Engineering Reader:
So, we were using a creaky, 20-year-old Unix system with its sluggish old SGML-driven word processor. No layouts, no pics, no visible font effects, and a lot of brackets and pauses. The boss tells me to suss out a new desktop publishing system. The company adopts my plan. But — isn’t there always? — one of the editors goes all Bartleby the Scrivener on me. She preferred not to migrate to the new desktop publishing environment. So she ignored it and kept working on the old system a year longer than anyone else. We finally ripped the old terminal off her desk in a scene out of “Nightmare on Elm Street.”
My colleague much preferred to stick with the tired, tried, and true, even though it required a line of code to do what a single keystroke could do in MS Word and took minutes to call up a file. It worked, after all. Everyone else just needed to cope. Yes, and you can commute to work on a burro too.
A lot of companies are in a similar, if understandable, state of intransigence. They have a first- or second-generation PLM (product lifecycle management) system in place. There’s a lot of time, money, and ego invested in it. It’s difficult to accept that an enterprise system’s time has passed. Maybe more precisely, it’s difficult to accept that it’s passing away your time and passing out your money. It still works, after all.
The simple fact is that the performance of your old PLM system can be deadweight on your productivity and profitability. Your CAD file management overhead and your processes are likely more complex than they were a handful of years ago. Your files and associated metadata have grown, are growing, and will keep growing dramatically. And that ever-increasing load can lead to a creeping drag on your efficiency, costing you lots of money in insidious ways. You know, a few more seconds waiting for a file to load, or fail to load, at every desk, and pretty soon you’re talking real time and money.
Today’s Check It Out link takes you to a little gem of a piece on PLM productivity. It’s a spot-on piece of work.
“Boosting Performance Using Next Generation PLM from Aras” is a four-page white paper written by T-Systems International, a German global IT services and consulting company. Its basic thesis is that performance is a critical attribute of any enterprise-wide software system. Without quick, responsive performance, users get frustrated, potential productivity gains get lost, and your ROI (return on investment) gets diluted.
What T-Systems reports here are the performance improvements achieved after replacing an operationally proven, well-maintained PLM system with a modern-generation PLM system. The replaced system was based on high-end database technology and Unix hardware, and it had been optimized for more than 10 years. The paper runs through the scope of the migration project as well as the old and new hardware and software setups.
What’s neat is that the paper focuses on just one key aspect of the old PLM environment vs. the new Aras PLM environment: loading a large assembly. Large means more than 1,000 CAD objects. Six different assembly structures ranging in size from 50MB to 450MB were loaded and observed at least a couple of times in both the old and new environments. Some of the files wouldn’t load at all on the old setup, and all of the files loaded a lot faster on the new one.
Lots of interdependent factors, of course, affect your performance — like your choices of hardware, databases, and network configurations, as well as any customizations you’ve made. And your mileage may vary, even on professionally executed performance tests like these. But none of that takes away from the fact that this is a fascinating snapshot of the world of difference between the performance of an old-generation PLM system and its modern counterpart. Hit the link and see for yourself.
Thanks, Pal. – Lockwood
Anthony J. Lockwood
Editor at Large, Desktop Engineering