Modernization
Micki worked for ConCom, a huge multinational development consulting company with offices on four continents. ConCom, in turn, assigned Micki's team to another multinational corporation, one that was looking for an ERP upgrade. Picture six developers in one little loft office, complete with dartboard, while the architects, POs, SMs, and the like were on another continent in a totally different time zone. At first, they worked on small tasks, proving themselves capable of handling the big upgrade project, and eventually details started to emerge about what the client really wanted.
The current ERP was fairly unremarkable, if entirely dated. The bulk of the operations were scheduled jobs that processed CSV files uploaded from the branches to the central server. This was the system they intended to upgrade: ideally, with real-time messaging using a "top of class messaging broker and middleware." The catch, as always: they had to do the upgrade without any downtime for the ERP system, so it had to happen in phases, one CSV at a time. Still, it should be an easy win, right? They would work on the branch side, and a central team would handle the central server side. Easy peasy.
But of course, nothing is ever that simple here at TDWTF.
The first segment to be upgraded dealt with contracts. The branch would upload a "new contracts" file, and the central server would reply with a "processed and rejected contracts" file when it finished processing them. This all had to happen by 7:30 AM so that the information would be ready when the branch opened for the day. It should have been an easy replacement: both files had a well-defined format, making it easy to build real-time communication around them.
The catch? The central ERP knew nothing of CSV files at all. It only dealt with XML. So how were these files even being processed?
As it turns out, there was another piece of software involved: a Java application from 1996, written by a developer who had long since retired, that handled the CSV-to-XML conversions and back. Thankfully, the source code was discovered, so they could at least read the comments and go hunting for the database it used, since nobody knew it even existed, let alone where it lived.
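For a sense of what that 1996-era bridge likely did, here is a minimal sketch of a CSV-to-XML conversion. The assumption of a header row, the element names, and the lack of any escaping are all guesses; the real layouts were never documented.

```java
// Hypothetical sketch of CSV-to-XML bridging, in the spirit of the 1996 app.
// Column names are taken from an assumed header row; nothing is escaped,
// much as one would expect from code of that vintage.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CsvToXmlBridge {
    public static String toXml(Path csvFile) throws IOException {
        List<String> lines = Files.readAllLines(csvFile);
        String[] headers = lines.get(0).split(",");          // assume a header row
        StringBuilder xml = new StringBuilder("<contracts>\n");
        for (String line : lines.subList(1, lines.size())) {
            String[] values = line.split(",");
            xml.append("  <contract>\n");
            for (int i = 0; i < headers.length; i++) {
                xml.append("    <").append(headers[i]).append(">")
                   .append(i < values.length ? values[i] : "")
                   .append("</").append(headers[i]).append(">\n");
            }
            xml.append("  </contract>\n");
        }
        return xml.append("</contracts>").toString();
    }
}
```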
After weeks of studying and documenting how the ERP worked, the development team was suddenly called (after hours) into a meeting with the PO, SM, architects, client directors, and C-level execs. The project was touted as the "biggest project in the last 10 years," one that would "save the company millions," and development would start tomorrow. The deadline? Three weeks, or 250 man-hours. The entirety of their findings had been ignored in favor of the "analysis" of an architect on another continent who never spoke to the dev team and never showed up again.
But it was time to implement! And that meant setting up the message broker, which the team had been given no details on, not even its name. When they complained, one of the developers was assigned to "remove all these blockers" instead of working on features. To his credit, he tried: he called just about everyone he could think of, signed a few NDAs, and obtained a new security clearance. And he finally received it: access to the broker and the message layouts. No documentation, no diagram, just the layouts; they had to guess how to map the CSV files onto the message layouts, as well as whether each one was a request or a response. The layouts had incredibly helpful names, too: "AB", "ABC", "PB", "PBC", and "Rejected." It was clear what "Rejected" was, but the rest? Only through trial and error did they find out that "ABC" was the reply to "AB," and likewise "PBC" to "PB."
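The mapping they eventually pieced together fits in a few lines. A hypothetical sketch, assuming the type names were the only metadata available; the actual message classes and broker API were never shared with the team:

```java
// What trial and error eventually revealed: "ABC" answers "AB", "PBC" answers "PB".
import java.util.Map;

public class MessageTypes {
    static final Map<String, String> RESPONSE_FOR_REQUEST = Map.of(
            "AB", "ABC",   // request -> its confirmation
            "PB", "PBC");  // request -> its confirmation

    static boolean isResponse(String type) {
        return RESPONSE_FOR_REQUEST.containsValue(type)
                || "Rejected".equals(type)
                || "GC".equals(type);  // nobody knew what GC actually was
    }
}
```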
A few weeks went by (the project now officially late) and they got one more piece of information: there was a sixth message type, called "GC." Was it a request, since it had only two letters? Or a response, since it ended in "C"? Nobody knew. They assumed that, since it had a "C," they would receive it from the central server at some point, but they couldn't figure out the circumstances under which it would be sent. So they forged ahead, ignoring it, as you do.
Another month of long, hard hours, and they were at the point where they could review and test the implementation. They produced AB and PB messages, using the business rules to determine which one to send, and expected a reply from the central server. The reply would be processed and displayed in a brand-new UI for the end user to review. So far, so good.
Then it happened. The Call. The three-hour conference call with the central team to sync up on how to roll out the implementation. This is when new rules were discovered, including the following (see the sketch after the list):
- ABC messages can be received before AB messages are sent
- PBC messages can be received before PB messages are sent
- GC messages will only be published by a single ERP instance, running on another continent, handling contracts for a single, very specific country. All AB and PB messages involving that same instance MUST wait for the GC message. (Daily? Weekly? Monthly? No idea.)
- Rejected messages can be published in two completely different scenarios:
- Automatic rejections, where the ERP did not understand the message. In this case, the rejection means that the original message cannot be processed without human intervention (bug fix and/or database normalization).
- Manual rejections, which may be published before (or after) an ABC or PBC message. If published before, the rejection should be ignored when the ABC or PBC is received. If received after, it should cancel and revert all processing from the respective ABC or PBC message.
- Sometimes an AB message will be answered with a PBC message, and a PB message may be answered with an ABC message.
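What these rules imply for the branch side is roughly sketched below, under heavy assumptions: the class, method, and field names are hypothetical, and the correlation key (a contract ID) is a guess, since the real layouts were never documented. The point is only that every reply has to be treated as something that may precede its request, some traffic has to be held for a GC message of unknown cadence, and rejections may have to undo work already done.

```java
// Hypothetical sketch of coping with the discovered rules on the branch side.
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class ContractMessageHandler {
    private final Map<String, String> pendingRequests = new HashMap<>(); // contractId -> AB/PB
    private final Map<String, String> orphanReplies = new HashMap<>();   // replies that arrived first
    private final Queue<String> heldForGc = new ArrayDeque<>();          // traffic gated on GC
    private boolean gcReceived = false;

    // Called when an ABC, PBC, or Rejected message arrives from the central server.
    void onReply(String contractId, String type) {
        if ("Rejected".equals(type)) {
            revertIfProcessed(contractId); // manual rejection: cancel and revert prior processing
            return;
        }
        if (!pendingRequests.containsKey(contractId)) {
            orphanReplies.put(contractId, type); // reply before request: hold and reconcile later
            return;
        }
        process(contractId, type); // the normal case: reply after request
        pendingRequests.remove(contractId);
    }

    // Called when the branch wants to send an AB or PB.
    void onRequest(String contractId, String type, boolean gatedOnGc) {
        if (gatedOnGc && !gcReceived) {
            heldForGc.add(contractId); // rule: wait for GC from the one special instance
            return;
        }
        String earlyReply = orphanReplies.remove(contractId);
        if (earlyReply != null) {
            process(contractId, earlyReply); // the reply already arrived; reconcile now
        } else {
            pendingRequests.put(contractId, type);
        }
    }

    void onGc() {
        gcReceived = true; // release heldForGc here, whenever GC actually shows up
    }

    private void process(String contractId, String type) { /* persist, update the UI, etc. */ }
    private void revertIfProcessed(String contractId) { /* undo anything already committed */ }
}
```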
Micki had to ask, finally: "WHY." Why on Earth were these insane business rules in place? What the heck was the process? Well, it turns out, the flow had been designed to mimic the exact manual process that had grown up around the CSV files. A manual rejection meant that a human went into the ERP and rejected a contract, which could happen at any point in the process. A reply could arrive before a request because a human operator at the central office might be on the phone with someone at the branch and enter a contract based on what they discussed, instead of waiting for the branch to send it in. The software would then have to infer, from the incoming message alone, all the fields the request it never sent would have contained.
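That last requirement boils down to reconstructing a request that never existed. A hypothetical illustration, with made-up field names, of what that inference looks like in practice:

```java
// Sketch of rebuilding a "phantom" request from an unsolicited ABC/PBC payload,
// so downstream processing has something to reconcile against. Field names are invented.
import java.util.HashMap;
import java.util.Map;

public class PhantomRequestBuilder {
    static Map<String, String> inferRequest(Map<String, String> reply) {
        Map<String, String> request = new HashMap<>();
        request.put("contractId", reply.getOrDefault("contractId", "UNKNOWN"));
        request.put("branchCode", reply.getOrDefault("branchCode", "UNKNOWN"));
        request.put("amount",     reply.getOrDefault("amount", "0"));
        // ...and every other field has to be guessed or defaulted the same way.
        return request;
    }
}
```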
Let's not forget, this was just the first leg of a huge project: every file-based integration was slated to be modernized into events. Not long after this, Micki moved on to greener pastures. Remember that three-week, 250 man-hour deadline? When Micki left, they were at 2,500 man-hours, with a greatly reduced scope and no end in sight. As far as Micki knows, they're still at it today.