tpierrain edited this page Oct 9, 2014 · 23 revisions

Day 5 (September 1st 2014)

A Logbook entry written by @Cyrdup

With Thomas out today, Ozgur, Tomasz and I started with a quick retrospective of the previous work in order to decide what to do next. We discussed whether unit tests were necessary for our pseudo-market class: they were not a significant output in themselves, but they are critical to the design of the system, and TDD is an excellent method to build an API. Therefore we added a specific test for partial execution.

We then adjusted the smart routing engine algorithm to green the second scenario, which brings real added value to the system by ensuring it sends an order on each market until the target quantity is reached.

Once implemented, the test started red, as expected. We changed the algorithm to enable partial execution.

Alas, the new scenario remained solidly red, throwing a NullReferenceException when checking that we did execute the expected quantity; a clear sign that something was amiss, with no execution happening at all. Meanwhile, the first test remained insolently green, demonstrating no regression.

We unanimously decided against debugging the test and focused on code review. A couple of minutes later, I realized that a notification condition needed to be relaxed as a consequence of the change.

Still red! But no NullReferenceException this time: NFluent clearly showed the executed quantity was above our target. This time the analysis was easy: our algorithm was requesting the full quantity on every market. This one was expected, as we had made a dummy implementation.

But something else bothered us: the first test was still green, yet we were positive we had introduced a regression.

This time, it was Ozgur who realized the conditions of the first test were slightly different, as the price on market B was above the desired target. Once the conditions were made similar, both tests were red.

As they say: don't trust a test you have not seen red 👍

Once the proper quantity decrementation logic was in place, our test was finally green. Which means we have delivered value to our customer.
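To illustrate what that quantity decrementation logic amounts to, here is a minimal sketch, in Python for brevity (the project itself is C#, and all names here are hypothetical): the engine sweeps the markets, asking each one only for the quantity still missing instead of the full target.

```python
class Market:
    """Illustrative stand-in for a market stub with a finite depth."""

    def __init__(self, available_quantity):
        self.available_quantity = available_quantity

    def execute(self, quantity):
        # Execute as much as possible, possibly a partial fill.
        executed = min(quantity, self.available_quantity)
        self.available_quantity -= executed
        return executed


def sweep_markets(markets, target_quantity):
    """Route on each market in turn, decrementing the remaining quantity,
    until the target is reached or the markets are exhausted."""
    remaining = target_quantity
    for market in markets:
        if remaining == 0:
            break
        # The fix: ask only for what is still missing, not the full target.
        asked = min(remaining, market.available_quantity)
        remaining -= market.execute(asked)
    return target_quantity - remaining  # total executed quantity
```

With this in place, a target of 120 split over markets offering 50 and 100 executes exactly 120, instead of over-executing by requesting the full quantity everywhere.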

Next, we are contemplating adding boilerplate code on top of that so we can have a scenario runner. This approach would allow us to describe a benchmark scenario in some file format and have it run through the system. Tomasz rightfully pointed out this would help us track our progress.
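The scenario-runner idea could start as small as a text format and a parser. The sketch below is purely hypothetical (Python for brevity, invented format and field names): it only shows the shape of turning a scenario file into market and instruction descriptions that could then be fed to the system.

```python
def parse_scenario(text):
    """Parse lines such as:
        MARKET A price=100 quantity=50
        INSTRUCTION buy quantity=120 price=100
    into lists of simple dicts. Format and names are invented here."""
    markets, instructions = [], []
    for line in text.strip().splitlines():
        kind, *fields = line.split()
        # key=value pairs become typed parameters.
        params = {k: int(v)
                  for k, v in (f.split("=") for f in fields if "=" in f)}
        if kind == "MARKET":
            markets.append({"name": fields[0], **params})
        elif kind == "INSTRUCTION":
            instructions.append({"way": fields[0], **params})
    return markets, instructions
```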

To be continued...


Day 6 (September 2nd 2014)

A Logbook entry written by @tpierrain

Today's session was without Ozgur (unfortunately tied up by production troubleshooting), and started with a very short catch-up for me (since I had missed the previous day's session). Very short indeed, thanks to the logbook entry written by Cyrille yesterday.

Anyway, we started this session by deciding what to do as the next step. Like Raylan GIVENS used to do with the bandits, we left ourselves 2 options:

  1. To continue on the functional path, by adding a new feature to force us to introduce Market Data as inputs for our SOR (the ultimate missing integration point if you remember the diagram drawn at the beginning)

  2. To start building an acceptance test tool (including performance runs) that will allow us to start measuring our SOR, and to run it the same way whatever technical implementation we put "inside" for comparisons (e.g. F#, C#-the-LMAX-way, C#-the-Michonne-way, etc.). OK, our SOR doesn't do much today, but the idea here is to iteratively build our measurement and test tools

After a short discussion, we decided to follow the functional path in order to have all integration points ready before we automate our stubs (for markets, for investors, etc.) and implement various technical strategies.

We wanted to add Market Data support, sure, but we needed a real use case that justified this introduction. This is where Cyrille suggested adding a test showing that the SOR is able to weight-average its execution across the various markets that offer the expected price. Functionally speaking, this feature allows the SOR to improve the odds of execution by not putting all its execution eggs on the same market (useful if we face difficulties executing on that market).

Ok then: we added this 3rd acceptance test (RED as expected) and updated the SOR implementation to make it GREEN. Our SOR.Route() method was now:

  1. Capturing/snapshotting all available quantities for the requested price (or better) across all the markets to infer a weighted-average ratio for every market.

  2. Splitting the original investor instruction into sub-Limit Orders (a kind of order book) routed on the various markets (following our computed ratio)

  3. Raising the Executed event once all sub Limit Orders are executed
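The splitting part of that strategy can be sketched as follows; this is an illustrative Python version (the real code is C#), and the remainder-handling choice at the end is an assumption of this sketch, not necessarily what the project did.

```python
def split_instruction(available_quantities, target_quantity):
    """Split a target quantity across markets proportionally to the
    quantity each market offers at the limit price (or better).

    `available_quantities` is one snapshotted quantity per market."""
    total = sum(available_quantities)
    allocations = []
    allocated = 0
    for quantity in available_quantities:
        # Note the float division: dividing two integers would
        # truncate the ratio (exactly the bug described below).
        part = round(target_quantity * (quantity / total))
        allocations.append(part)
        allocated += part
    # Assumption of this sketch: push any rounding remainder
    # onto the deepest market so nothing is lost.
    if allocated != target_quantity:
        deepest = available_quantities.index(max(available_quantities))
        allocations[deepest] += target_quantity - allocated
    return allocations
```

For instance, 120 shares requested against markets offering 50 and 100 are split 40/80, following the 1/3 vs 2/3 ratio.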

We implemented that 3-step strategy within the Route() method, but our test was still RED (all our tests, in fact). We tried to figure out why. We all felt it was a rounding or casting issue, but it was very difficult for us to confirm that intuition, and none of our changes within the code led the tests to GREEN...

Exasperated, Cyrille left the room ... (just kidding, he had another meeting to attend ;-)

Which left Tomasz and me with 3 options:

  1. To read the code and to find why (i.e. to debug in our heads)

  2. To debug one of our acceptance test

  3. To extract our routing strategy and to unit test it separately

Since we only had 10 minutes left, we finally decided to debug, and found the explanation of our problem very quickly: the ratio we computed to determine the quantity per market for Limit Order creation was computed the rookie way: OK, we had declared a decimal ratio, but we simply divided 2 integers to compute it. Sad Panda...
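The mistake is easy to reproduce in any language with integer division semantics; here it is in Python (the `//` operator stands in for what C# does when dividing two `int`s, and the function names are purely illustrative). Declaring the result as a decimal does not help if the truncation already happened on the operands.

```python
def naive_ratio(part, total):
    # Integer division: truncates to 0 before any assignment
    # to a decimal/float variable can save it.
    return part // total


def correct_ratio(part, total):
    # Promote at least one operand (a cast in C#) before dividing.
    return part / total
```

So `naive_ratio(50, 150)` is 0: every per-market quantity ends up zero, and no test can go green.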

Just enough time to fix all our acceptance tests in one move, and Ding Dong! End of the session.

Today we RED-ed and we GREEN-ed. Next session will probably start with the REFACTOR-ing of our SOR.Route() method (with lots of code smells inside).

After that refactoring, we will be ready to initiate our (performance) acceptance test harness, and to start digging into asynchronous implementations and various technical strategies.

But let's discover what the future holds for us in other sessions.


Day 7 (September 8th 2014)

A Logbook entry written by @tpierrain

We said it last week: this Monday we had to refactor our code. And this was the topic of the day with Cyrille and Ozgur, but without Tomasz this time (on holiday for the upcoming weeks).

If you remember what we did last summer (last week, in fact), the SmartRoutingEngine.Route() method was enormous, with too many responsibilities (like its SmartRoutingEngine type in general), and with some crappy event handler code inside. Enough! Time for action ;-)

Splitting the Routing method

We decided to start splitting the routing of an InvestorInstruction into various steps:

  1. Digest the Investor instruction

  2. Prepare a corresponding basket of orders to answer the investor's need (through a "solver" that is also an observer of market data)

  3. Send the orders and monitor them (e.g.: for execution)

  4. Feed back the result to the investor

Because the Order Book preparation concern and the Order "creation" concern were interlaced within our previous code, we had to introduce the concept of OrderInstruction (for step 2), a kind of TODO to create an order afterwards (in step 3). Doing so, we added the concept of an Order Basket, even if not through a dedicated type yet, but an IEnumerable variable instead (i.e. a kind of TODO list).

Introducing an InvestorInstructionSolver (strategy)

Our previous execution strategy being a "market sweep", we then introduced the concept of an IInvestorInstructionSolver for the step 2 job, and a MarketSweepSolver as the first implementation (holding our previously extracted code).

Making the Market aggregated by the Order

While we were continuing to improve the SOREngine.Route() method, we decided to replace the call to:

market.Send(limitOrder);

with a more logical:

limitOrder.Send();

Indeed, the Order being instantiated from a market instance, it makes no sense to allow an order created within market A to be sent on market B.
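The design intent can be sketched as follows (illustrative Python; names like `create_limit_order` are assumptions, not the actual C# API): the market acts as a factory that injects itself into the order, so `send()` needs no market argument and cannot target the wrong one.

```python
class Market:
    def __init__(self, name):
        self.name = name
        self.received = []

    def create_limit_order(self, price, quantity):
        # The factory injects `self`: the order is bound to this market.
        return LimitOrder(market=self, price=price, quantity=quantity)


class LimitOrder:
    def __init__(self, market, price, quantity):
        self._market = market
        self.price = price
        self.quantity = quantity

    def send(self):
        # No market parameter: sending on another market is impossible.
        self._market.received.append(self)
```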

Making the SOREngine capture the execution at the order level

And not at the market level anymore: that was our ultimate event plumbing step in this refactoring session.

Our refactoring todo list still has pending items (e.g. transforming the IEnumerable into an IOrder composite, testing/reviewing all our event unsubscriptions, ...), but let it wait till the next session.

And maybe we will be able to start digging into the next development tasks (i.e. adding acceptance tests with market rejection of orders, implementing the asynchronous nature of the markets, and building our global acceptance test harness).

TBC...


Day 8 (September 10th)

A Logbook entry written by @tpierrain

Tomasz out of the office, Ozgur busy (with a release), Cyrille preempted by his boss for a lunch... I initially thought that I brought my laptop for nothing today...

But when Cyrille got back to his desk at 1 PM, we finally decided to pair together on the SOR, even if it was for a coding session of 45 minutes only.

Thus no big deal today; we continued our previous refactoring session with the extraction of a new OrderBasket type (replacing our former IEnumerable variables).

Doing so, we continued to simplify our code, and to reduce the SmartOrderRoutingEngine (a.k.a. SOREngine) responsibilities.

Indeed, our new (IOrder "composite") OrderBasket type, once created and returned by the solver, offers us a single point of contact to Send() all its aggregated orders, or to subscribe to all their OrderExecuted events in one call only.
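As an illustration of that composite shape, here is a Python sketch (the real type is a C# IOrder composite; the `StubOrder` and the callback-list event below are assumptions standing in for C# events):

```python
class OrderBasket:
    """Composite: one Send() fan-out and one subscription point
    for all aggregated orders."""

    def __init__(self, orders):
        self._orders = orders

    def send(self):
        for order in self._orders:
            order.send()

    def subscribe_executed(self, callback):
        # One call subscribes to every aggregated order's
        # OrderExecuted-like notification.
        for order in self._orders:
            order.on_executed.append(callback)


class StubOrder:
    """Minimal order whose send() fires its executed callbacks."""

    def __init__(self, quantity):
        self.quantity = quantity
        self.on_executed = []

    def send(self):
        for callback in self.on_executed:
            callback(self.quantity)
```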

After a code review to ensure that we hadn't missed any event unsubscription, we fixed all the StyleCop issues we had, and Ding-Dong! Time to get back to our respective work.

Short, but fun ;-)

Day 9 (October 1st)

A Logbook entry written by @Cyrdup

Thomas is in Chicago this week, but the rest of the team is here, and we welcome a new member: @MendelMonteiro, who brought the laptop today. Having a new member, and coming back to the topic after a three-week break, meant we spent a significant part of our time discussing what had been done so far.

We then started addressing the first item in our backlog: handling potential order rejection from the market. Indeed, so far we had worked in an ideal world where our orders were always executed, which is far from true. The notification mechanism for failure was to be a C# event or a callback; there was a long discussion on how many events/callbacks were appropriate. We know that we will have different notifications: order execution (partial or total), order cancellation, order rejection.

YAGNI

We settled for a failure callback for the time being. Knowing we used an event for execution notification, it will be an experiment. Of course, the idea is to converge on a single notification API in the end.
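The two styles being compared can be sketched side by side; this is an illustrative Python version (callback lists stand in for C# events, and the capacity-based rejection rule is an invented example, not the project's actual rejection logic):

```python
class Order:
    def __init__(self, quantity, market_capacity):
        self.quantity = quantity
        self._capacity = market_capacity
        self.on_executed = []  # event-style notification (execution)

    def send(self, on_failure):  # callback-style notification (failure)
        if self.quantity > self._capacity:
            # Rejection goes through the callback passed to Send().
            on_failure("rejected: quantity exceeds market capacity")
            return
        # Execution goes through the subscribed event.
        for callback in self.on_executed:
            callback(self.quantity)
```

Mixing the two styles is deliberate for now: living with both for a while should tell us which one we want to keep.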

So we started by adjusting the existing test around order rejection (LargeMarketOrderShouldTriggerException), which was initially exception based. We barely had time to rewrite the code to account for the new Order.Send signature and make the test green. !DING! End of session. Yes, we can commit working code. But this is clearly work in progress.

SEE YOU NEXT WEEK