London Stock Exchange Rejects .NET For Open Source 498

ChiefMonkeyGrinder writes "This summer, the London Stock Exchange decided to move away from its Microsoft .Net-based trading platform, TradElect. Instead, they'll be using the GNU/Linux-based MillenniumIT system. The switch is a pretty savage indictment of the costs of a complex .Net system. The GNU/Linux-based software is also faster, and offers several other major benefits. The details provide some fascinating insights into the world of very high performance — and very expensive — enterprise systems. ... [R]ather than being just any old deal that Microsoft happened to lose, this really is something of a total rout, and in an extremely demanding and high-profile sector. Enterprise wins for GNU/Linux don't come much better than this."
This discussion has been archived. No new comments can be posted.

  • by volxdragon ( 1297215 ) on Tuesday October 06, 2009 @04:34PM (#29662477)

    Why is this news? Sun/Solaris dominated the high-end financial sector for ages... any exchange/trading house/equity firm/etc. that is using Windows is insane, IMHO. Linux is just the most recent Unix platform to show up in the sector; it's not revolutionary...

  • Not out of context (Score:5, Informative)

    by ameline ( 771895 ) <ian.ameline@Nospam.gmail.com> on Tuesday October 06, 2009 @04:44PM (#29662619) Homepage Journal

    How disingenuous.

    While it is only 2.3 ms faster in absolute terms, that is 0.4 ms versus 2.7 ms, making it 6.75 *times* faster.

    Sub-millisecond latency is a critical requirement for this application, and .NET on Windows just wasn't up to the task.

    As a performance expert, this doesn't surprise me. In my opinion, current .NET implementations are fundamentally unsuited to hard real-time work.

  • Still there (Score:5, Informative)

    by pavon ( 30274 ) on Tuesday October 06, 2009 @04:44PM (#29662625)

    Hehe :) For those who are interested, they still have an Infolect case study [microsoft.com] from 2006 posted on their site, which I believe was the precursor to TradElect.

  • by pavera ( 320634 ) on Tuesday October 06, 2009 @04:45PM (#29662639) Homepage Journal

    You didn't read the article did you?

    It was cheaper for them to buy the WHOLE COMPANY that had built this technology than it was to continue running/maintaining a .NET application. The .NET application was built and maintained by Accenture, who can hire cheap devs in India or Sri Lanka as easily as any other outsourced IT consultancy.

    Also, they specifically state, multiple times, that the .NET solution would not scale to meet their needs. The quoted stats are 2.7 ms per transaction in .NET, while the Linux app performs the same transaction in 0.4 ms... so the Linux system can handle 6-7 times the transactions on the same hardware.

    They are talking about scaling up from 100 million transactions a day to 5-6 billion, so, yeah, having to buy one-sixth the hardware will probably save them some cash.

  • by mvdwege ( 243851 ) <mvdwege@mail.com> on Tuesday October 06, 2009 @04:45PM (#29662647) Homepage Journal

    If you had read the earlier articles on the TradElect fiasco, you would have known that it was basically written and designed by Microsoft itself: Accenture ran the project, but with very heavy involvement straight from Redmond.

    So yes, this is an outright condemnation of the quality of Microsoft's products.

    Mart

  • Re:Still there (Score:5, Informative)

    by schon ( 31600 ) on Tuesday October 06, 2009 @04:48PM (#29662689)

    Heh.. I *love* this:

    Benefits

    One hundred per cent reliable on high-volume trading days

    Umm, yeah [guardian.co.uk].. for various definitions of the value "one hundred", right?

  • by Penguinisto ( 415985 ) on Tuesday October 06, 2009 @04:51PM (#29662719) Journal

    ...it's news because Microsoft bragged about .NET being in the LSE for a couple of years, pointing to it as proof that they were enterprise-ready and such.

    Then, at about this time last year, the TradElect system (the .NET bits that ran the LSE) went 'splat', taking the London Stock Exchange down with it.

    The relevant info should be sitting right there in TFA.

  • by QuantumRiff ( 120817 ) on Tuesday October 06, 2009 @04:53PM (#29662749)

    any exchange/trading house/equity firm/etc that is using Windows is insane IMHO

    You mean like an exchange that was the cornerstone of MS's advertisements for two years? About how .NET was so scalable it was used in the exchange, and SQL Server was so wonderful it was used in the exchange...

    Well, it was the cornerstone of advertising until the exchange had a few-day-long technical outage a year or so ago. That left people in the dark, and they had to suspend all trading for a few days... suddenly, the ads stopped.

  • Re:It's just a VM (Score:2, Informative)

    by bjourne ( 1034822 ) on Tuesday October 06, 2009 @04:58PM (#29662823) Homepage Journal

    It should also come as absolutely no surprise that a C++ pointer based linked list running native locally on the OS performs faster than a .Net Generics List running as CLR in the .Net run-time environment.

    This is incorrect, just like the rest of your message. A list in a high-level language is almost always faster than a linked list in C++, because it is array-backed: the whole list is one contiguous block of memory, and iterating it means advancing a pointer by sizeof(int) bytes each iteration. Iterating a linked list, on the other hand, means jumping around in memory following a pointer for each element, which is much slower and incurs an enormous cache penalty.

    This is basic stuff, and it just shows how futile using low-level languages is when people don't even understand why linked lists are slower than array lists...
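
    As a neutral illustration of the two access patterns under debate, here is a minimal C# sketch (names and sizes are invented for the example; timings are machine-dependent): summing a contiguous int[] walks memory sequentially, while summing a node-based list performs one dependent pointer load per element.

        // Illustrative only: contiguous iteration vs. pointer chasing.
        using System;
        using System.Diagnostics;

        class Node { public int Value; public Node Next; }

        static class TraversalDemo
        {
            static void Main()
            {
                const int n = 10000000;
                var array = new int[n];                       // one contiguous block
                Node head = null;
                for (int i = n - 1; i >= 0; i--)              // one heap object per element
                    head = new Node { Value = i, Next = head };
                for (int i = 0; i < n; i++) array[i] = i;

                var sw = Stopwatch.StartNew();
                long sum = 0;
                for (int i = 0; i < n; i++) sum += array[i];  // sequential, cache-friendly
                Console.WriteLine("array: " + sw.ElapsedMilliseconds + " ms, sum=" + sum);

                sw.Restart();
                sum = 0;
                for (var p = head; p != null; p = p.Next)     // jumps around the heap
                    sum += p.Value;
                Console.WriteLine("list:  " + sw.ElapsedMilliseconds + " ms, sum=" + sum);
            }
        }

    Whether the contiguous case also avoids per-element indirection depends on the element type, a point taken up in the replies below.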

  • by DragonWriter ( 970822 ) on Tuesday October 06, 2009 @05:00PM (#29662849)

    Why is this news? Sun/Solaris dominated the high-end financial sector for ages...any exchange/trading house/equity firm/etc that is using Windows is insane IMHO.

    It's news because, in fact -- whether or not it was "insane" to do so -- the London Stock Exchange was relying on Windows, .NET, and other Microsoft products: "As part of its strategy to win more trading business and new customers, the London Stock Exchange needed a scalable, reliable, high-performance stock exchange ticker plant to replace its earlier system. Roughly 40 per cent of the Exchange's revenues are generated by the sale of real-time information about stock prices. Using the Microsoft® .NET Framework in Windows Server® 2003 and the Microsoft SQL Server(TM) 2000 database, the new Infolect® system has been built to achieve unprecedented levels of performance, availability, and business agility. Launched in September 2005, it is maintaining the London Stock Exchange's world-leading service reliability record while reducing latency by a factor of 15. Its successful implementation, with support from Microsoft and Accenture, shows the London Stock Exchange's leadership in developing next-generation trading systems." (source: Microsoft [microsoft.com].)

  • by DragonWriter ( 970822 ) on Tuesday October 06, 2009 @05:02PM (#29662865)

    the LSE track record on technology is either funny or shameful, but certainly I would count them picking your system as an "endorsement" of the same order as being Bernie Madoff's accountant...

    Perhaps you see it that way, but Microsoft clearly disagrees [microsoft.com].

  • by pavera ( 320634 ) on Tuesday October 06, 2009 @05:02PM (#29662881) Homepage Journal

    Uhh... I've never seen this level of failure to RTFA, and... man, this is Slashdot, where that is the norm.

    The LSE ALREADY ENTERED A PURCHASE AGREEMENT TO BUY THE COMPANY that ALREADY BUILT A TRADING PLATFORM THAT IS BEING USED TODAY IN OTHER EXCHANGES! The deal closes in the next week or two. The article says 95% of the "non-refundable" parts of the deal have already been transacted. Neither the LSE nor MilleniumIT (the Sri Lankan company being purchased) is walking away from this deal.

    You don't spend $30 million and purchase a company if you aren't moving your software to that platform. The article states they already ran a trial phase: they originally brought in 20 platforms, shortlisted 4, ran those for a period, and MilleniumIT won. They then decided to purchase the entire company. This process is much further along than you seem to think.

  • Missed the point... (Score:3, Informative)

    by RingDev ( 879105 ) on Tuesday October 06, 2009 @05:03PM (#29662893) Homepage Journal

    How disingenuous.

    Hardly. My complaint isn't about the TradElect software's performance. It was slower. But why was it slower? Is the implementation crap? Could it be redesigned to run faster while still running on the .Net framework? Or is it the inherent lag of running inside a sandbox that prevents it from executing as fast as the "GNU/Linux" solution?

    My complaint is that the author is roasting the .Net platform as compared to "GNU/Linux". That is like comparing the performance of Java to OS/2: one is a programming platform, the other is an OS.

    The author quotes valid complaints about the TradElect system from the article, and then extrapolates from LSE's valid concerns with TradElect that the .Net platform is inferior in all regards, although he never explains what programming platform he believes .Net to be inferior to.

    That said, if I were working on a system that depended on sub-millisecond execution of complex functionality, I probably wouldn't go with .Net either. Running inside a VM-like sandbox means you are going to have worse performance than a natively compiled and executed application.

    -Rick

  • by pavera ( 320634 ) on Tuesday October 06, 2009 @05:06PM (#29662919) Homepage Journal

    You are not accurate. The LSE bought a dev shop that ALREADY BUILT A TRADING PLATFORM that is being used today in other exchanges. The platform in question ALREADY achieves 6 times the performance of their existing platform (built by Accenture), and has MORE FEATURES.

    And they are moving from an outsourced dev model to an in-house model, as they now own the devs and the software. Sure, the devs are still in Sri Lanka, but Accenture could just as easily hire people in India or Sri Lanka to get the same cost savings.

  • by ClosedSource ( 238333 ) on Tuesday October 06, 2009 @05:14PM (#29663053)

    If you think applications like this are examples of "hard RT", then the "RT" you refer to doesn't stand for "Real Time".

  • by DragonWriter ( 970822 ) on Tuesday October 06, 2009 @05:16PM (#29663075)

    From what I understand, it was the app that sucked. Why is this then a stinging indictment of the platform?

    Because Microsoft used the app and its supposed superiority in the area it was deployed as a major case study in the strength of the Microsoft programming and platform components used in the implementation: the products called out in their case study include the .NET Framework, Windows Server, SQL Server, Visual Studio .NET, Microsoft Operations Manager, ASP.NET, and Visual C# .NET (I may have missed some.) A quote: "In a highly competitive environment, the London Stock Exchange is gaining a considerable cost advantage over its competitors by using the .NET Framework and Visual C# .NET." (source: Microsoft [microsoft.com].)

    If it says something about the platform when you are getting the sale (because it outperforms the preceding implementation), then it also says something about the platform when the Exchange looks to replace you with something that outperforms your implementation.

  • Re:Wall Street (Score:5, Informative)

    by IANAAC ( 692242 ) on Tuesday October 06, 2009 @05:19PM (#29663119)
    They never were with Microsoft, at least not in the Chicago Operations Center when I worked for them. They were pretty hard-core Solaris, and slowly began switching their systems to Linux.
  • Re:How fast (Score:3, Informative)

    by Red Flayer ( 890720 ) on Tuesday October 06, 2009 @05:21PM (#29663137) Journal

    This was more "switched from Accenture to running it `in house' in the form of a large team of low-paid talent in Sri Lanka" way more than it was "abandoned .NET for Linux! Rah rah rah!". The fact that people are hilariously so focused on the latter while missing the former speaks to how incredibly myopic people can be.

    Horseshit. This is switching from "Accenture writing a slow, unstable trading platform with .NET via cheap labor in India" to "buying the company that produces a fast, stable platform on Linux via cheap labor in Sri Lanka".

    You're right that it's not a closed-vs-open-source ideological move. It is, however, yet another demonstration that .NET is not a suitable platform for large-scale, low-latency transactional systems... and that Linux-as-an-OS has better offerings in this market.

  • Re:It's just a VM (Score:1, Informative)

    by Anonymous Coward on Tuesday October 06, 2009 @05:22PM (#29663165)

    I used to work for what was, at the time, the 4th largest clearing firm in the USA. We ran MS SQL and were processing 4M transactions/second. MS used our company in their brochures for high-performance transactional databases for years. I remember we chose MS because their offering was significantly faster per transaction.

    Being federally regulated, we had multiple redundant systems in place, and had we or any of our partners suffered an all-day outage like the LSE's, I'm quite sure we would have been liable.

  • Re:How fast (Score:1, Informative)

    by ergo98 ( 9391 ) on Tuesday October 06, 2009 @05:26PM (#29663209) Homepage Journal

    Horseshit. This is switching from "Accenture writing a slow unstable trading platform with .NET via cheap labor in India" to "buying the company that produces a fast, stable platform on Linux via cheap labor in Sri Lanka".

    And you're the one telling me to use Google?

    When the LSE switched to the .NET solution, they heralded the fact that it dropped their trading times from 130+ ms to 10 ms. Now that they're switching from .NET to this new solution, they're saying it brings them from 2.7 ms to 0.4 ms.

    As far as reliability goes, the LSE purportedly had a single day of problems, for reasons that were never specified. That was enough for many to go "AHA! .NET!", but of course that's because they're ignorant morons who immediately demonstrate how little their opinion is worth. Failures at banks, airports, national systems, and so on have happened across the gamut of platforms and systems, so it's marvelously telling when people demonstrate their ignorance.

  • Re:Wall Street (Score:2, Informative)

    by harlows_monkeys ( 106428 ) on Tuesday October 06, 2009 @05:26PM (#29663211) Homepage

    Didn't the New York Stock Exchange move over to Linux because Microsoft couldn't provide a good, low-latency RT kernel? They begged Microsoft, wanted to stay with Microsoft, and Microsoft couldn't provide them with a solution

    No. They are still running Windows. They are also running Linux.

    The big stock exchanges have a variety of largely independent systems for different functions and different kinds of markets. They moved one such system from mainframes to Windows (running COBOL!) several years ago, and are still using it. More recently, they moved a different system to Linux.

  • Re:How fast (Score:4, Informative)

    by Red Flayer ( 890720 ) on Tuesday October 06, 2009 @05:40PM (#29663379) Journal

    and the "pie in the sky" element is that one of the reasons they decided to acquire this company is because now they have stars in their eyes about the great things they are going to do.

    Pie-in-the-sky is unobtainable by definition. Are you claiming that the LSE won't be able to implement a trading platform with lower latency and better uptime than their current system? Or are you just claiming that the LSE and MilleniumIT are being a little too optimistic in their press releases? Because the latter is probably true.

    Gosh, you got it all covered there. I guess you provided a savage indictment of my post. Or maybe I'm actually a realist, and see a lot of people doing a hilarious happy dance far too prematurely. That's what she said!

    You made a very generic post about pie-in-the-sky cheap outsourcing to Sri Lanka. You appeared to have little to no actual knowledge of the subject, since none was communicated in your post (except the mention of Sri Lanka, which was gleanable from the first comment on the article on the site where it was originally posted). You do not appear to be familiar with MilleniumIT.

    You call yourself a realist... yet a realistic perspective depends on knowledge of the subject. It's well known that most trading platforms are faster than the piece of crap the LSE had... often more than 25 ms faster, which meant it was faster to trade on Euronext.

    But you know... whatever man... you can try to backtrack and defend your reactionary post however you want... you simply made claims that don't stack up to reality.

  • by e2d2 ( 115622 ) on Tuesday October 06, 2009 @05:40PM (#29663385)

    Yeah, we're a Microsoft "Partner" where I work, too. It means free software and great support. What it does not mean is that they write our software, so I'm a bit skeptical of them actually putting their "in house" stamp on it. It sounds like marketing spin from when the going was good.

  • Re:It's just a VM (Score:5, Informative)

    by Joe Mucchiello ( 1030 ) on Tuesday October 06, 2009 @05:42PM (#29663413) Homepage

    Yes, the list is contiguous in memory, but that list is just a list of object pointers. The data is scattered around the heap just like the linked list data is. Fast access to the object pointer does not yield any speed boost. In C++ you could create an array of actual objects; then all the objects are contiguous in memory, and moving to the next object means incrementing a pointer by sizeof(theObject). For small objects, you might stay within the memory cache on each increment. The managed object system most likely cannot put the actual objects into contiguous memory, so you still get the cache misses when dereferencing the object pointers.

    So, tell us again who understands access characteristics of linked lists and array lists better?

  • Re:Awesome. (Score:1, Informative)

    by Anonymous Coward on Tuesday October 06, 2009 @05:43PM (#29663425)

    Nothing says "I have arrived, and I am a huge nerd" like a porcelain fountain filled with blood-red UV-reactive nonconducting coolant that is also piped through your computer on your teak desktop.

  • by kramulous ( 977841 ) on Tuesday October 06, 2009 @06:02PM (#29663623)

    You're both correct.

    Won't it depend on the typical behaviour of the system at hand? Program for the typical case, but prepare a single memcpy (not that I use them) for the three-standard-deviations case (and one for the outlier). Copying the memory, while expensive, *may* not be as expensive as extensive cache missing in the typical case.

  • by vertinox ( 846076 ) on Tuesday October 06, 2009 @06:03PM (#29663637)

    SEC Proposes Ban on Allowing Stock Flash Orders [bloomberg.com] (dated September 19th 2009)

    Democratic Senators Charles Schumer and Ted Kaufman urged the commission to halt the practice, arguing frequent traders use technology to profit from access to information not available to retail investors.

    Flash traders have direct connections to the NYSE exchange and pay large sums just for the bandwidth to make sure their trades are almost real-time. Goldman Sachs is a key participant in this.

    That said, their trades often involve no human interaction: they are generally computers following trading algorithms, only a block away from the exchange with a direct fiber line to the office. It would be impossible otherwise.

    Some traders have been raising a stink over this, but generally the milliseconds do count.

    From http://seekingalpha.com/article/150397-flash-trading-goldman-sachs-front-running-everyone-else [seekingalpha.com]

    The maximum allowable time for a flash is 500 milliseconds, or half a second, although most of the markets flash routable orders for under 30 milliseconds.

    Of course, I don't know how the LSE handles flash trading or even whether it wants it, but I'm going to assume they need everything to be as real-time as possible. You just don't hear the financial firms complaining about the disparities, simply because they have the money to set up their servers pretty much next to the exchange itself (if not in the same building).

  • Re:It's just a VM (Score:5, Informative)

    by shutdown -p now ( 807394 ) on Tuesday October 06, 2009 @06:07PM (#29663675) Journal

    If you're trying to shave run time on complex functions down to sub millisecond times, I would expect that bounds checking, type safety, and thread safety are low on your concerns.

    Curiously enough, C# lets you drop both bounds checking and type safety to exactly the same extent [microsoft.com] as plain C, with corresponding performance gains.

    It should also come as absolutely no surprise that a C++ pointer based linked list running native locally on the OS performs faster than a .Net Generics List running as CLR in the .Net run-time environment.

    What do you mean by "performs faster"? Iteration? Indexing? Insertion at the front? Insertion at the end? Removal? This is a surprisingly vague statement...

    I can bet you $1000 that System.Collections.Generic.List&lt;int&gt; will significantly outperform std::list&lt;int&gt; on indexed access for lists of significant size, for example, simply because the former is array-backed and the latter is a doubly linked list. This is just to show how meaningless your comparison is.

    Now, yes, if you write idiomatic C# code for a linked list (using GC-heap-allocated objects and tracked references), it will be slower than equivalent C++ code because of all the safety checks (like null checks). But, of course, you can also use C# raw pointers and structs to write the exact same code you would write for a linked list in C, and it would run just as fast, since it compiles to pretty much the same native code in the end; see the sketch below.
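
    For the record, a hedged sketch of that last point (illustrative only; it requires an unsafe context and compiling with /unsafe): the node layout and the traversal loop mirror what you would write in C, and no GC-tracked references are involved.

        // Illustrative: a C-style singly linked list in unsafe C#.
        // All nodes live in one native allocation, outside the GC heap.
        using System;
        using System.Runtime.InteropServices;

        unsafe struct CNode { public int Value; public CNode* Next; }

        static class UnsafeListDemo
        {
            static unsafe void Main()
            {
                const int n = 1000;
                CNode* nodes = (CNode*)Marshal.AllocHGlobal(n * sizeof(CNode));
                for (int i = 0; i < n; i++)
                {
                    nodes[i].Value = i;
                    nodes[i].Next = (i + 1 < n) ? &nodes[i + 1] : null;
                }

                long sum = 0;
                for (CNode* p = nodes; p != null; p = p->Next) // the same loop as in C
                    sum += p->Value;

                Console.WriteLine(sum); // 499500
                Marshal.FreeHGlobal((IntPtr)nodes);
            }
        }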

  • Re:How fast (Score:5, Informative)

    by Red Flayer ( 890720 ) on Tuesday October 06, 2009 @06:09PM (#29663693) Journal

    As far as reliability, purportedly the LSE had a single day of problems caused by never qualified reasons.

    Purportedly a single day of problems?

    The exchange shut down during a high-volume trading session. That's not purported; that's fact. What's purported is the number of times HVTs observed execution delays on the LSE at other high-volume times... and that's one reason Euronext has been claiming increasing market share from the LSE.

  • Re:Touche! (Score:5, Informative)

    by shutdown -p now ( 807394 ) on Tuesday October 06, 2009 @06:12PM (#29663725) Journal

    A generic list, even if it is array-based, is going to be an array on the stack of pointers to other parts of the stack and the heap.

    If you use STL, then std::vector will also allocate its backing-store array on the heap.

    On the other hand, if you use C#, you can use stackalloc [microsoft.com] to get a stack-allocated, non-GC-tracked array.

    Managed .NET arrays (as opposed to stackalloc or unmanaged heap-allocated ones) will still be slower, because there are bounds checks on element access (though the JIT can sometimes eliminate them when it sees that they can never fail).

    Mutable generic collection classes are slower still, because they also have safeguards, such as throwing an exception if you get an enumerator for a collection, then remove an item from that collection, and then try to move the enumerator (whereas in C++, doing the same thing to a vector would just render all active iterators invalid, and their use would lead to a crash at best and silent data corruption at worst). This is achieved by storing a "version number" for the collection (just a plain int) which is incremented on every insertion/removal, and which enumerators check against every time you move them. Naturally, this increment happening on every insert also slows things down. Both points are sketched below.
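
    Two tiny sketches of the points above (illustrative; the stackalloc part needs an unsafe context and /unsafe):

        using System;
        using System.Collections.Generic;

        static class SafeguardDemo
        {
            static unsafe void Main()
            {
                // 1. A stack-allocated, non-GC-tracked buffer; pointer
                //    indexing here carries no bounds checks.
                int* buf = stackalloc int[256];
                for (int i = 0; i < 256; i++) buf[i] = i * i;
                Console.WriteLine(buf[255]); // 65025; gone when the frame pops

                // 2. The collection "version number" safeguard in action.
                var list = new List<int> { 1, 2, 3 };
                var e = list.GetEnumerator();
                e.MoveNext();
                list.RemoveAt(0); // bumps the list's internal version
                try { e.MoveNext(); }
                catch (InvalidOperationException)
                {
                    Console.WriteLine("enumerator invalidated, as described above");
                }
            }
        }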

  • by kramulous ( 977841 ) on Tuesday October 06, 2009 @06:13PM (#29663733)

    NEWSFLASH!

    Journalist doesn't fully understand computer technologies.

    Are you really that surprised?

  • Re:It's just a VM (Score:5, Informative)

    by shutdown -p now ( 807394 ) on Tuesday October 06, 2009 @06:14PM (#29663747) Journal

    Yes, the list is contiguous in memory but that list is just a list of object pointers ... In C++ you could create an array of actual objects and then all the objects are contiguous in memory and incrementing to the next object is incrementing a pointer by sizeof(theObject). For small objects, you might be within the range of the memory cache on each increment. The managed object system most likely cannot possibly put the actual objects into contiguous memory and so you still have the cache misses when dereferencing the object pointers.

    Not necessarily: this isn't true for primitive types like int or float, and it isn't true for user-defined structs.

    Unlike Java, C# lets you define your own value types that don't have to be heap-allocated. For such types, the exact same technique you describe for C++ can be used; see the sketch below.
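
    A minimal sketch of the distinction (type names invented): a List of a user-defined struct keeps the values themselves in its contiguous backing array, while a List of a class keeps only references there.

        using System;
        using System.Collections.Generic;

        struct PointS { public int X, Y; } // value type: stored inline in the backing array
        class  PointC { public int X, Y; } // reference type: the array holds references only

        static class LayoutDemo
        {
            static void Main()
            {
                var structs = new List<PointS>();
                var classes = new List<PointC>();
                for (int i = 0; i < 3; i++)
                {
                    structs.Add(new PointS { X = i, Y = i }); // copied into contiguous storage
                    classes.Add(new PointC { X = i, Y = i }); // object lands somewhere on the GC heap
                }
                // Iterating 'structs' touches one contiguous block;
                // iterating 'classes' dereferences a pointer per element.
                foreach (var p in structs) Console.WriteLine(p.X);
            }
        }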

  • Re:How fast (Score:4, Informative)

    by poopdeville ( 841677 ) on Tuesday October 06, 2009 @06:15PM (#29663759)

    LSE isn't going to run setup.exe (sorry ./setup.so), they're going to have to do some large-scale integration work and customization to make it work with their system...?

    Huh? Trading platforms are trivial applications. Send data down the wire. Commit it. Get data back. Typically, these systems have multiple servers per stock offered at the exchange, each of them acting as a market maker/auctioneer to the others (trivial: a 10 KB binary can do it, VERY QUICKLY). Each machine buffers trading history until it can be sent to the clearing house.

    There's little need to "customize" Linux. Linux already deals with the networking part just fine.

    The issue is writing the software in an easily maintainable, testable, and rigorously provable language. Credit Suisse is using Haskell for this purpose, very successfully. The only real difficulty is implementing the exchange rules for sorting the stock orders. That is going to be a real issue in any language; sorting large sets is always expensive (but can be done in parallel). A toy sketch of such a rule follows.
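
    As a toy example of the order-sorting rule mentioned above (price-time priority is a common convention; this is a hypothetical sketch, not any exchange's actual design), a buy book sorts on highest price first, with earlier arrival breaking ties:

        using System;
        using System.Collections.Generic;

        struct Order { public decimal Price; public long Seq; public int Qty; }

        // Price-time priority for bids: higher price first, then earlier arrival.
        class BuyPriority : IComparer<Order>
        {
            public int Compare(Order a, Order b)
            {
                int byPrice = b.Price.CompareTo(a.Price);
                return byPrice != 0 ? byPrice : a.Seq.CompareTo(b.Seq);
            }
        }

        static class BookDemo
        {
            static void Main()
            {
                var book = new List<Order>
                {
                    new Order { Price = 101.5m, Seq = 2, Qty = 100 },
                    new Order { Price = 102.0m, Seq = 3, Qty = 50 },
                    new Order { Price = 101.5m, Seq = 1, Qty = 200 },
                };
                book.Sort(new BuyPriority());
                foreach (var o in book)
                    Console.WriteLine(o.Price + " seq=" + o.Seq + " qty=" + o.Qty);
                // 102.0 prints first; the two 101.5 bids keep arrival order (seq 1, then 2).
            }
        }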

  • Re:It's just a VM (Score:3, Informative)

    by shutdown -p now ( 807394 ) on Tuesday October 06, 2009 @06:17PM (#29663777) Journal

    The .NET JIT is fairly pessimistic and generally simple in the kinds of optimizations it performs. The reason is that .NET doesn't have a bytecode interpreter at all, only a JIT; the JIT therefore has to be reasonably fast, or too much time would be wasted on compilation alone, so it cannot be too sophisticated.

    In contrast, Java HotSpot has both a bytecode interpreter and a JIT. The interpreter is used by default, with the JIT triggered by frequent invocation of a particular method, and only for that method. Because of that, the Java JIT can, and does, perform much more sophisticated optimizations (such as escape analysis for stack allocation of objects).

  • Re:Wall Street (Score:3, Informative)

    by FranTaylor ( 164577 ) on Tuesday October 06, 2009 @06:38PM (#29663981)

    "wanted to stay with Microsoft"

    Where did that come from? The NYSE has used Motif-based apps on HP-UX and Linux for years and years.

  • by SuperKendall ( 25149 ) on Tuesday October 06, 2009 @06:43PM (#29664041)

    Won't it depend on typical behaviour of the system at hand?

    Yes, which is why I raised the point that in C++ you can choose the backing for your linked list: back it with an array for iteration performance, or with truly linked nodes for better insertion properties.

  • Re:It's just a VM (Score:2, Informative)

    by msclrhd ( 1211086 ) on Tuesday October 06, 2009 @06:49PM (#29664081)

    The reason iterating through a list is likely to be slower than walking through contiguous memory is that list traversal is likely to require more memory reads. Contiguous memory access keeps the data in the L2 cache longer, so the CPU is not left waiting on memory.

    There are techniques for lists, like using a pool allocator, that is, allocating list items in blocks of n, so you are more likely to keep them in the L2 cache. In other words, it all depends on how the data structures are implemented; one such approach is sketched below.
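
    A minimal sketch of that idea in a managed language (names invented): keep the nodes in one array ("the pool") and link them by index, so traversal stays inside a single contiguous allocation no matter the insertion order.

        using System;

        struct PoolNode { public int Value; public int Next; } // Next = -1 marks the end

        // Illustrative index-linked list: node storage never leaves 'pool'.
        class PooledList
        {
            private readonly PoolNode[] pool;
            private int count;
            public int Head = -1;

            public PooledList(int capacity) { pool = new PoolNode[capacity]; }

            public void PushFront(int value)
            {
                pool[count] = new PoolNode { Value = value, Next = Head };
                Head = count++;
            }

            public long Sum()
            {
                long sum = 0;
                for (int i = Head; i != -1; i = pool[i].Next)
                    sum += pool[i].Value; // index hops stay within one block
                return sum;
            }
        }

        static class PoolDemo
        {
            static void Main()
            {
                var list = new PooledList(1000);
                for (int i = 0; i < 1000; i++) list.PushFront(i);
                Console.WriteLine(list.Sum()); // 499500
            }
        }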

  • Re:How fast (Score:4, Informative)

    by knarf ( 34928 ) on Tuesday October 06, 2009 @07:00PM (#29664173)

    I find it humorous how quickly so many want to bask in the glow of this, using it as proof of something, when I'm fairly certain that it was discarded as proof of nothing when the LSE first went the .NET route.

    Well, someone [microsoft.com] certainly thought LSE was proof of something, why otherwise would they have bragged [microsoft.com] about it? Now that that bragging has been shown [guardian.co.uk] to be moot [computerworlduk.com] surely you can understand this modest amount of schadenfreude?

  • Re:How fast (Score:4, Informative)

    by gbjbaanb ( 229885 ) on Tuesday October 06, 2009 @07:26PM (#29664419)

    because, if you're going to write a trading platform that truly shows just how good .NET is, you'll want to get Microsoft to show you how. They wrote .NET, after all; if they can't do it, then no one can.

    So true.

    In the end, the TradElect platform cost £40m. Buying the entire MilleniumIT company cost $30m. Note the currency symbols.

  • Re:It's just a VM (Score:3, Informative)

    by shutdown -p now ( 807394 ) on Tuesday October 06, 2009 @07:39PM (#29664555) Journal

    Incidentally, you could use a shared_ptr and a vector to get the exact same behaviour of your C# list, including the same array-of-ptrs indexing, and the same cache misses for every object access.

    Where do you get the notion that a C# List is an "array of pointers"? That's simply false. If you use a reference type there, then naturally it will be, but you don't have to use a reference type in a C# List, just as you don't have to store pointers in a std::vector. Use a C# struct, and you'll get the same contiguous memory block.

    Also, if you actually use vector&lt;shared_ptr&gt;, it will quite likely be slower on insertion than a C# List of a reference type, depending on the C++ implementation, because of the unnecessary refcount updates on vector reallocation. Some implementations are smart enough to specifically optimize vector&lt;shared_ptr&gt; to avoid this (Visual C++ does, if you use its TR1 implementation of shared_ptr), but some aren't.

  • Re:De Icaza Responds (Score:2, Informative)

    by punzada ( 1557247 ) on Tuesday October 06, 2009 @08:25PM (#29664887)
    Why are you walking them through anything? You have SSH for that.
  • Comment removed (Score:3, Informative)

    by account_deleted ( 4530225 ) on Tuesday October 06, 2009 @10:06PM (#29665539)
    Comment removed based on user account deletion
  • Re:It's just a VM (Score:3, Informative)

    by shutdown -p now ( 807394 ) on Tuesday October 06, 2009 @10:12PM (#29665583) Journal

    This is part correct, part wrong, and part outdated. In particular, branching by itself never blocked inlining, though the complexity that results from it may. Loops (i.e., any potentially iterative branching instruction; there is no if/else or do/while/for at the IL level) are not inlined. The detailed list is here [msdn.com], but note that it is pre-SP1.

    The JIT inliner was made more aggressive [microsoft.com] in 3.5 SP1, and will, in particular, inline methods with struct parameters.

  • Re:How fast (Score:3, Informative)

    by Alpha830RulZ ( 939527 ) on Tuesday October 06, 2009 @10:52PM (#29665831)

    why didn't the LSE hire a team to develop a .NET system in-house, then?

    My takeaway from the article (yeah, I know, I read it, sorry) was that they bought the company because it had a solution already built. I'd be surprised if the platform used was anywhere near as important as functionality and performance. Which is as it should be.

    Now, it would be interesting to understand the history of the purchased outfit: how did they arrive at their decisions, and what would they do differently?

  • by Anonymous Coward on Tuesday October 06, 2009 @11:07PM (#29665927)

    ...it's news because Microsoft bragged on .NET being in the LSE for a couple of years, pointing to it as proof that they were enterprise-ready and such.

    Then at about this time last year, the TradElect system (which was the .NET bits which ran the LSE) went 'splat', taking the London Stock Exchange down with it.

    The relevant info should be sitting right there in TFA.

    Google Apps/Gmail has gone down multiple times in the past several years. I suppose that means Linux sucks? I mean, sheesh, this Microsoft solution only went down one time in 3 years. Google/Linux couldn't even handle the lower-volume, less-stressed, less mission-critical email market...

    Implementation is the important factor.

    Oh, and it's not like the Linux-based NYSE has never [huffingtonpost.com] had an issue [reuters.com]. Or the Frankfurt Exchange [wsj.com]. Or the Australian Exchange [zdnet.com.au]. Or the Moscow Exchange [washingtonpost.com]. Or the Tokyo Exchange [washingtonpost.com].

    In fact, we have had 8 major failures of Linux-based exchanges this decade.

  • Re:It's just a VM (Score:4, Informative)

    by shutdown -p now ( 807394 ) on Tuesday October 06, 2009 @11:13PM (#29665961) Journal

    This is all nice and stuff in theory. Every so often, people like to argue that code running under a VM such as Java or C# with .NET is "as fast" as or faster than machine-compiled code from C or C++ because of JIT and runtime optimizations and whatnot.

    In case you haven't noticed, I'm not arguing that. I'm arguing that C# has all the low-level operations that C has, which lets you write C# code at the same level of abstraction as C code. Naturally, pointer arithmetic gets compiled to the same native instructions in any language. The optimizer can improve things somewhat, and I won't argue that the .NET JIT optimizer is on par with, say, gcc, but that difference is very circumstantial, and small even in the worst cases.

    Unfortunately, the reality just doesn't follow the theory. In real-world benchmarks, managed code is not faster than pre-compiled machine code. Period.

    I never claimed it's faster, either. It is still slower even with the hand-tuning I've mentioned, simply because you cannot kill GC entirely (though if you never allocate from the managed heap, the GC will simply never run).

    Also, please don't drag Java into this. Java and C# are two very different languages by now, with C# having a much richer feature set, which is very relevant to this discussion, since parts of that feature set are what enable C-like performance when needed. Furthermore, the two most common runtime implementations for those languages, Microsoft .NET and Sun's JVM, have radically different implementation strategies. As such, you cannot meaningfully translate your Java/JVM experience to C#/.NET, or vice versa.

  • Re:De Icaza Responds (Score:4, Informative)

    by the_womble ( 580291 ) on Wednesday October 07, 2009 @12:52AM (#29666507) Homepage Journal

    On the other hand, why have Millennium and everyone else who has developed a reasonably good trading platform chosen to use something other than Windows? (Millennium originally used Solaris, by the way.)

    Of course the LSE is not interested in using open source, but the fact is that an open source OS was the best solution they could find.

    I used to work for Millennium. I have already blogged about what I thought of the deal [moneyterms.co.uk].

  • by TheSunborn ( 68004 ) <mtilstedNO@SPAMgmail.com> on Wednesday October 07, 2009 @02:53AM (#29667085)
    Looking at your benchmark: I knew that Microsoft had neglected their C++ compiler, but I did not know it was that bad.
    Here is the code from gcc 4.4.1 with -O3 -S (I took the code from your website and added the needed &lt;int&gt; to the vector).

    I will admit I am not that good at reading asm, but it seems to me that gcc generates exactly the same loop code for the STL case (the code between .L3 and .L4) as Microsoft's compiler does for the array case.

    So you might update your conclusion to say: STL bloats your code if you don't use a compiler that can inline and optimize it.

         .file    "data.cpp"
        .text
        .p2align 4,,15
    .globl _Z21populateWithStdVectorRSt6vectorIiSaIiEEi
        .type    _Z21populateWithStdVectorRSt6vectorIiSaIiEEi, @function
    _Z21populateWithStdVectorRSt6vectorIiSaIiEEi:
    .LFB435:
        .cfi_startproc
        .cfi_personality 0x0,__gxx_personality_v0
        pushl    %ebp
        .cfi_def_cfa_offset 8
        movl    %esp, %ebp
        .cfi_offset 5, -8
        .cfi_def_cfa_register 5
        movl    12(%ebp), %ecx
        pushl    %ebx
        testl    %ecx, %ecx
        jle    .L4
        .cfi_offset 3, -12
        movl    8(%ebp), %eax
        movl    (%eax), %ebx
        xorl    %eax, %eax
        .p2align 4,,7
        .p2align 3
    .L3:
        movl    %eax, %edx
        imull    %eax, %edx
        movl    %edx, (%ebx,%eax,4)
        addl    $1, %eax
        cmpl    %ecx, %eax
        jne    .L3
    .L4:
        popl    %ebx
        popl    %ebp
        ret
        .cfi_endproc
    .LFE435:
        .size    _Z21populateWithStdVectorRSt6vectorIiSaIiEEi, .-_Z21populateWithStdVectorRSt6vectorIiSaIiEEi
        .ident    "GCC: (GNU) 4.4.1 20090725 (Red Hat 4.4.1-2)"
        .section    .note.GNU-stack,"",@progbits

    data.cpp (the code I compiled):

        #include <vector>

        void populateWithStdVector( std::vector<int>& v, int length )
        {
            for (int i = 0; i < length; i++) {
                v[i] = i * i;
            }
        }
  • Re:De Icaza Responds (Score:4, Informative)

    by Xest ( 935314 ) on Wednesday October 07, 2009 @05:54AM (#29667751)

    "It doesn't take a rocket scientist to work out that a GC-based, VM-based language that has layers of intermediate execution is going to be slower than is required for a trading system."

    Actually, this is only true in an ever decreasing set of circumstances.

    See here for an explanation of some of the common reasons why this is often not the case:

    http://www.idiom.com/~zilla/Computer/javaCbenchmark.html [idiom.com]

    Also here are some benchmarks:

    http://kano.net/javabench/ [kano.net]

    These sites are focused on Java, but the points apply to .NET as well, since it's on par nowadays. In .NET you also get the option of using unmanaged code, so you can have areas that don't require the VM to underlie execution.

    I'd imagine the real problem in this case was a combination of poor project management and poorly skilled developers, in an attempt to make the profit margins for Microsoft and Accenture as big as possible. The net result, as you can see, is quite bad. I do not believe for a second that .NET was the problem, as there is no reason it can't be used in a way that performs as well as or better than a C++ application. It would use a bit more memory to achieve that performance, but memory is cheap enough that this is not an issue in most cases nowadays, particularly when you factor in the security and resilience benefits you get from the managed parts of the codebase.

  • Re:Awesome. (Score:3, Informative)

    by V!NCENT ( 1105021 ) on Wednesday October 07, 2009 @07:46AM (#29668247)
    Get the facts, moron. Only 12% of the Linux kernel work is done by unpaid developers. Red Hat makes a lot of money, and the London Stock Exchange will not suffer from these two crashes that Windows caused. Crashing capitalism is more or less a Microsoft thing, I guess. So shut the fsck up.
  • Re:De Icaza Responds (Score:5, Informative)

    by rastos1 ( 601318 ) on Wednesday October 07, 2009 @08:54AM (#29668687)

    But seriously what is wrong with having a registry?

    You can't do this in a registry:

    ...
    #X11Forwarding - Specifies whether X11 forwarding is permitted. Default is "no"
    #rastos: changed to "yes" because boss B asked for it in e-mail sent on 7th of Oct 2009
    #rastos: with Subject "I'll throw some more chairs if this does not work tonight!"
    X11Forwarding yes
    ...

  • by gbjbaanb ( 229885 ) on Wednesday October 07, 2009 @11:07AM (#29670137)

    As a religious non-Java programmer and a devout Java basher, I'll shoot down the "Not suitable for nuclear reactors" thing

    you missed my dig at the Java EULA then.

    I also doubt there was a database used in this app; relatively speaking, a traditional DB is a terrible performance bottleneck. This kind of app would use something different, possibly with no SQL involved at all.

  • by Cassini2 ( 956052 ) on Wednesday October 07, 2009 @12:24PM (#29671167)

    As for GC. Well, unless you can develop a system that eliminates memory allocation altogether and uses no threading while doing it. Good GC based environments (like CLR/Mono) are almost always faster than straight memory allocation. I highly recommend you research it... and if you're going to try and prove it with 5 lines of code, don't waste your time. That's not a real world test.

    Speaking as a real-time programmer: GC and memory allocation are enormously damaging to system performance. You really do need to switch to an almost statically allocated approach, with no memory allocation in real-time execution segments. The x86 architecture has special instructions that make base-pointer, stack-pointer, and index-pointer based memory access usable. If you ever program a less powerful processor, like an 8-bit PIC microcontroller, you quickly discover that indirect memory accesses carry significant timing penalties. Direct memory access, where data sits at fixed locations in system memory, takes a single instruction on almost all architectures.

    The second problem is that dynamic memory allocation has an unbounded maximum execution time. It can also be incredibly difficult to prove that memory does not fragment and that the program can execute in bounded memory space. Proving finite execution times and finite resource usage are major issues for a real-time system. In soft real-time systems, some forgiveness is tolerable. However, if you are in a language like C# and discover that one block of code is rate-limiting because of memory allocation issues, how do you overcome the problem? In C/C++, you can statically allocate the memory blocks and work around it. In Java/C#, the issue is pretty much the end of the project.

    Test it instead for example with an XML parser that generates a DOM tree and then deletes/dereferences it.

    Simply put, you can't have algorithms like that in programs with bounded maximum execution times. What happens if the XML file is corrupt? Excessively large? A pathological case deliberately designed to take down the London Stock Exchange? An unbounded tree built from a customer-provided data file is a bug in an LSE-style application.

    Whenever I look at code blocks that need to execute quickly, the first thing I look for is blocks with unbounded memory use or unbounded execution times. C# encourages such blocks. Real-time software requires using a small subset of available computer science techniques, and language and library support for this must be present. A sketch of the statically allocated style follows.
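
    A hedged sketch of that statically allocated style, even in C# (illustrative only, not LSE code): everything is allocated once at startup, the per-message path performs no heap allocation, and a full buffer is reported rather than grown, keeping both memory use and execution time bounded.

        using System;

        struct Trade { public int InstrumentId; public long Price; public int Qty; }

        // Illustrative: a fixed-size ring buffer allocated once; the hot
        // path copies structs into preallocated storage and never calls 'new'.
        class TradeProcessor
        {
            private readonly Trade[] ring;
            private int head, tail;
            private long volume;

            public TradeProcessor(int capacity) { ring = new Trade[capacity]; }

            public bool Enqueue(Trade t)
            {
                int next = (head + 1) % ring.Length;
                if (next == tail) return false; // bounded: report full, don't grow
                ring[head] = t;
                head = next;
                return true;
            }

            public void Drain()
            {
                while (tail != head)
                {
                    volume += ring[tail].Qty; // still allocation-free
                    tail = (tail + 1) % ring.Length;
                }
            }

            public long Volume { get { return volume; } }
        }

        static class RtDemo
        {
            static void Main()
            {
                var proc = new TradeProcessor(1024);
                proc.Enqueue(new Trade { InstrumentId = 1, Price = 10250, Qty = 100 }); // struct copy: no GC allocation
                proc.Drain();
                Console.WriteLine(proc.Volume); // 100
            }
        }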
