.NET News Desk
Timing the Market with Distributed Genetics
Taking the winning side in trades using Genetic Programming and Grid Computing
By: Derek Ferguson
May. 30, 2008 12:30 PM
I’ve always been puzzled by the ability of some traders to consistently make money. A cynic would say that anyone who can profit in all adverse economic environments (recessions, depressions, etc.) is most likely doing so because they are getting information that is not generally available. The cynic might mean “inside” information, but I believe there is a non-cynical interpretation of this claim that is, to some degree, correct.
Algorithmic trading engines and market data vendors are becoming increasingly important on Wall Street, precisely because they give traders insights that allow them to consistently “beat the street.” These tools and data sources, though legal, can be expensive enough to keep the majority of market participants from obtaining similar access. That limited access lets a small group of traders consistently take the winning side in trades and outperform their peers.
Two of the most powerful technologies that are coming to the fore in this area are Genetic Programming and Grid Computing. In this article, I will briefly explain both of these concepts and then lay out a design you can use to combine them for your own benefit, using nothing but freely available components and data sources.
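To make the genetic programming side of this concrete, here is a minimal, hypothetical sketch: a genetic algorithm that evolves the parameters of a simple moving-average crossover trading rule against synthetic price data. The rule, the fitness function, and every parameter below are illustrative assumptions for this article's general idea, not the actual design described later.

```python
import random

random.seed(42)  # deterministic illustration

# Synthetic price series: mild upward drift plus noise (assumed test data).
prices = [100.0]
for _ in range(299):
    prices.append(prices[-1] + random.gauss(0.05, 1.0))

def fitness(genome, prices):
    """Final equity from trading the rule (short_win, long_win, threshold)."""
    short_win, long_win, threshold = genome
    cash, shares = 1000.0, 0.0
    for t in range(long_win, len(prices)):
        short_avg = sum(prices[t - short_win:t]) / short_win
        long_avg = sum(prices[t - long_win:t]) / long_win
        if short_avg > long_avg + threshold and shares == 0:
            shares, cash = cash / prices[t], 0.0     # buy signal
        elif short_avg < long_avg - threshold and shares > 0:
            cash, shares = shares * prices[t], 0.0   # sell signal
    return cash + shares * prices[-1]

def random_genome():
    short = random.randint(2, 10)
    return (short, random.randint(short + 1, 50), random.uniform(0.0, 2.0))

def mutate(genome):
    """Small random perturbation that preserves the genome's constraints."""
    short, long_, thr = genome
    short = max(2, short + random.randint(-1, 1))
    long_ = max(short + 1, long_ + random.randint(-2, 2))
    return (short, long_, max(0.0, thr + random.gauss(0.0, 0.2)))

def evolve(prices, pop_size=20, generations=10):
    """Truncation selection: keep the fitter half, refill with mutants."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: fitness(g, prices), reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=lambda g: fitness(g, prices))

best = evolve(prices)
print("best rule:", best, "final equity:", round(fitness(best, prices), 2))
```

Each fitness evaluation (a backtest over the whole price series) is independent of every other, which is exactly the property that makes this workload a natural fit for the grid computing discussed below.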
Contrary to popular belief, mainframe systems have not gone away, nor have their mini-computer cousins; they have simply been pushed into the background as sexier, PC-based technologies have come into vogue. As processor speeds have topped out and data volumes have continued to grow, however, a question has arisen: how can we use PCs to process volumes of data that increasingly exceed the performance capabilities of this model of computing?
Grid computing is a niche within Distributed Computing that seeks to find ways to intelligently combine the processing power of multiple, lower-power machines to provide a cumulative processing power and ability that is comparable to the “big iron” mainframes and mini-computers of old. This can be extremely cost-effective as a large number of off-the-shelf PCs can typically be obtained at a fraction of the cost of mainframe computing systems. This cost-effectiveness can be increased further still if the computing work to be done can be scheduled to be performed during “off hours” on existing PC hardware that is used for other purposes – employee work desktops, for example – during business hours.
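The coordinator/worker pattern at the heart of grid computing can be sketched as follows. This is a local, hypothetical stand-in: the worker pool here is made of threads on one machine, whereas on a real grid each worker would be a separate PC; the `evaluate` cost function and its target value are illustrative assumptions only.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(candidate):
    """Stand-in for an expensive, independent job (e.g., backtesting one rule).
    Toy score: distance of the candidate from an arbitrary target value."""
    return candidate, abs(candidate - 37)

def grid_evaluate(candidates, workers=4):
    """Coordinator: fan the candidates out to the worker pool, then merge
    the (candidate, score) results back into a single list."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate, candidates))

scored = grid_evaluate(range(100))
best = min(scored, key=lambda pair: pair[1])
print("best candidate:", best)  # the candidate closest to the target wins
```

Because each job is independent, the coordinator never needs workers to talk to one another; it only splits the batch and merges results, which is why idle desktop PCs used during “off hours” can serve as workers without any special interconnect.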