Monthly Archives: January 2013

Why C# is an Awesome Programming Language (Part 2, Specifics)

In my previous post, I very briefly covered the history and evolution of the C-style languages, concluding with Microsoft’s release of C# in 2000.

C# is, and was, a direct consequence of the design and popularity of Java. As with Java, C# was released as part of a framework – .NET, probably the largest class library ever released. Microsoft also improved its development suite, with Visual Studio .NET as the flagship.

Any two developers can engage in an endless discourse as to why one language is superior to another, but I’d like to address three very specific reasons why C# is simply awesome.

Firstly, the Visual Studio environment is hands-down the best development platform that exists today, period. Anyone who has used Eclipse and Visual Studio, if they are honest, will confirm the elegance, features, and power of VS. As someone who immensely enjoyed coding in C using xemacs, I became a true believer in C#/.NET when I started using Visual Studio. From a purist point of view, it is entirely true that MonoDevelop could be used instead of Visual Studio to write in C#, and that therefore this is not a particular advantage. However, I am particularly referencing the majority of C# developers who use VS.

Secondly, C# incorporated, especially in version 3.0, a duo of extraordinarily powerful features, lambda expressions and LINQ, that I simply refuse to give up in any other language. The simplicity and declarative nature afforded by the combination of these features has completely changed my programming style. This is particularly relevant when dealing with “big data” and analytics, which are my specialties (especially as they pertain to the capital markets).
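As an illustration of that declarative style, here is a minimal sketch (with made-up prices) of filtering and aggregating a price series using a lambda expression and a LINQ pipeline:

```csharp
using System;
using System.Linq;

class LinqSketch
{
    static void Main()
    {
        // Hypothetical daily closing prices for some instrument.
        double[] closes = { 101.0, 99.0, 104.0, 98.0, 105.0, 102.0 };

        // A lambda expression states the predicate declaratively...
        Func<double, bool> aboveHundred = p => p > 100.0;

        // ...and LINQ composes filtering and aggregation in one pipeline,
        // with no explicit loops or temporary collections.
        int count = closes.Count(aboveHundred);
        double avgAbove = closes.Where(aboveHundred).Average();

        Console.WriteLine(count);    // 4
        Console.WriteLine(avgAbove); // 103
    }
}
```

The equivalent imperative code would need a loop, a counter, and a running sum; here the intent is stated in a single expression.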

Thirdly, C# has extensive built-in support for parallelism. From PLINQ to the Task Parallel Library, I can maximize the resources available on my machine with remarkably little effort. I don’t need to develop my own parallel/distributed framework, or rely on third-party add-ons (that may or may not be actively maintained). Microsoft has made life much easier for developers who need to take full advantage of all their cores, and I commend them for this.
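A sketch of how little ceremony PLINQ requires: the sequential LINQ query below becomes parallel with a single AsParallel() call. The workload is a toy sum of squares; a real analytic kernel would take its place.

```csharp
using System;
using System.Linq;

class ParallelSketch
{
    static void Main()
    {
        // Toy workload: sum of squares over a large range.
        // AsParallel() partitions the range across the available cores;
        // the shape of the query is unchanged from its sequential form.
        long sum = Enumerable.Range(1, 1000000)
                             .AsParallel()
                             .Select(n => (long)n * n)
                             .Sum();

        // Matches the closed form n(n+1)(2n+1)/6.
        Console.WriteLine(sum); // 333333833333500000
    }
}
```

The Task Parallel Library offers the same convenience for imperative loops (Parallel.For, Parallel.ForEach), so the choice of declarative or imperative style is left to the developer.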

Finally, C# is very much a growing language. The recent introduction of the async framework in version 5.0 is a game changer. In a world where any sizable software system is virtually guaranteed to be asynchronous in nature, the inclusion of powerful built-in language support is another incredibly effective gift from Microsoft. Just two keywords, async and await, will completely change the way a developer can process multiple, complex data streams or queries in real time. And all this for free.
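A minimal sketch of the idea, where FetchQuoteAsync is a hypothetical stand-in for a real market-data call: two requests are issued concurrently, and await suspends the method without blocking a thread until both complete.

```csharp
using System;
using System.Threading.Tasks;

class AsyncSketch
{
    // Hypothetical stand-in for a real I/O-bound request,
    // e.g. a quote lookup against a market-data service.
    static async Task<int> FetchQuoteAsync(string symbol)
    {
        await Task.Delay(10);      // simulate network latency
        return symbol.Length * 10; // fabricated "price"
    }

    static async Task RunAsync()
    {
        // Both requests are in flight at the same time...
        Task<int> a = FetchQuoteAsync("MSFT");
        Task<int> b = FetchQuoteAsync("GOOG");

        // ...and await resumes here once both have completed.
        int[] prices = await Task.WhenAll(a, b);
        Console.WriteLine(prices[0] + prices[1]); // 80
    }

    static void Main()
    {
        RunAsync().Wait();
    }
}
```

The compiler rewrites RunAsync into a state machine behind the scenes; the developer simply writes what reads like sequential code.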

At the end of the day, it’s not going to be one specific feature or theoretical characteristic of a language that will render it an exceptional framework. It is rather the utility, the practicality of being able to simply get things done that will measure the success of a language or platform. As a front-line developer working under rigorous deadlines and having to maintain large scale systems, this is the simplest yet most profound reason for using a particular framework.

It is completely understandable that different developers will feel extremely comfortable with other platforms, operating systems, and languages, but in these two brief discussions I really wanted to explain why I feel that C# is awesome.

So I urge developers, even those who are not using the Microsoft stack, to consider Mono and C# to get a feel for how powerful the language is and how much built-in support facilitates getting your work done quickly and correctly.


Why C# is an Awesome Programming Language (Part 1, Background)

As is the case with many Gen-X developers (or perhaps even Baby Boomers), I’ve worked with a wide variety of programming languages, but almost invariably the development environment for large-scale projects was C or C++, often glued together by a variety of scripts and build tools.

C and C++ suffer from two ends of the same spectrum. With C, the standard library provides only a minimal set of functions, and entire libraries usually have to be written from scratch or procured from a third party. The saying used to be that when you hire a C developer, you are really buying their libraries. Of course, the standards have changed, and the ubiquity of the internet makes it possible to procure and share well-tested code easily. C also suffers from platform dependence. The primary advantage of C is its simplicity and exceptional speed, which are consequences of its evolutionary proximity to assembly language, a proximity that greatly facilitates compiler optimizations.

When C++ was created at Bell Laboratories, the goals were reusability and the ability to develop large-scale systems through the collaborative work of teams of developers. Ostensibly, C++ is an object-oriented language, replete with encapsulation, polymorphism, and inheritance. A language developed for these goals, with an object-oriented approach, would have been an ideal evolutionary step in the 1980s.

But there were two major flaws that were lethal to C++, and they continue to haunt the billions of lines of code that have been written since. First, C++ was to be backward compatible with C, and in fact the earliest C++ compilers were preprocessors that converted C++ code into C. This single language decision meant that all of the atrocities of C (global variables, macros, conditional compilation) were inherited by C++. So from inception, bad C code could be processed perfectly well as bad C++ code. And this continues today, though modern developers would adopt strong measures to avoid dangerous coding habits and design.

The second, more nuanced bombshell within C++ is that even though C++ can be regarded as an object-oriented language, it can also be viewed as a generic, template-driven programming language. These two paradigms are completely orthogonal, and almost without exception, the largest design flaws in C++ systems result from combining both paradigms simultaneously. Imagine nested template-driven code applied to a complex class hierarchy. To manage and continue to add to such a codebase would be a task worthy of Hercules. In my opinion, this is why highly experienced C++ developers are always in demand: only true experts can work with the multiple-inheritance, C-backward-compatible, template-driven complex class hierarchies found in modern large-scale C++ systems.

The year was 1993, and enter Java, an experimental language under development at Sun Microsystems (codename “Oak”) as a platform-independent language in which a virtual machine enables a “Write Once, Run Anywhere” development paradigm. Fortunately, the designers of Java decided to break ties with the inescapable detriments of C/C++, and instead focused on designing a modern language for the 99%, replete with automatic garbage collection, single inheritance, and a robust built-in library. Originally, Java was expected to run within browsers on the client side. However, the formidable qualities of the language from both business and development perspectives became exquisitely evident, and Java came to be used in back-end server-side projects, with a web presentation interface built from an assortment of ancillary technologies. From the earliest versions through Java 7, a host of incremental refinements, performance improvements, and language features such as generics has resulted in Java supplanting C/C++ as the “gold standard” for software development. It was far easier for universities to teach and for students to learn, and it attained ubiquity in the professional development community.

Microsoft, being the notorious laggard in innovation that it is, quickly realized that it was rapidly losing market share as the world moved towards the browser rather than applications, and developers were flocking to the new Java gold standard of modern programming language design. Microsoft responded in the same manner it had dealt with other threats in the past (including MacOS and Netscape Navigator): it created an imitation product that retained the flagship features of the original. Anders Hejlsberg, a designer of Turbo Pascal and Delphi, was in charge of this process. When C# 1.0 was first released, I can vividly recall not being able to distinguish whether a given piece of code was Java or C#. The syntax, design, and even keywords were closely related. In fact, in the late 1990s and early 2000s, writing a translator between Java and C# would have been a relatively trivial enterprise. Of course, as the languages have evolved and the libraries have grown enormously, such a task would be much more complex today.

The stage was now set for Microsoft to actually innovate in terms of programming language design, and create a superb development environment and framework. I will follow this up shortly in another post (Part 2, Specifics).

Srikant Krishna (contact: sri@srikantkrishna.com) is a financial technologist and quantitative trader. He has a strong background in biophysics, software development, and the capital markets.

My experience with big data

Big Data

My background is somewhat unusual in that I have had the opportunity to work intensively with three very different types of (very) big data: computational genomic data, retail transactional repositories, and capital market instrument pricing. Because of the very different characteristics of each of these types of data (beyond the large scale of the datasets), I’ve had the opportunity to work on a variety of approaches for highly efficient storage, retrieval, and analysis. Particularly in the capital markets domain, the real-time nature of the datasets necessitated mission-critical procedures for manipulating the data. I’ve also had the opportunity to use highly specialized databases and languages (kdb+), as well as to develop specialized proprietary database management systems for handling genomic data and retail transactional datasets.

My experience with analytics

Analytics

The purpose of most of my software development has been, almost without exception, analytics and quantitative model building. From ordinary optimization algorithms to “unconventional” techniques for trading models, I have in-depth experience in data mining, machine learning, quantitative analysis, and artificial intelligence techniques. My experience in analytics ranges from the highly quantitative (support vector machines, probabilistic models, computational genomics), to the heuristic (various machine learning/AI methods, rule-based systems), to the highly computational (brute-force parallel/distributed evaluation). Throughout my career I have usually had to present results and conclusions to colleagues, senior management, clients, or my own team for review and discussion. I have written or co-authored numerous peer-reviewed papers, whitepapers, and large-scale internal documents. I have co-authored two patents for my analytic work, and was directly responsible for the design of specialized algorithms.

Empathy floods ‘brain’s pleasure centres’

The Only Way Is Ethics.Net

When we empathise, oxytocin and other chemicals flood the brain’s pleasure centres, according to one theory reported in The Observer on Sunday. According to the article, similar surges occur when people are asked to play economic exchange games designed to elicit trust. According to neuroeconomist Paul Zak, this suggests empathy and trust are two sides of the same adaptive response.

Read the full story on The Observer here: http://guardian.newspaperdirect.com/epaper/viewer.aspx
