Zero-Ops big data cloud platform enabling cluster computing at any scale.

London, United Kingdom

Big Data & Cloud Technologies


Today’s computing paradigms and systems software were not designed for the demands of big data. An algorithm that takes a data scientist hours or days to write at small-data scale typically takes a small team six months of engineering and DevOps to bring to production at a specific big-data scale. If demand grows, requiring even greater cluster or data size, further re-engineering is required. The time-to-market for big data algorithms is thus several thousand times longer than that of their small-data counterparts, and consequently only a small fraction of data-science and AI algorithms ever reach production.

Hadean has reimagined the foundational paradigms of computing by examining what has gone wrong over the last 40 years. It reinvents computing for the cluster age, delivering an operating system that takes standard UNIX-style programs and runs them at any scale, automatically growing or shrinking the hardware cluster to meet the demands of data load and execution. Hadean makes big data as easy as small data, bringing a more than 1,000x productivity gain over competing technologies. Using Hadean, a single programmer can write a program in the same few lines of code whether it needs to run on one machine or an entire datacenter.

Hadean is the world’s first and only compute technology capable of taking any algorithm (articulated using high-level primitives) and running it automatically at arbitrary scale. Making this possible required reimagining the entire compute stack, from the operating system and compilers up. The result is a compute framework that makes building massively scalable software trivially easy: the equivalent of a simple Rust or Python script can run on Hadean across one machine or tens of thousands of machines, and programs run perfectly predictably.
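As an illustrative sketch of the scale-invariant programming model described above (this is not Hadean's actual API; the function names and the in-process executor here are hypothetical stand-ins), the idea is that the algorithm's code stays identical while the degree of parallelism is supplied at run time rather than re-engineered in:

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(chunk):
    # The per-chunk logic a data scientist would write at small-data scale.
    return len(chunk.split())

def run(chunks, workers):
    # Only the `workers` parameter changes with scale; the algorithm does not.
    # (A thread pool stands in for a cluster of machines in this sketch.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(word_count, chunks))

chunks = ["a small piece of text", "another piece", "and one more"]

# The same code produces the same result at any degree of parallelism.
assert run(chunks, workers=1) == run(chunks, workers=4)
```

The sketch illustrates only the programming-model claim: the "scale" knob is a runtime parameter, not a rewrite. In the platform described above, the runtime would also provision and release the underlying machines automatically.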