Distributive – Shaping the Future of Web-Based Distributed Computing
From developing new materials and designing future transportation systems to investigating the properties of distant galaxy clusters, computational models have become one of humanity’s most versatile tools, whether for understanding the quirks of our universe or for designing a new hardware product.
Yet, while most scientists excel at developing equations that model the universe, very few know how to manage the data centers and HPC clusters needed to run their large computational workloads at scale. This is why Distributive, a young Canadian company, has developed a framework that lets users easily run computational workloads across any and all devices they already own, quickly and securely. The company has gone through the Canadian Technology Accelerator, has received a major health project from the Canadian Digital Technology Supercluster, and has several other projects ongoing with universities, hospitals, and other enterprises.
Learn more about the future of web-based distributed computing from our interview with Distributive’s CEO, Dan Desjardins:
Why did you launch Distributive?
While doing my Ph.D. in computational electrodynamics, I realized how dependent I was on computing power to crunch through my models to produce results. Computational models are a catalyst for innovation: from discovering new pharmaceutical drugs to optimizing golf club aerodynamics, everything is done using computer models instead of actually testing millions of different ingredients or manifolds. But setting up these computational models and accessing the necessary computational power was a major headache – not just for me but for millions of my research colleagues. We’re good at physics, mathematics, and our respective disciplines, but ask us to operate and manage a supercomputer? That’s an entirely different skill set and we don’t have time for it. We have questions about the universe to answer.
There’s a reason why AWS enlists an army of “solutions experts” who essentially handhold scientists and onboard their workloads. It’s also unbelievably expensive: one thousand vCPU cores on IBM Cloud cost US $35,000 per month. To address these pains, we have developed the Distributive Compute Protocol (DCP). It’s a compute middle-layer that makes it very easy (five lines of code) to deploy computational workloads in parallel across idle devices all around the world, such as PCs, servers, or even fridges and other IoT devices, at a tenth of the cost. We’ve been quiet and avoided undue hype, but we’ve been moving mountains.
How does it work?
If you want to build a house, it goes faster when many friends help. It’s much the same with parallel computing: you slice a computational problem into pieces, run those pieces in parallel on many different processors, and then reassemble the partial results to obtain the final answer sooner. The concept of distributed computing isn’t new, of course; it’s at least 30 years old.
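The slice, compute-in-parallel, reassemble pattern described here can be sketched in a few lines of Python. This is a generic illustration using the standard library, not Distributive’s actual API; the work function is a stand-in for one piece of a larger model:

```python
from concurrent.futures import ProcessPoolExecutor

def work(piece):
    # Stand-in for one slice of a larger model:
    # here, a sum of squares over a sub-range.
    return sum(x * x for x in piece)

def run_parallel(n, workers=4):
    # 1. Slice the problem into independent pieces (strided sub-ranges
    #    that together cover 0..n-1 exactly once).
    pieces = [range(i, n, workers) for i in range(workers)]
    # 2. Run the pieces in parallel on separate processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(work, pieces)
    # 3. Reassemble the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    # Same result as the serial sum(x * x for x in range(1000)),
    # but the pieces were computed concurrently.
    print(run_parallel(1000))
```

The key property is that the pieces are independent, so they can run on any mix of processors; in DCP’s case, those processors are web-capable devices rather than local cores.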
What’s new is that our Distributive Compute Protocol is uniquely built on web standards and can execute parallel computations anywhere, on anything that can run the web stack: in your browser, on bare metal, in a Kubernetes cluster or a Docker container, or from your command line. It allows users to run computational workloads on their own infrastructure, quickly and securely, or on a public network made up of node providers worldwide!
Besides helping customers set up local, closed networks for computing, we are also building a globally distributed computer, acting like a broker for computing resources. Most cloud providers today charge you based on the availability of the instance type you’re consuming. If your electricity bill worked that way, you would be charged by the toaster-hour and the fridge-minute. Instead, we meter workloads based on the computational resources actually consumed: we charge job clients and remunerate compute nodes for what is actually used, not for the availability of instance types.
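The difference between the two billing models can be shown with some back-of-the-envelope arithmetic. All rates and utilization figures below are made up for illustration, not Distributive’s or any provider’s actual pricing:

```python
def instance_billing(hours_reserved, rate_per_instance_hour):
    # Traditional cloud: pay for the instance while it is allocated,
    # whether or not it is doing useful work.
    return hours_reserved * rate_per_instance_hour

def consumption_billing(cpu_seconds_used, rate_per_cpu_second):
    # Consumption metering: pay only for compute actually performed.
    return cpu_seconds_used * rate_per_cpu_second

# A job keeps an 8-core instance allocated for 10 hours,
# but only keeps the cores busy 25% of the time.
reserved_cost = instance_billing(10, 0.40)  # assumed $0.40 per instance-hour

cpu_seconds = 8 * 10 * 3600 * 0.25          # cores * hours * s/h * utilization
rate = 0.40 / (8 * 3600)                    # same headline rate, per cpu-second
used_cost = consumption_billing(cpu_seconds, rate)
```

At 25% utilization, `used_cost` comes out to a quarter of `reserved_cost`: under consumption metering the idle 75% of the reservation is simply never billed.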
How did you evaluate your startup idea?
To get started, it helped a lot that I was solving my own problem of setting up and running computational models for electrodynamics, and that I knew many colleagues with similar struggles. That’s why we structured our go-to-market through academia: contacting professors whose research involves computational modeling, hosting student hackathons, and taking on student interns who would eventually become employees. That way, we got valuable feedback early on.
Then others took notice, and, in particular, hospitals were asking us whether we could optimize their digital imaging and surgical block schedules – so we set up on-prem computing networks for them that use their existing computers. As of today, we have many projects underway with several more hospitals, the Canadian government, air transport and manufacturing companies, and half a dozen universities.
To us, the opportunity is blatantly clear: the cloud computing market is expected to grow to over $800B in the next five years, and Distributive will eventually unlock billions of dollars worth of computing value. Yet, establishing trust and credibility is crucial: certain venture capitalists have been dumping millions into crypto startups building token-based compute platforms that then conduct dubious ICOs. This repels credible users. AI practitioners, sophisticated laboratories, healthcare institutions, and professional researchers do not “buy tokens”. “Buy low, compute high” sounds ridiculous because it is ridiculous. The company that builds the winning distributed computing solution will have to be credible and have a good API.
Advice for fellow deep tech founders: Deep tech is like a bridge. It’s infrastructure, and most people take infrastructure for granted. That’s why assuming ‘let’s build it, and they will come and use it’ is a fallacy, and it is especially dangerous for deep tech startups.
Starting a deep tech startup carries the highest risk, but potentially also the highest upside. You often need to invest millions upfront before you can show any commercial gain to the investors on whom you depend to build out the next phase. You can’t simply expect that all it takes to get traction is completing your platform MVP. Nobody cares about bridges; you need solutions on top that solve real problems for real customers paying real cash. No revenue, no proven traction.
People love going to Walmart and never think about the bridges they cross to get there. But without bridges, the cement trucks and construction vehicles couldn’t have built that Walmart, supply trucks wouldn’t be able to replenish the shelves, and you wouldn’t be able to get there. As I said, people take infrastructure for granted. So if you are building an infrastructure project, go and find customers to consume solutions built on top of your platform as soon as possible to prove traction. Be prepared for pushback unless you already have paying customers. Vision is underappreciated these days.
Who should contact you?
We are happy to talk to fellow researchers and distributed computing enthusiasts – please contact us for a full enterprise trial of our technology.
Please see our main portal at dcp.cloud, and feel free to check out our demo here: once you scroll down and press ‘compute,’ it will start an electrodynamics calculation for a certain type of coil, which I explored in my Ph.D. thesis. If you then go to dcp.mn and press ‘start,’ your computer becomes part of a global network running this electrodynamics demo. You’ll find more cool stuff on our GitHub.
Related: The Digital Technology Supercluster is investing in eight new projects through its COVID-19 program, including ‘Looking Glass’ by Distributive (formerly Kings Distributed Systems), a project that informs public policymaking through science-driven modeling.