
Modal Labs’ Series A and GA Launch

We are privileged to be part of Modal’s Series A round. Modal is redefining the future of end-to-end cloud compute, providing serverless GPUs so teams can deploy AI models and run inference in the cloud with a few lines of code, without the complexity of setting up and maintaining their own cloud infrastructure.

Modal Labs has launched its General Availability (GA) version. Anyone can try it out and run code in the cloud within a few minutes. Check it out today and you will be surprised by how frictionless, scalable and cost-efficient it is!
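To give a flavor of what “a few lines of code” means in practice, here is a minimal sketch using Modal’s Python SDK; the app name, GPU type and function body are illustrative placeholders, and the exact API surface may differ between SDK versions, so please consult Modal’s docs.

import modal

stub = modal.Stub("inference-demo")          # app name is a placeholder

@stub.function(gpu="A10G")                   # request a serverless GPU; GPU type is illustrative
def run_inference(prompt: str) -> str:
    # A real app would load a model and run inference here; this body is a stand-in.
    return f"echo: {prompt}"

@stub.local_entrypoint()
def main():
    # Executes run_inference remotely on Modal's infrastructure.
    print(run_inference.remote("hello from the cloud"))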

Databricks’ Series I

Congrats to the Databricks team for the $500M Series I! We are thrilled to double down on our investment in this round as an existing investor and long-time believer. We believe Databricks is exceptionally well positioned to be a generational company in the AI era, thanks to its robust technology and products, rapid pace of innovation, open-source roots and vibrant ecosystem. We cannot wait to see what’s ahead for Databricks as it continues its journey of democratizing data and AI through relentless innovation.

See more: press release on Forbes

MotherDuck Recognized on 2023 Enterprise Tech 30

Congrats to MotherDuck for being recognized on the 2023 Enterprise Tech 30 list. Quite an impressive lineup: https://axios.com/et30. MotherDuck was one of the five companies on the list that were founded in the last 18 months.

Vellum Featured in Google I/O

Congrats to Vellum for being featured in the Google I/O 2023 keynote as a DevTool partner.

LLM developers, sign up at vellum.ai to try it out!

2022 Annual Meeting

We celebrated 2022 with our LPs, portfolio company executives, advisors/fellows/scouts and friends at a memorable and vibrant Annual Meeting and Dinner on Dec 5. It was a great honor to have Prof. John Taylor (Professor of Economics at Stanford, former Under Secretary of the U.S. Treasury for International Affairs) share his perspectives on the Fed and the U.S. economy. During the fireside chat, Anthony Sun (ex-Managing Partner of Venrock, co-founder of GGV), Joanne Chen (GP at Foundation Capital), Xuedong Huang (Technical Fellow and CTO of Azure AI, Microsoft) and Timothy Chou (Instructor of Cloud Computing at Stanford and ex-President of Oracle On Demand) discussed the future of data and AI. Eastlink portfolio companies presented their amazing achievements. We are grateful for your support and honored to be a part of your journey!

MotherDuck Raises $47.5M Funding to Bring DuckDB to the Cloud

We at Eastlink Capital are thrilled to be part of MotherDuck’s $12.5M Seed led by Redpoint and $35M Series A led by Andreessen Horowitz. Other co-investors include Madrona, Amplify and Altimeter. MotherDuck is a hosted cloud offering of DuckDB, one of the hottest open-source database projects, known for being lightweight, fast and easy to use, but designed to run on a single node, on premises. Think of MotherDuck vs. DuckDB the way Microsoft Office 365 brings Excel, Word and other desktop apps to the cloud and adds team collaboration and version control.
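As a minimal sketch of that analogy in code, assuming DuckDB’s Python client and MotherDuck’s “md:” connection scheme (database names are placeholders, account/token setup is omitted, and MotherDuck’s docs have the current syntax):

import duckdb

# Local, single-node DuckDB: the database lives in a file on your machine.
local = duckdb.connect("analytics.db")
local.sql("CREATE TABLE IF NOT EXISTS events (ts TIMESTAMP, user_id INT)")

# MotherDuck: the same engine and SQL, but the database is hosted in the cloud
# and shareable with your team. "md:" is MotherDuck's connection prefix;
# this requires a MotherDuck account/token (auth setup omitted here).
cloud = duckdb.connect("md:my_team_db")      # database name is a placeholder
cloud.sql("SELECT 42 AS answer").show()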

Read the company’s announcement

Why We Invested in Exotanium, a Next-Gen Cloud Resource Management Platform

by Steven Xi, Siyu Jia

Compute cost has always been a concern in cloud adoption and a burden for enterprises that consume large amounts of cloud resources, such as those running workload-heavy simulation, graphics rendering and, especially, high-performance computing (HPC). In the current tightening macro environment, CIOs are placing a heavier emphasis on ROI, in addition to performance and features.

Quite a few cloud cost optimization solutions have sprung up. Many of them remain at the level of analytics and automation, while only a few have the technical depth to reach down to low-level infrastructure. Exotanium, backed by patented technology from research at Cornell University, provides a layer of “cloud-prem” control designed to work across different cloud providers. Only a small number of computer scientists can manipulate low-level operating system code in this way, which gives Exotanium a strong technical moat. Through live migration technology that is transparent to container runtime systems, containerized applications can be migrated seamlessly between virtual machine (VM) instances. By leveraging the cloud spot market, where instances are cost-effective but subject to termination at any time, Exotanium’s X-Spot can reduce cloud compute cost by up to 90% while maintaining high reliability.

“The most magic thing about Exotanium is that it can predict which spot VM instance is going to run out and relocate the containers out of it,” said a customer. Exotanium’s unique AI/ML algorithm analyzes the correlation between various signals and actual termination events, and automatically switches between spot and on-demand instances without interrupting applications. This lets long-running stateful apps run on cheap machines and elegantly resolves the tradeoff between cost and reliability. Have a legacy app that nobody fully understands? Not a worry: there is no need to modify a single line of your code.
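Exotanium’s implementation is proprietary, so the following is purely a conceptual sketch of the decision loop described above; every name, class and threshold is hypothetical and is not Exotanium’s API:

from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    is_spot: bool
    containers: list = field(default_factory=list)

def termination_risk(vm: VM) -> float:
    # Stand-in for the ML model that scores spot-termination signals.
    return 0.9 if vm.is_spot and vm.name == "spot-1" else 0.1

def rebalance(vms: list[VM], risk_threshold: float = 0.8) -> None:
    """Move containers off spot VMs that look likely to be reclaimed."""
    safe = [vm for vm in vms if termination_risk(vm) <= risk_threshold]
    for vm in vms:
        if termination_risk(vm) > risk_threshold and vm.containers:
            target = min(safe, key=lambda v: len(v.containers))
            target.containers.extend(vm.containers)   # stands in for live migration
            vm.containers.clear()

vms = [VM("spot-1", True, ["db", "api"]), VM("spot-2", True), VM("ondemand-1", False)]
rebalance(vms)
print([(vm.name, vm.containers) for vm in vms])
# spot-1 is predicted risky, so its containers land on the least-loaded safe VM.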

Exotanium’s additional benefits include better security and performance, enabled by its X-Container technology, which combines the strengths of application containers and virtual machines. Originally positioned as a security company at its inception, Exotanium uses the X-Container architecture to improve inter-container security isolation. In a project with the DOE’s Idaho National Laboratory, Exotanium demonstrated not only more than 70% cost savings but also faster performance than the native implementation.

In addition to the flagship X-Spot, Exotanium’s X-Stack automatically packs or unpacks containers into different numbers of VMs depending on the workload, while X-Scale adjusts machine size and power based on current demand. By scaling up and down dynamically without shutting down applications, cloud-based software companies can minimize the redundant resources reserved for peak workloads. Exotanium is expanding its product line with further optimization of cost, reliability and security, as well as flexibility for hybrid-cloud and multi-cloud strategies. We look forward to seeing more magic from Exotanium as it fine-tunes enterprise customers’ control over cloud computing.
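As with X-Spot, the real implementation is proprietary; purely to illustrate the consolidation idea behind X-Stack, here is a hypothetical first-fit packing heuristic, not Exotanium’s algorithm:

def pack(containers: dict[str, float], vm_capacity: float = 4.0) -> list[dict[str, float]]:
    """Hypothetical first-fit packing of container CPU demands onto fixed-size VMs."""
    vms: list[dict[str, float]] = []
    for name, cpu in sorted(containers.items(), key=lambda kv: -kv[1]):
        for vm in vms:
            if sum(vm.values()) + cpu <= vm_capacity:
                vm[name] = cpu
                break
        else:
            vms.append({name: cpu})   # open a new VM only when nothing fits
    return vms

print(pack({"api": 1.5, "db": 2.0, "worker": 1.0, "cache": 0.5}))
# These demands total 5 vCPUs, so two 4-vCPU VMs are opened; a lighter load would need only one.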

Behind it all is exactly the right team to unleash the power of cloud back-end optimization. It has been a great pleasure working with CEO Prof. Hakim Weatherspoon and CTO Dr. Zhiming Shen. They are world-class researchers in cloud computing and operating systems who co-founded this Cornell spinout to commercialize their research. Exotanium’s academic roots mirror those of our portfolio company Databricks, a leading data and AI company that originated from UC Berkeley’s AMPLab. InsightFinder, another Eastlink portfolio company with origins in academia, was spun out of Prof. Helen Gu’s group at N.C. State University and leverages unsupervised machine learning to predict IT incidents hours ahead.

We are excited to be part of Exotanium’s Series A round of financing. Eastlink Capital’s core investment strategy is to back technical founders with unique products and technologies, and Exotanium fits this theme well. We strongly believe Exotanium is well on its way to capturing significant market share.

InsightFinder $10M Series A Empowers IT With Incident Prevention

We are excited to continue backing InsightFinder in its Series A round. Founded by Prof. Helen Gu of the Department of Computer Science at North Carolina State University, InsightFinder predicts incidents hours ahead through unsupervised machine learning.

About InsightFinder

DevSecOps, IT operations, and site reliability engineering (SRE) teams rely on InsightFinder to predict and prevent outages in complex distributed architectures. Powered by unique patented capabilities for incident prediction, unsupervised active machine learning, and pattern-driven auto-remediation, the InsightFinder platform continuously learns from machine data to identify and fix problems before they impact web or application performance. Customers gain value quickly, starting with an InsightFinder free trial and the company’s pre-built integrations with Datadog, Elastic, New Relic, PagerDuty, Prometheus, ServiceNow, and other popular tools for DevSecOps, IT operations management (ITOM), and IT service management (ITSM).
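InsightFinder’s models are proprietary; purely to give a flavor of unsupervised anomaly detection on machine data, here is a minimal sketch using scikit-learn’s IsolationForest (the metric, parameters and library choice are our own illustration, not InsightFinder’s stack):

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50.0, scale=5.0, size=(500, 1))   # e.g. CPU% readings under normal load
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

incoming = np.array([[52.0], [48.0], [95.0]])              # the last reading looks anomalous
print(model.predict(incoming))                             # -1 flags an outlier, 1 means normal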

Why We Invested in StreamNative, a Next-Gen Data Streaming and Messaging Platform

by Steven Xi, Siyu Jia, Cindy Le

Processing and storing massive amounts of data in real time is hard, especially when data integrity and velocity must be maintained. Building reliable and efficient data streaming pipelines has become mission-critical, and a bottleneck, for many organizations.

An illustration by AltexSoft showing the architecture of stream data processing

Since Apache Kafka already has a large and popular open-source data streaming community, some may ask: why invest in another data streaming and messaging company? Our thesis is simple: the market is huge, multiple players can co-exist, and StreamNative, a next-generation unified data streaming and messaging platform based on Apache Pulsar, brings unique and powerful capabilities to the market.

This massive and growing market is not winner-take-all. We believe that as companies’ needs continue to evolve across data streaming, message queuing, and unified messaging and streaming, StreamNative will continue to gain market share.

As many developers know, Confluent is built on Apache Kafka, a project that began at LinkedIn in 2011 and was later contributed to the Apache Software Foundation. Confluent pioneered data streaming and messaging, but Kafka was not built natively for the cloud. StreamNative, by comparison, has been built for Kubernetes and the cloud from its inception. Thanks to this cloud-native architecture, Pulsar scales horizontally more easily than Kafka, with fewer laborious tasks such as sharding or adding machines.

As a unified messaging and streaming platform powered by Apache Pulsar, StreamNative has several advantages over its competition. For example, it offers low latency in data processing: according to Nastel, a middleware market intelligence company, Pulsar achieved the highest benchmark score at higher message rates among the seven products they tested. As one customer put it, “Pulsar vs. Kafka is like Spark vs MapReduce, huge differences. Especially when coming to message queuing and log streaming, Pulsar provides cutting-edge experience.”
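For readers who want to get hands-on with the unified messaging-and-streaming model, here is a minimal producer/consumer sketch using the open-source pulsar-client Python package; the broker URL, topic and subscription names are placeholders:

import pulsar

client = pulsar.Client("pulsar://localhost:6650")          # broker URL is a placeholder

# Streaming side: publish events to a topic.
producer = client.create_producer("orders")
producer.send(b"order-created:42")

# Queuing side: a subscription on the same topic consumes and acknowledges messages.
consumer = client.subscribe("orders", subscription_name="billing")
msg = consumer.receive()
print(msg.data())
consumer.acknowledge(msg)

client.close()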

StreamNative’s solutions are also cost-efficient. Decoupling storage from compute greatly reduces the need for reserved capacity, so organizations can afford to retain event streams for longer durations. Tiered storage further reduces cost by assigning data to suitable tiers based on its age and business value.

The overall architecture of StreamNative Platform

In addition to its technical advantages, StreamNative also has deep roots in the open-source community. Sijie Guo, the CEO, and Matteo Merli, the CTO, are among the top three committers of Apache Pulsar. Pulsar has 560+ contributors and 11.3k+ stars on GitHub as of this post, roughly half of those of Kafka, but its number of monthly active contributors now surpasses that of Kafka. The rapid adoption of Apache Pulsar and its growing open source community have brought huge benefits to StreamNative in the form of organic customer growth, with most customers coming directly from the Apache Pulsar community.

Because of its scalable infrastructure and cloud-native approach, StreamNative achieved 6x growth in revenue in 2021 and continues to see strong growth in 2022. Top markets include fintech, martech, consumer, retail, manufacturing, and IoT, and expanding partnerships with major cloud vendors are unlocking new channel opportunities.

We at Eastlink are thrilled to have led the last closing of StreamNative’s $23.7M Series A financing round in 2021. We share StreamNative’s vision of connecting every app, anywhere in the world, and are committed to helping founders succeed with strategic, operational and technical expertise.

*The content is provided for informational purposes only, and should not be viewed as legal, business, investment, or tax advice.

New Benchmark Shows TigerGraph’s Capacity To Handle Big Datasets

In a recently published benchmark report, TigerGraph’s powerful graph analytics software was put to the test using the respected Linked Data Benchmark Council (LDBC) Social Network Benchmark (SNB) Scale Factor 30k dataset, which features 36TB of raw data with 73 billion vertices and 534 billion edges. This was, as far as we can ascertain, the first time a graph database had been tested at this scale.

The LDBC SNB benchmark is an industry-respected testing methodology for confirming a graph platform’s performance while executing complex business intelligence and advanced analytics tasks.

This new study clearly demonstrates TigerGraph’s ability to handle big graph workloads in a real production environment, where tens of terabytes of connected data with hourly or daily incremental updates are the norm.
