AWS today announced a new database product that is clearly meant to go after Microsoft’s SQL Server and make it easier — and cheaper — for SQL Server users to migrate to the AWS cloud. The new service is Babelfish for Aurora PostgreSQL. The tagline AWS CEO Andy Jassy used for this service in his re:Invent keynote today is probably telling: “Stop paying for SQL Server licenses you don’t need.” And to show how serious it is about this, the company is even open-sourcing the tool.
What Babelfish does is provide a translation layer for SQL Server’s proprietary SQL dialect (T-SQL) and its communications protocol so that businesses can switch to AWS’ Aurora relational database at will (though they’ll still have to migrate their existing data). Beyond the dialect itself, it translates SQL commands, cursors, catalog views, data types, triggers, stored procedures and functions.
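To illustrate the kind of dialect gap Babelfish bridges, here’s a toy sketch (not Babelfish’s actual implementation, which works inside the database engine and at the wire-protocol level) that rewrites one common T-SQL construct, `SELECT TOP n`, into its PostgreSQL `LIMIT` equivalent:

```python
import re

def translate_top_to_limit(tsql: str) -> str:
    """Toy illustration: rewrite T-SQL's `SELECT TOP n ...`
    into PostgreSQL's `SELECT ... LIMIT n`.

    Babelfish performs this kind of translation (and far more,
    including the TDS wire protocol) transparently; this sketch
    only handles the simplest single-statement case.
    """
    match = re.match(r"(?is)^SELECT\s+TOP\s+(\d+)\s+(.*)$", tsql.strip())
    if not match:
        return tsql  # nothing to translate in this toy example
    n, rest = match.groups()
    return f"SELECT {rest.rstrip().rstrip(';')} LIMIT {n};"

print(translate_top_to_limit("SELECT TOP 5 name FROM users;"))
# "SELECT name FROM users LIMIT 5;"
```

The point of the real service is precisely that applications do not have to do this themselves: the same T-SQL text and the same TDS connection work unchanged against Aurora PostgreSQL.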
The promise here is that companies won’t have to replace their database drivers or rewrite and verify their database requests to make this transition.
“We believe Babelfish stands out because it’s not another migration service, as useful as those can be. Babelfish enables PostgreSQL to understand database requests—both the command and the protocol—from applications written for Microsoft SQL Server without changing libraries, database schema, or SQL statements,” AWS’s Matt Asay writes in today’s announcement. “This means much faster ‘migrations’ with minimal developer effort. It’s also centered on ‘correctness,’ meaning applications designed to use SQL Server functionality will behave the same on PostgreSQL as they would on SQL Server.”
PostgreSQL, AWS rightly points out, is one of the most popular open-source databases in the market today. A lot of companies want to migrate their relational databases to it — or at least use it in conjunction with their existing databases. This new service is going to make that significantly easier.
The open-source Babelfish project will launch in 2021 and will be available on GitHub under the Apache 2.0 license.
“It’s still true that the overwhelming majority of relational databases are on-premise,” AWS CEO Andy Jassy said. “Customers are fed up with and sick of incumbents.” As is tradition at re:Invent, Jassy also got a few swipes at Oracle into his keynote, but the real target of the products the company is launching in the database area today is clearly Microsoft.
Survival and strategy games are often played in stages. You have the early game where you’re learning the ropes, understanding systems. Then you have mid-game where you’re executing and gathering resources. The most fun part, for me, has always been the late mid-game where you’re in full control of your powers and skills and you’ve got resources to burn — where you execute on your master plan before the endgame gets hairy.
This is where Apple is in the game of power being played by the chip industry. And it’s about to be endgame for Intel.
Apple has introduced three machines that use its new M1 system on a chip, the product of over a decade’s worth of work designing its own processing units around the Arm instruction set. These machines are capable, assured and powerful, but their greatest advancements come in the performance-per-watt category.
I personally put the 13” M1 MacBook Pro through extensive testing, and it’s clear that this machine eclipses some of the most powerful Mac portables ever made in performance while simultaneously delivering 2x-3x the battery life at a minimum.
These results are astounding, but they’re the product of that long early game that Apple has played with the A-series processors. Beginning in earnest in 2008 with the acquisition of P.A. Semi, Apple has been working its way toward decoupling the features and capabilities of its devices from the product roadmaps of processor manufacturers.
The M1 MacBook Pro runs smoothly, launching apps so quickly that they’re often open before your cursor leaves your dock.
Video editing and rendering are super performant, only falling behind older machines when they leverage the GPU heavily, and even then only against powerful dedicated cards like the 5500M or Vega II.
Compiling projects like WebKit produces better build times than nearly any machine (hell, the M1 Mac Mini beats the Mac Pro by a few seconds). And it does so while using a fraction of the power.
This thing works like an iPad. That’s the best way I can describe it succinctly. One illustration I have been using to describe what this will feel like to a user of current MacBooks is that of chronic pain. If you’ve ever dealt with ongoing pain from a condition or injury, and then had it be alleviated by medication, therapy or surgery, you know how the sudden relief feels. You’ve been carrying the load so long you didn’t know how heavy it was. That’s what moving to this M1 MacBook feels like after using other Macs.
Every click is more responsive. Every interaction is immediate. It feels like an iOS device in all the best ways.
At the chip level, it also is an iOS device. Which brings us to…
iOS on M1
The iOS experience on the M1 machines is…present. That’s the kindest thing I can say about it. Apps install from the App Store and run smoothly, without incident. Benchmarks of iOS apps show that they perform natively, with no overhead. I even ran an iOS-based graphics benchmark, which ran just fine.
That, however, is where the compliments end. The current iOS app experience on an M1 machine running Big Sur is almost comically rough. There is no default tool-tip that explains how to replicate common iOS interactions like swipe-from-edge; instead, a badly formatted cheat sheet is buried in a menu. The apps launch and run in windows only. Yes, that’s right: no full-screen iOS apps at all. It’s super cool for a second to have instant native support for iOS on the Mac, but at the end of the day this is a marketing win, not a consumer experience win.
Apple gets to say that the Mac now supports millions of iOS apps, but the fact is that the experience of using those apps on the M1 is sub-par. It will get better, I have no doubt. But the app experience on the M1 ranks pretty firmly in this order right now: native M1 app > Rosetta 2 app > Catalyst app > iOS app. Provided that the Catalyst ports can be bothered to build in Mac-centric behaviors and interactions, of course. iOS, though present, is not yet where it needs to be on M1.
There is both a lot to say and not a lot to say about Rosetta 2. I’m sure we’ll get more detailed breakdowns of how Apple achieved what it has with this new emulation layer that makes x86 applications run fine on the M1 architecture. But the real nut of it is that Apple has managed to make a chip so powerful that it can take the approximately 26% hit (see the following charts) in raw power to translate apps and still run them just as fast, if not faster, than MacBooks with Intel processors.
It’s pretty astounding. Apple would like us to forget the original Rosetta from the PowerPC transition as much as we would all like to forget it. And I’m happy to say that this is pretty easy to do because I was unable to track any real performance hit when comparing it to older, even ‘more powerful on paper’ Macs like the 16” MacBook Pro.
It’s simply not a factor in most instances. And companies like Adobe and Microsoft are already hard at work bringing native M1 apps to the Mac, so the most-needed productivity and creativity apps will essentially get a free performance bump of around 30% when they go native. But even now they’re just as fast. It’s a win-win situation.
My methodology for my testing was pretty straightforward. I ran a battery of tests designed to push these laptops in ways that reflected both real world performance and tasks as well as synthetic benchmarks. I ran the benchmarks with the machines plugged in and then again on battery power to estimate constant performance as well as performance per watt. All tests were run multiple times with cooldown periods in between in order to try to achieve a solid baseline.
Here are the machines I used for testing:
Right up top, I’m going to start with the real ‘oh shit’ chart of this piece. I checked WebKit out from GitHub and ran a build on all of the machines with no parameters. This is the one deviation from the specs I mentioned above, as my 13” had issues that I couldn’t figure out, so I had some internet friends help me.
As you can see, the M1 performs admirably well across all models, with the MacBook Pro and Mac Mini edging out the MacBook Air. This is a pretty straightforward way to visualize the difference in performance that can result in heavy tasks that last over 20 minutes, where the MacBook Air’s lack of active fan cooling throttles back the M1 a bit. Even with that throttling, the MacBook Air still beats everything here except for the very beefy MacBook Pro.
But, the big deal here is really this second chart. After a single build of WebKit, the M1 MacBook Pro had a massive 91% of its battery left. I tried multiple tests here and I could have easily run a full build of WebKit 8-9 times on one charge of the M1 MacBook’s battery. In comparison, I could have gotten through about 3 on the 16” and the 13” 2020 model only had one go in it.
This insane performance per watt is the M1’s secret weapon. The battery performance is simply off the charts, even in processor-bound tasks. To give you an idea: throughout this build of WebKit, the P-cluster (the performance cores) hit peak pretty much every cycle, while the E-cluster (the efficiency cores) maintained a steady 2GHz. These things are going at it, but they’re super power efficient.
In addition to charting battery performance in some real world tests, I also ran a couple of dedicated battery tests. In some cases they ran so long I thought I had left it plugged in by mistake, it’s that good.
I ran a mixed web browsing and web video playback script that hit a series of pages, waited for 30 seconds and then moved on, to simulate browsing. The results show a pattern common across our tests, with the M1 outperforming the other MacBooks by just over 25%.
In fullscreen 4k/60 video playback, the M1 fares even better, clocking an easy 20 hours with fixed 50% brightness. On an earlier test, I left the auto-adjust on and it crossed the 24 hour mark easily. Yeah, a full day. That’s an iOS-like milestone.
The M1 MacBook Air does very well also, but its smaller battery means less playback time, at 16 hours. Both of them absolutely decimated the earlier models.
This was another developer-centric test that was requested. Once again CPU-bound, and the M1s blew away every other system in my test group: faster than the 8-core 16” MacBook Pro, wildly faster than the 13” MacBook Pro and, yes, 2x as fast as the 2019 Mac Pro with its 3.3GHz Xeons.
For a look at the power curve, and to show that there is no throttling of the MacBook Pro over this period (I never found any throttling over longer periods, by the way), here’s the usage curve.
Unified Memory and Disk Speed
Much ado has been made of Apple including only 16GB of memory on these first M1 machines. The fact of it, however, is that I have been unable to push them hard enough yet to feel any effect of this, thanks to Apple’s move to a unified memory architecture. Moving RAM onto the SoC means no upgradeability — you’re stuck on 16GB forever. But it also means massively faster access.
If I was a betting man I’d say that this was an intermediate step to eliminating RAM altogether. It’s possible that a future (far future, this is the play for now) version of Apple’s M-series chips could end up supplying memory to each of the various chips from a vast pool that also serves as permanent storage. For now, though, what you’ve got is a finite, but blazing fast, pool of memory shared between the CPU cores, GPU and other SoC denizens like the Secure Enclave and Neural Engine.
While running many applications simultaneously, the M1 performed extremely well. Because this new architecture keeps everything so close, with memory a short hop away rather than out over a PCIe bus, swapping between applications was a non-issue. Even while beefy, data-heavy tasks ran in the background, the rest of the system kept flowing.
Even when the memory pressure tab of Activity Monitor showed that macOS was using swap space, as it did from time to time, I noticed no slowdown in performance.
Though I wasn’t able to trip it up, I would guess that you would have to throw a single, extremely large file at this thing to get it to show any amount of struggle.
The SSD in the M1 MacBook Pro is running on a PCIe 3.0 bus, and its write and read speeds indicate that.
The M1 MacBook Pro has two Thunderbolt controllers, one for each port. This means that you’re going to get full PCIe 4.0 speeds out of each, and it seems very likely that Apple could include up to four ports in the future without much change in architecture.
This configuration also means that you can easily power an Apple Pro Display XDR and another monitor besides. I was unable to test two Apple Pro Display XDR monitors side-by-side.
Cooling and throttling
No matter how long my tests ran, I was never able to detect any throttling of the CPU on the M1 MacBook Pro. From our testing it was evident that in longer operations (20-40 minutes on up) it was possible to see the MacBook Air pulling back a bit over time. Not so with the MacBook Pro.
Apple says that it has designed a new ‘cooling system’ in the M1 MacBook Pro, which holds up. There is a single fan but it is noticeably quieter than either of the other fans. In fact, I was never able to get the M1 much hotter than ‘warm’ and the fan ran at speeds that were much more similar to that of a water cooled rig than the turbo engine situation in the other MacBooks.
Even a long, intense Cinebench R23 session could not make the M1 MacBook get loud. Over the course of the benchmark, the high-performance cores regularly hit 3GHz and the efficiency cores a steady 2GHz. Despite that, it continued to run very cool and very quiet in comparison to other MacBooks. It’s the stealth bomber at the Harrier party.
In that Cinebench test you can see that it doubles the multi-core performance of last year’s 13” MacBook Pro and even beats out the single-core performance of the 16” MacBook Pro.
I ran a couple of Final Cut Pro tests with my test suite. First was a 5-minute 4K60 timeline shot on iPhone 12 Pro, using audio, transitions, titles and color grading. The M1 MacBook performed fantastically, slightly beating out the 16” MacBook Pro.
With an 8K timeline of the same duration, the 16” MacBook Pro with its Radeon 5500M was able to really shine with FCP’s GPU acceleration. The M1 held its own though, showing 3x faster speeds than the 13” MacBook Pro with its integrated graphics.
And, most impressively, the M1 MacBook Pro used extremely little power to do so. Just 17% of the battery to output an 81GB 8k render. The 13” MacBook Pro could not even finish this render on one battery charge.
As you can see in these GFXBench charts, while the M1 MacBook Pro isn’t a powerhouse gaming laptop, we still got some very surprising and impressive results in a battery of Metal GPU tests. The 16″ MBP still has more raw power, but rendering games at Retina resolution is still very possible here.
The M1 is the future of CPU design
All too often over the years we’ve seen Mac releases hamstrung by the capabilities of the chips and chipsets that were being offered by Intel. Even as recently as the 16” MacBook Pro, Apple was stuck a generation or more behind. The writing was basically on the wall once the iPhone became such a massive hit that Apple began producing more chips than the entire rest of the computing industry combined.
Apple has now shipped over 2 billion chips, a scale that makes Intel’s desktop business look like a luxury manufacturer. I think it was politic of Apple to not mention them by name during last week’s announcement, but it’s also clear that Intel’s days are numbered on the Mac and that their only saving grace for the rest of the industry is that Apple is incredibly unlikely to make chips for anyone else.
Years ago I wrote an article about the iPhone’s biggest flaw being that its performance per watt limited the new experiences that it was capable of delivering. People hated that piece but I was right. Apple has spent the last decade “fixing” its battery problem by continuing to carve out massive performance gains via its A-series chips all while maintaining essentially the same (or slightly better) battery life across the iPhone lineup. No miracle battery technology has appeared, so Apple went in the opposite direction, grinding away at the chip end of the stick.
What we’re seeing today is the result of Apple flipping the switch to bring all of that power efficiency to the Mac, a device with 5x the raw battery to work with. And those results are spectacular.
GitHub defies a takedown order, Strava raises a big round and Moderna reports promising COVID-19 vaccine results. This is your Daily Crunch for November 16, 2020.
The big story: GitHub reinstates YouTube downloading project
Back in October, the Recording Industry Association of America sent a DMCA complaint to GitHub over a project called YouTube-dl, which allows viewers to download YouTube videos for offline viewing. According to the trade group, YouTube-dl both circumvented DRM and, in its documentation, promoted the piracy of several popular songs.
However, the Electronic Frontier Foundation sent GitHub a letter criticizing the RIAA’s argument and suggesting that, among other things, it mischaracterizes how YouTube-dl’s code actually works.
In response, GitHub has restored the project’s code. It also says it’s rethinking how it will handle takedown notices in the future, with a new $1 million developer defense fund and technical and legal review of any future claims filed under section 1201 of the DMCA.
The tech giants
You can now embed Apple Podcasts on the web — Apple is making it easier to discover and listen to podcasts via the web.
Apple’s IDFA gets targeted in strategic EU privacy complaints — The complaints, lodged with German and Spanish data protection authorities, contend that Apple’s setting of the IDFA breaches regional privacy laws.
Spotify adds a built-in podcast playlist creation tool, ‘Your Episodes’ — The feature lets you bookmark individual episodes from any podcast, which are then added to a new “Your Episodes” playlist.
Startups, funding and venture capital
Strava raises $110M, touts growth rate of 2 million new users per month in 2020 — Strava has 70 million members already according to the company, with presence in 195 countries globally.
Squarespace adds support for memberships and paywalled content — Squarespace’s new Member Areas allow businesses to charge for access to exclusive content.
Computer vision startup Chooch.ai scores $20M Series A — Chooch.ai hopes to help companies adopt computer vision more broadly.
Advice and analysis from Extra Crunch
Will edtech empower or erase the need for higher education? — Campuses are closed, sports have been paused and, understandably, students don’t want to pay the same tuition for a fraction of the services.
Three growth tactics that helped us surpass Noom and Weight Watchers — Over the past year, nutrition app Lifesum has acquired users at nearly twice the rate of both Noom and Weight Watchers.
Unpacking the C3.ai IPO filing — C3 is actually in pretty good financial shape, generating both growing recurring software revenues and cash in some quarters.
(Reminder: Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)
Moderna reports its COVID-19 vaccine is 94.5% effective in first data from Phase 3 trial — Following fast on the heels of Pfizer’s announcement of its COVID-19 vaccine efficacy, Moderna is also sharing positive results from its Phase 3 trial.
HBO Max arrives on Amazon Fire TV devices — As a part of the new deal, existing HBO subscribers on Amazon will be able to use the HBO Max app at no additional cost.
Original Content podcast: ‘The Vow’ offers a muddled look at the NXIVM cult — It’s a fascinating documentary hampered by some unfortunate storytelling choices.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
GitHub has restored the code of a project that the RIAA demanded it take down last month after finding that the group’s DMCA complaint was meritless. YouTube-dl, a tool that lets videos from the streaming site be downloaded for offline viewing, is back in action — and GitHub is changing its policy and earmarking a million dollars for a legal defense fund against future importunities.
The controversy began in mid-October when the RIAA sent a DMCA complaint to GitHub claiming that YouTube-dl violated the law not only by providing a tool for circumventing DRM, but by promoting the piracy of several popular songs in its documentation.
GitHub, like many tech companies, tends to assume the veracity of a complaint like this if it’s from a known entity like the RIAA, and it seems to have done so here, taking down YouTube-dl and publishing the complaint.
As many pointed out at the time, saying this project is a tool for circumventing DRM is like saying a tape recorder is a tool for music piracy. It’s used for far more than that, from research and accessibility purposes to integration with other apps for watch-later features and so on.
After a fork of YouTube-dl was created that lacked the references to popular YouTube videos as examples for use, the project was largely back online. But then GitHub received a letter from the internet freedom advocates at the Electronic Frontier Foundation and realized they’d been had.
As the EFF letter explains (and as the technically savvy GitHub must surely have suspected from the start), the YouTube-dl project was never in violation of the DMCA. In the first place, what the RIAA described as a suggestion in the documentation to pirate certain songs is only a test that streams a few seconds of those videos to show that the software is working — well within fair use rights.
More importantly, the RIAA misconstrues the way YouTube and YouTube-dl’s code work, mistaking a bit of code on the video site for encryption and concluding that the tool unlawfully circumvents it in violation of section 1201 of the DMCA. The group also cites a court case in support of this interpretation.
In fact, as the EFF explains patiently in its letter, the code does nothing of the sort and the way YouTube-dl’s agent “watches” a video is indistinguishable to YouTube from a normal user. Everything is conducted in the clear and using no secret codes or back doors. And the court case, the EFF notes, is mistaken and at any rate German and not applicable under U.S. laws.
GitHub, perhaps feeling a bit ashamed for having folded so quickly and completely in the face of a shabbily argued nastygram from the RIAA, announced several changes to prevent such occurrences in the future.
First, all copyright claims under section 1201 — which are fundamentally dubious — will receive a technical and legal review, and an independent one if necessary, to evaluate the truth of their assertions. If the findings aren’t decisive, the project will be left up while the proceedings continue, rather than taken down. Should the project seem to be in violation, its developers will be given a chance to amend it before takedown. And if a takedown does occur, the developers will still be able to access important data like pull requests and bug reports.
Second, GitHub is establishing a $1M developer defense fund that will be used to protect developers on the platform from bad section 1201 claims. After all, faced with the possibility of a court battle, many a poor or hobby developer will simply abandon their work, which is one of the outcomes being counted on by abusers of the DMCA.
And third, the company will continue its lobbying work to amend the DMCA and its equivalents around the world, with a specific focus on section 1201; it says it will announce details of that effort soon.
It’s a happy ending for this little saga, and while DMCA abuse is a serious and ongoing issue, at least the bullies didn’t get their way this time. Until the law changes this will continue to be an issue, but vigilance and strongly worded letters will do in the meantime.
Van Rossum, who was last employed by Dropbox, retired last October after six and a half years at the company. Clearly, that retirement wasn’t meant to last. At Microsoft, van Rossum says, he’ll work to “make using Python better for sure (and not just on Windows).”
A Microsoft spokesperson told us that the company also doesn’t have any additional details to share but confirmed that van Rossum has indeed joined Microsoft. “We’re excited to have him as part of the Developer Division. Microsoft is committed to contributing to and growing with the Python community, and Guido’s on-boarding is a reflection of that commitment,” the spokesperson said.
The Dutch programmer started working on what would become Python back in 1989. He continued to actively work on the language during his time at the U.S. National Institute of Standards and Technology in the mid-90s and at various companies afterward, including as Director of PythonLabs at BeOpen and Zope and at Elemental Security. Before going to Dropbox, he worked for Google from 2005 to 2012. There, he developed the internal code review tool Mondrian and worked on App Engine.
I decided that retirement was boring and have joined the Developer Division at Microsoft. To do what? Too many options to say! But it’ll make using Python better for sure (and not just on Windows :-). There’s lots of open source here. Watch this space.
— Guido van Rossum (@gvanrossum) November 12, 2020
Today, Python is among the most popular programming languages and the de facto standard for AI researchers, for example.
Only a few years ago, van Rossum joining Microsoft would’ve been unthinkable, given the company’s infamous approach to open source. That has clearly changed, and today’s Microsoft is one of the most active corporate open-source contributors among its peers — and now the owner of GitHub. It’s not clear what exactly van Rossum will do at Microsoft, but he notes that there’s “too many options to say” and that “there’s lots of open source here.”
Render, the winner of our Disrupt SF 2019 Startup Battlefield, today announced that it has added another $4.5 million onto its existing seed funding round, bringing total investment into the company to $6.75 million.
The round was led by General Catalyst, with participation from previous investors South Park Commons Fund and a group of angels that includes Lee Fixel, Elad Gil and GitHub CTO (and former VP of Engineering at Heroku) Jason Warner.
The company, which describes itself as a “Zero DevOps alternative to AWS, Azure and Google Cloud,” originally raised a $2.25 million seed round in April 2019, but it got a lot of inbound interest after winning the Disrupt Battlefield. In the end, though, the team decided to simply raise more money from its existing investors.
“We spoke to a bunch of people after Disrupt, including Ashton Kutcher’s firm, because he was one of the judges,” Render co-founder and CEO Anurag Goel explained. “In the end, we decided that we would just raise more money from our existing investors because we like them and it helped us get a better deal from our existing investors. And they were all super interested in continuing to invest.”
What makes Render stand out is that it fulfills many of the promises of Heroku and maybe Google Cloud’s App Engine. You simply tell it what kind of service you are going to deploy and it handles the deployment and manages the infrastructure for you.
“Our customers are all people who are writing code. And they just want to deploy this code really easily without having to worry about servers, or maintenance, or depending on DevOps teams — or, in many cases, hiring DevOps teams,” Goel said. “DevOps engineers are extremely expensive to hire and extremely hard to find, especially good ones. Our goal is to eliminate all of that work that DevOps people do at every company, because it’s very similar at every company.”
One new feature the company is launching today is preview environments. You can think of them as disposable staging or development environments that developers can spin up to test their code — and Render promises that the testing environment will look the same as your production environment (or you can specify changes, too). Developers can then test their updates collaboratively with QA or their product and sales teams in this environment.
Development teams on Render specify their infrastructure environments in a YAML file, and turning on the new preview environments is as easy as setting a flag in that file.
“Once they do that, then for every pull request — because we’re integrated with GitHub and GitLab — we automatically spin up a copy of that environment. That can include anything you have in production, or things like a Redis instance, or managed Postgres database, or Elasticsearch instance, or obviously APIs and web services and static sites,” Goel said. Every time you push a change to that branch or pull request, the environment is automatically updated, too. Once the pull request is closed or merged, Render destroys the environment automatically.
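As a rough sketch of what that looks like (Render’s blueprint docs define the exact schema, and the service names here are made up for illustration), enabling previews for a web service and a managed database could look something like this:

```yaml
# render.yaml (sketch; service names are hypothetical)
previewsEnabled: true     # spin up a preview environment per pull request
services:
  - type: web
    name: my-api          # hypothetical web service
    env: node
    buildCommand: yarn
    startCommand: yarn start
databases:
  - name: my-postgres     # managed Postgres copied into each preview
```

With the flag set, each pull request on the connected GitHub or GitLab repo gets its own disposable copy of this environment, updated on every push and torn down when the PR closes.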
The company will use the new funding to grow its team and build out its service. The plan, Goel tells me, is to raise a larger Series A round next year.
The new languages are Java, Kotlin, Scala, C/C++, Objective-C, C#, Go, TypeScript, HTML/CSS and Less. Kite works in most popular development environments, including the likes of VS Code, JupyterLab, Vim, Sublime and Atom, as well as all JetBrains IntelliJ-based IDEs, including Android Studio.
This will make Kite a far more attractive solution for a lot of developers. Currently, the company says, it saves its most active developers from writing about 175 “words” of code every day. One thing that always made Kite stand out is that it ranks its suggestions by relevance — not alphabetically, as some of its non-AI-driven competitors do. To build its models, Kite fed its algorithms code from GitHub.
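Kite hasn’t published its ranking model, but the difference between alphabetical and relevance ordering is easy to sketch. A toy scorer (the identifier names, counts and tie-breaking rule here are all hypothetical) might order candidate completions by how often each appears in a training corpus rather than by name:

```python
def rank_completions(candidates, corpus_counts):
    """Toy relevance ranking: order candidate completions by how
    frequently each identifier appears in a (hypothetical) corpus,
    falling back to alphabetical order to break ties.

    Real engines like Kite use learned models over far richer
    context; this only contrasts relevance with alphabetical order.
    """
    return sorted(candidates, key=lambda c: (-corpus_counts.get(c, 0), c))

counts = {"append": 9500, "add": 2100, "appendleft": 300}
candidates = ["add", "append", "appendleft", "adjust"]

print(sorted(candidates))
# alphabetical: ['add', 'adjust', 'append', 'appendleft']
print(rank_completions(candidates, counts))
# relevance-ranked: ['append', 'add', 'appendleft', 'adjust']
```

Under alphabetical ordering the most likely completion can land last in the list; a relevance-ranked list puts it first, which is where the claimed keystroke savings come from.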
The service is available as a free download for Windows users and as a server-powered paid enterprise version with a larger deep learning model that consequently offers more AI smarts, as well as the ability to create custom models. The paid version also includes support for multi-line code completion, while the free version only supports line-of-code completions.
Kite notes that in addition to adding new languages, Kite also spent the last year focusing on the user experience, which should now be less distracting and, of course, offer more relevant completions.
Over 90% of the fastest-growing open-source companies in 2020 were founded outside the San Francisco Bay Area, and 12 of the top 20 originate in Europe, according to a new study. The “ROSS Index,” created by Runa Capital, lists the fastest-growing open-source startups with public repositories on GitHub every quarter.
Interestingly, the company judged to be the fastest-growing on the latest list, Plausible, is an ‘open startup’ (all its metrics are published, including revenues) and states on its website that it is “not interested in raising funds or taking investment. Not from individuals, not from institutions and not from venture capitalists. Our business model has nothing to do with collecting and analyzing huge amounts of personal information from web users and using these behavioral insights to sell advertisements.” It says it builds a self-sustainable “privacy-friendly alternative to very popular and widely used surveillance capitalism web analytics tools”.
Admittedly, ‘GitHub stars’ are not a perfect metric for measuring the product-market fit of open-source companies. Still, the research points to a possible shift away from the VC-backed startup model of the last ten years.
There have been previous attempts to create similar lists. In 2017 Battery Ventures published its own BOSS Index, but the index was abandoned. In September 2020 Accel revealed its Open100 market map, which included many open-source startups.
The high churn rate on GitHub means the list of companies will change significantly every quarter. For instance, the latest ROSS list includes only four companies that appeared in the previous one (Q2 2020): Hugging Face, Meili, Prisma and Framer.
Of course, open-source doesn’t mean these companies will never monetize or not go on to raise venture capital.
And Runa Capital clearly has an interest in publishing the list. It has invested in several open-source startups, including Nginx (acquired by F5 Networks for $670M), MariaDB and N8N, and recently raised a $157M fund aimed at open-source startups.
The web of collaboration apps invading remote work toolkits has led to plenty of messy workflows for teams that communicate in a language of desktop screenshots and DMs. Tracing a suggestion or flagging a bug in a company’s website forces engineers or designers to make sense of the mess themselves. While task management software has given teams a funnel for the clutter, the folks at Jam question why this functionality isn’t just built straight into the product.
Jam co-founders Dani Grant and Mohd Irtefa tell TechCrunch they’ve closed on $3.5 million in seed funding and are ready to launch a public beta of their collaboration platform, which builds chat, comments and task management directly onto a website, allowing developers and designers to track issues and make suggestions quickly and simply.
The seed round was led by Union Square Ventures, where co-founder Dani Grant previously worked as an analyst. Version One Ventures, BoxGroup and Village Global also participated alongside some noteworthy angels including GitHub CTO Jason Warner, Cloudflare CEO Matthew Prince, Gumroad CEO Sahil Lavingia, and former Robinhood VP Josh Elman.
Like most modern productivity suites, Jam is heavy on integrations so users aren’t forced to upend their toolkits just to add one more product into the mix. The platform supports Slack, Jira, GitHub, Asana, Loom and Figma, with a few more in the immediate pipeline. Data syncs from one platform to the other bidirectionally so information is always fresh, Grant says. It’s all built into a tidy sidebar.
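Bidirectional sync like this is commonly implemented with some form of conflict resolution between the two sides. The sketch below is a toy model of one simple strategy, "last write wins" — it is not Jam's actual implementation, and the tool names are purely illustrative:

```python
# Toy model of bidirectional sync between two tools (illustrative
# only, not Jam's implementation): each side stores items as
# id -> (value, timestamp), and the newer edit wins on both sides.
def sync(a, b):
    """Last-write-wins sync. Mutates both dicts in place."""
    for item in set(a) | set(b):
        va = a.get(item, (None, -1))
        vb = b.get(item, (None, -1))
        newest = va if va[1] >= vb[1] else vb
        a[item] = b[item] = newest

# Hypothetical data: a bug edited later in one tool than the other.
jira  = {"BUG-1": ("open", 1)}
slack = {"BUG-1": ("closed", 2), "BUG-2": ("new", 1)}
sync(jira, slack)
print(jira["BUG-1"])  # -> ('closed', 2)
```

After the sync, both sides hold identical, up-to-date copies of every item — which is the "information is always fresh" property Grant describes.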
Grant and Irtefa met as product managers at Cloudflare, where they started brainstorming better ways to communicate feedback in a way that felt like “leaving digital sticky notes all over a product,” Grant says. That thinking ultimately pushed the duo to leave their jobs this past May and start building Jam.
The startup, like so many conceived during this period, has a remote founding story. Grant and Irtefa have only spent four days together in-person since the company was started, they raised their seed round remotely and most of the employees have never met each other in-person.
The remote team hopes their software can help other remote teams declutter their workflows and focus on what they’re building.
“On a product team, the product is the first tab everyone opens and closes,” Grant says. “So we’re on top of your product instead of on some other platform.”
Grid AI, a startup founded by William Falcon, the inventor of the popular open-source PyTorch Lightning project, to help machine learning engineers work more efficiently, today announced that it has raised an $18.6 million Series A funding round, which closed earlier this summer. The round was led by Index Ventures, with participation from Bain Capital Ventures and firstminute.
Falcon co-founded the company with Luis Capelo, who was previously the head of machine learning at Glossier. Unsurprisingly, the idea here is to take PyTorch Lightning, which launched about a year ago, and turn that into the core of Grid’s service. The main idea behind Lightning is to decouple the data science from the engineering.
The team argues that a few years ago, when data scientists tried to get started with deep learning, they didn’t always have the right expertise and it was hard for them to get everything right.
“Now the industry has an unhealthy aversion to deep learning because of this,” Falcon noted. “Lightning and Grid embed all those tricks into the workflow so you no longer need to be a PhD in AI nor [have] the resources of the major AI companies to get these things to work. This makes the opportunity cost of putting a simple model against a sophisticated neural network a few hours’ worth of effort instead of the months it used to take. When you use Lightning and Grid it’s hard to make mistakes. It’s like if you take a bad photo with your phone but we are the phone and make that photo look super professional AND teach you how to get there on your own.”
As Falcon noted, Grid is meant to help data scientists and other ML professionals “scale to match the workloads required for enterprise use cases.” Lightning itself can get them partially there, but Grid is meant to provide all of the services its users need to scale up their models to solve real-world problems.
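The decoupling idea can be sketched in plain Python (this is a toy analogy, not Lightning's or Grid's real API): the researcher specifies only the per-batch computation, while a generic trainer owns the loop — and, in the real system, the GPUs, checkpointing and distributed-training machinery.

```python
# Toy sketch of "decouple the data science from the engineering"
# (illustrative pattern only, not PyTorch Lightning's actual API).
class Module:
    # Science side: subclasses define only what happens per batch.
    def training_step(self, batch):
        raise NotImplementedError

class Trainer:
    # Engineering side: owns iteration and bookkeeping; the real
    # thing also handles devices, scaling and fault tolerance.
    def fit(self, module, data, epochs=1):
        losses = []
        for _ in range(epochs):
            for batch in data:
                losses.append(module.training_step(batch))
        return losses

class MeanAbsModel(Module):
    # Hypothetical "model": mean absolute value of a batch as a loss.
    def training_step(self, batch):
        return sum(abs(x) for x in batch) / len(batch)

print(Trainer().fit(MeanAbsModel(), [[1, -2], [3, -4]]))
# -> [1.5, 3.5]
```

The point of the pattern is that swapping the engineering (say, from a laptop loop to 400 cloud GPUs) requires no change to the model class.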
What exactly that looks like isn’t quite clear yet, though. “Imagine you can find any GitHub repository out there. You get a local copy on your laptop and without making any code changes you spin up 400 GPUs on AWS — all from your laptop using either a web app or command-line-interface. That’s the Lightning “magic” applied to training and building models at scale,” Falcon said. “It is what we are already known for and has proven to be such a successful paradigm shift that all the other frameworks like Keras or TensorFlow, and companies have taken notice and have started to modify what they do to try to match what we do.”
The service is now in private beta.
With this new funding, Grid, which currently has 25 employees, plans to expand its team and strengthen its corporate offering via both Grid AI and through the open-source project. Falcon tells me that he aims to build a diverse team, not least because he himself is an immigrant, born in Venezuela, and a U.S. military veteran.
“I have first-hand knowledge of the extent that unethical AI can have,” he said. “As a result, we have approached hiring our current 25 employees across many backgrounds and experiences. We might be the first AI company that is not all the same Silicon Valley prototype tech-bro.”
“Lightning’s open-source traction piqued my interest when I first learned about it a year ago,” Index Ventures’ Sarah Cannon told me. “So intrigued in fact I remember rushing into a closet in Helsinki while at a conference to have the privacy needed to hear exactly what Will and Luis had built. I promptly called my colleague Bryan Offutt who met Will and Luis in SF and was impressed by the ‘elegance’ of their code. We swiftly decided to participate in their seed round, days later. We feel very privileged to be part of Grid’s journey. After investing in seed, we spent a significant amount [of time] with the team, and the more time we spent with them the more conviction we developed. Less than a year later and pre-launch, we knew we wanted to lead their Series A.”
Atlassian today announced the launch of Atlassian Ventures, a new $50 million fund that will invest into startups — and even more established companies — that are building products in the overall Atlassian ecosystem.
“As more and more customers transition to our cloud products, we are committed to supporting their journey by fostering a robust ecosystem of cloud-based apps that enhance their experience and satisfy all use cases,” Chris Hecht, Atlassian’s head of Corporate Development, writes in today’s announcement. “We are incredibly proud of the 4,200+ apps already available in our Marketplace and the integrations we already offer with popular tools like Slack, Zendesk, and GitHub. But this is no time to rest on our laurels. Atlassian Ventures will facilitate our continued investment in the best-of-breed tools and integrations our customers need to fuel the next wave of innovation and manage their work, both now and into the future.”
But it will also invest in established companies that are working to scale their businesses. Given the size of the fund, it’s maybe no surprise that the firm will partner with other VCs to make these investments. Hecht cites Atlassian’s existing investments in Zoom, Slack, InVision, process.st and Split.io as examples of this.
In addition to these two groups, the fund will also invest into members of the Atlassian Partner Program that are “looking to augment their cloud services and/or create new products that support the future of work.”
ZenHub, the popular project management solution for GitHub users, today announced the launch of its new features for automating hand-offs between teams. The idea behind Automated Workflows, as it is called, is to remove some of the manual busywork of updating multiple boards across teams when a new patch is ready to go to testing, for example (or when it fails those tests and the development team has to fix it).
As ZenHub founder and CEO Aaron Upright told me, Automated Workflows are only the first step in the company’s journey toward being not just the most integrated service on GitHub but also the most automated.
Teams still struggle with the mechanics of agile project management, he noted. “Things like what frameworks to choose. How to organize their projects. You talk to small companies and teams, you talk to large companies — it’s a problem for everyone, where people don’t know if they should be Scrum, or Kanban or how to organize Sprint planning meetings.” What ZenHub wants to do is remove as many of these friction points as possible and automate them for teams.
It’s starting with the hand-off between teams because that’s one of the pain points its customers are struggling with all the time. And since teams tend to have their own projects and workspaces, the ZenHub team had to build a solution that worked across a company’s various boards.
The result is a new tool that is pretty much a drag-and-drop service that automatically creates notifications and moves items between workspaces as they move from QA to production, for example.
“It’s a way to automate work between different workspaces,” explained Upright. “And we’re really excited about this being kind of the first step in our automation journey.”
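A cross-workspace rule of this kind can be sketched as follows — a hypothetical model only, with invented names, not ZenHub's actual API: when an issue lands in one board's trigger pipeline, the rule places it in another board's target pipeline.

```python
# Hypothetical sketch of a cross-workspace automation rule in the
# spirit of ZenHub's Automated Workflows (all names invented).
class Workspace:
    def __init__(self, name):
        self.name = name
        self.pipelines = {}  # pipeline name -> list of issue ids
    def move(self, issue, pipeline):
        # Remove the issue from any pipeline it currently sits in,
        # then place it in the target pipeline.
        for items in self.pipelines.values():
            if issue in items:
                items.remove(issue)
        self.pipelines.setdefault(pipeline, []).append(issue)

class WorkflowRule:
    """When `issue` sits in src's trigger pipeline, surface it in dst."""
    def __init__(self, src, src_pipeline, dst, dst_pipeline):
        self.src, self.src_pipeline = src, src_pipeline
        self.dst, self.dst_pipeline = dst, dst_pipeline
    def apply(self, issue):
        if issue in self.src.pipelines.get(self.src_pipeline, []):
            self.dst.move(issue, self.dst_pipeline)

dev, qa = Workspace("Dev"), Workspace("QA")
dev.move("fix-login-bug", "Ready for QA")
WorkflowRule(dev, "Ready for QA", qa, "To Test").apply("fix-login-bug")
print(qa.pipelines["To Test"])  # -> ['fix-login-bug']
```

The hand-off happens without anyone on the QA team touching their board — the issue simply appears in their "To Test" pipeline.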
Over time, Upright expects, the team will be able to use machine learning to understand more about the connections its users are making between teams. Using that data, its systems may also be able to recommend workflows.
The next part of ZenHub’s focus on automation will be a tool for managing the Sprint planning process.
“Already today, ZenHub is capturing things like velocity. We’re measuring that on a team by team basis. We understand the priority of issues in our workflow. What we want to be able to do is allow teams to automatically set a Sprint schedule, say, for example, every two weeks. Then, based on the velocity that we know about your team, maybe your team can accomplish 50 story points every two weeks — we want to auto-build that Sprint for you.”
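The auto-build idea Upright describes amounts to filling a sprint from a prioritized backlog under a velocity budget. Here is a minimal sketch of that, with invented issues and numbers — ZenHub's actual logic is presumably more sophisticated:

```python
# Illustrative sketch (not ZenHub's algorithm): auto-build a sprint
# by taking issues in priority order until their story points would
# exceed the team's measured velocity.
def build_sprint(backlog, velocity):
    """backlog: list of (issue, story_points) in priority order."""
    sprint, used = [], 0
    for issue, points in backlog:
        if used + points <= velocity:
            sprint.append(issue)
            used += points
    return sprint

# Hypothetical backlog for a team with a velocity of 50 points
# per two-week sprint.
backlog = [("auth refactor", 21), ("search bug", 8),
           ("dark mode", 13), ("onboarding copy", 5), ("perf audit", 20)]
print(build_sprint(backlog, velocity=50))
# -> ['auth refactor', 'search bug', 'dark mode', 'onboarding copy']
```

The 20-point "perf audit" is deferred because it would push the sprint past the team's 50-point velocity; it would lead the next sprint instead.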