Cloud security startup Monad, which offers a platform for extracting and connecting data from various security tools, has launched from stealth with $17 million in Series A funding led by Index Ventures.
Monad was founded on the belief that enterprise cybersecurity is a growing data management challenge, as organizations try to understand and interpret the masses of information siloed within disconnected logs and databases. Once an organization has extracted data from its security tools, Monad’s Security Data Platform enables it to centralize that data within a data warehouse of choice, then normalize and enrich the data so that security teams have the insights they need to secure their systems and data effectively.
“Security is fundamentally a big data problem,” said Christian Almenar, CEO and co-founder of Monad. “Customers are often unable to access their security data in the streamlined manner that DevOps and cloud engineering teams need to build their apps quickly while also addressing their most pressing security and compliance challenges. We founded Monad to solve this security data challenge and liberate customers’ security data from siloed tools to make it accessible via any data warehouse of choice.”
The startup’s Series A funding round, which was also backed by Sequoia Capital, brings its total amount of investment raised to $19 million and comes 12 months after its Sequoia-led seed round. The funds will enable Monad to scale its development efforts for its security data cloud platform, the startup said.
Monad was founded in May 2020 by security veterans Christian Almenar and Jacolon Walker. Almenar previously co-founded serverless security startup Intrinsic which was acquired by VMware in 2019, while Walker served as CISO and security engineer at OpenDoor, Collective Health, and Palantir.
The Pareto principle, also known as the 80-20 rule, asserts that 80% of consequences come from 20% of causes, rendering the remainder far less impactful.
Those working with data may have heard a different rendition of the 80-20 rule: A data scientist spends 80% of their time at work cleaning up messy data as opposed to doing actual analysis or generating insights. Imagine a 30-minute drive expanded to two-and-a-half hours by traffic jams, and you’ll get the picture.
While most data scientists spend more than 20% of their time at work on actual analysis, they still have to waste countless hours turning a trove of messy data into a tidy dataset ready for analysis. This process can include removing duplicate data, making sure all entries are formatted correctly and doing other preparatory work.
On average, this workflow stage takes up about 45% of the total time, a recent Anaconda survey found. An earlier poll by CrowdFlower put the estimate at 60%, and many other surveys cite figures in this range.
None of this is to say data preparation is not important. “Garbage in, garbage out” is a well-known rule in computer science circles, and it applies to data science, too. In the best-case scenario, the script will just return an error, warning that it cannot calculate the average spending per client, because the entry for customer #1527 is formatted as text, not as a numeral. In the worst case, the company will act on insights that have little to do with reality.
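The failure mode described above is easy to reproduce. The sketch below is a hypothetical illustration (the customer IDs and amounts are invented, and real pipelines would use a data-frame library rather than plain dictionaries): a single text-formatted entry makes a naive average impossible, so the cleaning step coerces what it can and flags the rest for review.

```python
# Hypothetical illustration: one text-formatted entry breaks a numeric rollup.
# Customer IDs and spending amounts are invented for this sketch.
raw_spending = {
    1525: 120.0,
    1526: 85.5,
    1527: "eighty-three dollars",  # entered as text, not as a numeral
    1528: 64.0,
}

def clean(values):
    """Keep entries that can be read as numbers; set aside the rest."""
    usable, rejected = {}, {}
    for customer, amount in values.items():
        try:
            usable[customer] = float(amount)
        except (TypeError, ValueError):
            rejected[customer] = amount
    return usable, rejected

usable, rejected = clean(raw_spending)
average = sum(usable.values()) / len(usable)
print(f"average spend: {average:.2f}")        # computed from the clean rows only
print(f"needs manual review: {rejected}")     # customer #1527's text entry
```

Note that the "best-case scenario" from the paragraph above is exactly what happens without the `try`/`except`: `float("eighty-three dollars")` raises an error and the script stops before producing a misleading number.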
The real question to ask here is whether re-formatting the data for customer #1527 is really the best way to use the time of a well-paid expert. The average data scientist is paid between $95,000 and $120,000 per year, according to various estimates. Having an employee at that pay level focus on mind-numbing, non-expert tasks is a waste both of their time and the company’s money. Besides, real-world data has a lifespan, and if a dataset for a time-sensitive project takes too long to collect and process, it can be outdated before any analysis is done.
What’s more, companies’ quests for data often include wasting the time of non-data-focused personnel, with employees asked to help fetch or produce data instead of working on their regular responsibilities. More than half of the data collected by companies often goes unused at all, suggesting that the time of everyone involved in the collection has been wasted to produce nothing but operational delay and the associated losses.
The data that has been collected, on the other hand, is often only used by a designated data science team that is too overworked to go through everything that is available.
The issues outlined here all play into the fact that, save for data pioneers like Google and Facebook, companies are still wrapping their heads around how to re-imagine themselves for the data-driven era. Data is pulled into huge databases and data scientists are left with a lot of cleaning to do, while others, whose time was wasted on helping fetch the data, rarely benefit from it.
The truth is, we are still early when it comes to data transformation. The success of tech giants that put data at the core of their business models set off a spark that is only beginning to spread. And even though the results are mixed for now, this is a sign that companies have yet to master thinking with data.
Data holds much value, and businesses are very much aware of it, as showcased by the appetite for AI experts in non-tech companies. Companies just have to do it right, and one of the key tasks in this respect is to start focusing on people as much as we do on AIs.
Data can enhance the operations of virtually any component within the organizational structure of any business. As tempting as it may be to think of a future where there is a machine learning model for every business process, we do not need to tread that far right now. The goal for any company looking to tap data today comes down to getting it from point A to point B. Point A is the part in the workflow where data is being collected, and point B is the person who needs this data for decision-making.
Importantly, point B does not have to be a data scientist. It could be a manager trying to figure out the optimal workflow design, an engineer looking for flaws in a manufacturing process or a UI designer doing A/B testing on a specific feature. All of these people must have the data they need at hand all the time, ready to be processed for insights.
People can thrive with data just as well as models, especially if the company invests in them and makes sure to equip them with basic analysis skills. In this approach, accessibility must be the name of the game.
Skeptics may claim that big data is nothing but an overused corporate buzzword, but advanced analytics capacities can enhance the bottom line for any company as long as it comes with a clear plan and appropriate expectations. The first step is to focus on making data accessible and easy to use and not on hauling in as much data as possible.
In other words, an all-around data culture is just as important for an enterprise as the data infrastructure.
Luxembourg’s National Commission for Data Protection (CNPD) has hit Amazon with a record-breaking €746 million ($887 million) GDPR fine over the way it uses customer data for targeted advertising purposes.
Amazon disclosed the ruling in an SEC filing on Friday in which it slammed the decision as baseless and added that it intended to defend itself “vigorously in this matter.”
“Maintaining the security of our customers’ information and their trust are top priorities,” an Amazon spokesperson said in a statement. “There has been no data breach, and no customer data has been exposed to any third party. These facts are undisputed.
“We strongly disagree with the CNPD’s ruling, and we intend to appeal. The decision relating to how we show customers relevant advertising relies on subjective and untested interpretations of European privacy law, and the proposed fine is entirely out of proportion with even that interpretation.”
The penalty is the result of a 2018 complaint by French privacy rights group La Quadrature du Net, a group that claims to represent the interests of thousands of Europeans to ensure their data isn’t used by big tech companies to manipulate their behavior for political or commercial purposes. The complaint, which also targets Apple, Facebook, Google and LinkedIn and was filed on behalf of more than 10,000 customers, alleges that Amazon manipulates customers for commercial means by choosing what advertising and information they receive.
La Quadrature du Net welcomed the fine issued by the CNPD, which “comes after three years of silence that made us fear the worst.”
“The model of economic domination based on the exploitation of our privacy and free will is profoundly illegitimate and contrary to all the values that our democratic societies claim to defend,” the group added in a blog post published on Friday.
The CNPD has also ruled that Amazon must commit to changing its business practices. However, the regulator has not publicly commented on its decision, and Amazon didn’t specify what revised business practices it is proposing.
The record penalty, which trumps the €50 million GDPR penalty levied against Google in 2019, comes amid heightened scrutiny of Amazon’s business in Europe. In November last year, the European Commission announced formal antitrust charges against the company, saying the retailer has misused its position to compete against third-party businesses using its platform. At the same time, the Commission opened a second investigation into its alleged preferential treatment of its own products on its site and those of its partners.
Canadian e-commerce juggernaut Shopify this morning reported its second-quarter financial performance. Like Microsoft and Apple in the wake of their after-hours earnings reports, its shares are having a muted reaction to the better-than-expected results.
In the second quarter of 2021, Shopify reported revenues of $1.12 billion, up 57% on a year-over-year basis. The company’s subscription products grew 70% to $334.2 million, while its volume-driven merchant services drove their own top line up 52% to $785.2 million.
Investors had expected Shopify to report revenue of $1.05 billion.
Shopify also posted an enormous second-quarter profit. Indeed, from its $1.12 billion in total revenues, Shopify managed to generate $879.1 million in GAAP net income. How? The outsized profit came in part thanks to $778 million in unrealized gains related to equity investments. But even with those gains filtered out, Shopify’s adjusted net income of $284.6 million more than doubled its year-ago Q2 result of $129.4 million. Shopify’s earnings per share sans unrealized gains came to $2.24, far ahead of an expected 97 cents.
After reporting those results, Shopify shares are up less than a point.
In light of somewhat muted reactions to Big Tech earnings surpassing expectations, it’s increasingly clear that investors were anticipating that leading tech companies would trounce expectations in the second quarter; their earnings beats were largely priced in ahead of the individual reports.
The rest of Shopify’s quarter is a series of huge figures. In the second three-month period of 2021, the company posted gross merchandise volume (GMV) of $42.2 billion, up 40% compared to the year-ago period. That was more than a billion dollars ahead of expectations. And the company’s monthly recurring revenue (MRR) grew 67% to $95.1 million in the quarter. That’s quick.
Shopify is priced as if the growth will continue. Using its Q2 revenue result to generate an annual run rate for the firm, Shopify is currently valued at around 43x its present top line. That’s aggressive for a company that generates a minority of its revenues from recurring software fees, an investor favorite. Instead, investors seem content to pay what is effectively top dollar for the company’s blend of GMV-based service revenues and more traditional software incomes.
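For readers who want to check the multiple, the arithmetic is straightforward. The implied valuation below is derived from the article’s own figures (the ~$193 billion result follows from the 43x multiple; it is not stated in the article itself):

```python
# Back-of-the-envelope check of the run-rate multiple cited above.
# The implied valuation is derived from the article's 43x figure, not stated in it.
q2_revenue = 1.12e9          # Shopify Q2 2021 revenue, per the article
run_rate = q2_revenue * 4    # naive annualization: four equal quarters
multiple = 43                # the article's price / run-rate multiple
implied_valuation = run_rate * multiple

print(f"annual run rate: ${run_rate / 1e9:.2f}B")             # $4.48B
print(f"implied valuation: ~${implied_valuation / 1e9:.0f}B")  # ~$193B
```

Annualizing a single quarter like this assumes flat sequential revenue, which understates a company still growing 57% year over year; that is part of why run-rate multiples for high-growth firms look so aggressive.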
Consider the public markets bullish on the continued pace of e-commerce growth.
It will be interesting to see how BigCommerce, a Shopify competitor and fellow public company, performs when it reports earnings in early August. Shares of BigCommerce are up more than 3% today in the wake of Shopify’s results. Ironic given Shopify’s relaxed market reaction to its own results? Sure, but who said the public markets are fair?
The Biden administration tripled down on its commitment to reining in powerful tech companies Tuesday, nominating committed Big Tech critic Jonathan Kanter to lead the Justice Department’s antitrust division.
Kanter is a lawyer with a long track record of representing smaller companies like Yelp in antitrust cases against Google. He currently practices law at his own firm, which specializes in advocacy for state and federal antitrust enforcement.
“Throughout his career, Kanter has also been a leading advocate and expert in the effort to promote strong and meaningful antitrust enforcement and competition policy,” the White House press release stated. Progressives celebrated the nomination as a win, though some of Biden’s new antitrust hawks have enjoyed support from both political parties.
Jonathan Kanter's nomination to lead @TheJusticeDept’s Antitrust Division is tremendous news for workers and consumers. He’s been a leader in the fight to check consolidated corporate power and strengthen competition in our markets. https://t.co/mLQACA0c4j
— Elizabeth Warren (@SenWarren) July 20, 2021
The Justice Department already has a major antitrust suit against Google in the works. The lawsuit, filed by Trump’s own Justice Department, accuses the company of “unlawfully maintaining monopolies” through anti-competitive practices in its search and search advertising businesses. If successfully confirmed, Kanter would be positioned to steer the DOJ’s big case against Google.
In a 2016 NYT op-ed, Kanter argued that Google is notorious for relying on an anti-competitive “playbook” to maintain its market dominance. Kanter pointed to Google’s long history of releasing free ad-supported products and eventually restricting competition through “discriminatory and exclusionary practices” in a given corner of the market.
Kanter is just the latest high-profile Big Tech critic to be elevated to a major regulatory role under Biden. Last month, Biden named fierce Amazon critic Lina Khan as FTC chair upon her confirmation to the agency. In March, Biden named another noted Big Tech critic, Columbia law professor Tim Wu, to the National Economic Council as a special assistant for tech and competition policy.
All signs point to the Biden White House gearing up for a major federal fight with Big Tech. Congress is working on a set of Big Tech bills, but in lieu of — or in tandem with — legislative reform, the White House can flex its own regulatory muscle through the FTC and DOJ.
In new comments to MSNBC, the White House confirmed that it is also “reviewing” Section 230 of the Communications Decency Act, a potent snippet of law that protects platforms from liability for user-generated content.