Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.
This week: a startup using drones to map forests, a look at how machine learning can map social media networks and predict Alzheimer’s, improved computer vision for space-based sensors and other recent technological advances.
Machine learning tools are being used to aid diagnosis in many ways, since they’re sensitive to patterns that humans find difficult to detect. IBM researchers have potentially found such patterns in speech that are predictive of the speaker developing Alzheimer’s disease.
The system only needs a couple of minutes of ordinary speech in a clinical setting. The team used a large set of data (the Framingham Heart Study) going back to 1948, allowing patterns of speech to be identified in people who would later develop Alzheimer’s. The accuracy rate is about 71%, or 0.74 area under the curve (AUC) for the more statistically informed. That’s far from a sure thing, but current basic tests are barely better than a coin flip at predicting the disease this far ahead of time.
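For readers unfamiliar with the metric: AUC measures how often a model ranks a true positive above a true negative, so 0.5 is a coin flip and 1.0 is perfect. A minimal sketch, with made-up scores rather than the study’s data:

```python
def auc(labels, scores):
    # Area under the ROC curve: the probability that a randomly chosen
    # positive example is scored higher than a randomly chosen negative one.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    pairs = [(p, n) for p in pos for n in neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Toy example: a classifier scoring six speech samples for risk.
labels = [1, 1, 1, 0, 0, 0]              # 1 = later developed the disease
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]  # hypothetical model risk scores
print(auc(labels, scores))  # 8/9 ≈ 0.89 here; 0.5 would be a coin flip
```

Note that unlike raw accuracy, AUC doesn’t depend on picking a decision threshold, which is why papers often report it instead.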
This is very important because the earlier Alzheimer’s can be detected, the better it can be managed. There’s no cure, but there are promising treatments and practices that can delay or mitigate the worst symptoms. A non-invasive, quick test of well people like this one could be a powerful new screening tool and is also, of course, an excellent demonstration of the usefulness of this field of tech.
(Don’t read the paper expecting to find exact symptoms or anything like that — the array of speech features aren’t really the kind of thing you can look out for in everyday life.)
Making sure your deep learning network generalizes to data outside its training environment is a key part of any serious ML research. But few attempt to set a model loose on data that’s completely foreign to it. Perhaps they should!
Researchers from Uppsala University in Sweden took a model used to identify groups and connections in social media, and applied it (not unmodified, of course) to tissue scans. The tissue had been treated so that the resultant images produced thousands of tiny dots representing mRNA.
Normally the different groups of cells, representing types and areas of tissue, would need to be manually identified and labeled. But the graph neural network, created to identify social groups based on similarities like common interests in a virtual space, proved it could perform a similar task on cells. (See the image at top.)
“We’re using the latest AI methods — specifically, graph neural networks, developed to analyze social networks — and adapting them to understand biological patterns and successive variation in tissue samples. The cells are comparable to social groupings that can be defined according to the activities they share in their social networks,” said Uppsala’s Carolina Wählby.
It’s an interesting illustration not just of the flexibility of neural networks, but of how structures and architectures repeat at all scales and in all contexts. As without, so within, if you will.
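The Uppsala paper’s actual model is a graph neural network, but the underlying idea of grouping nodes by who they connect to can be sketched with a far simpler classic, label propagation, on made-up data:

```python
import random

def label_propagation(edges, n_nodes, seed=0, rounds=20):
    """Tiny community detection: each node repeatedly adopts the most
    common label among its neighbors until labels stabilize."""
    rng = random.Random(seed)
    nbrs = {i: [] for i in range(n_nodes)}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    labels = list(range(n_nodes))  # every node starts in its own group
    for _ in range(rounds):
        order = list(range(n_nodes))
        rng.shuffle(order)
        for node in order:
            if not nbrs[node]:
                continue
            counts = {}
            for nb in nbrs[node]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            # adopt the majority neighbor label (ties broken by smaller label)
            labels[node] = max(counts, key=lambda l: (counts[l], -l))
    return labels

# Two clearly separated "social circles" (or clusters of similar cells)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
labels = label_propagation(edges, 6)
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```

A GNN goes further by learning node features (here, mRNA dot patterns) alongside the connectivity, but the clustering intuition is the same.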
The vast forests of our national parks and timber farms have countless trees, but you can’t put “countless” on the paperwork. Someone has to make an actual estimate of how well various regions are growing, the density and types of trees, the range of disease or wildfire, and so on. This process is only partly automated, as aerial photography and scans only reveal so much, while on-the-ground observation is detailed but extremely slow and limited.
Treeswift aims to take a middle path by equipping drones with the sensors they need to both navigate and accurately measure the forest. By flying through much faster than a walking person, they can count trees, watch for problems and generally collect a ton of useful data. The company is still very early-stage, having spun out of the University of Pennsylvania and acquired an SBIR grant from the NSF.
“Companies are looking more and more to forest resources to combat climate change but you don’t have a supply of people who are growing to meet that need,” Steven Chen, co-founder and CEO of Treeswift and a doctoral student in Computer and Information Science (CIS) at Penn Engineering said in a Penn news story. “I want to help make each forester do what they do with greater efficiency. These robots will not replace human jobs. Instead, they’re providing new tools to the people who have the insight and the passion to manage our forests.”
Another area where drones are making lots of interesting moves is underwater. Oceangoing autonomous submersibles are helping map the sea floor, track ice shelves and follow whales. But they all share a bit of an Achilles’ heel: they need to periodically be picked up and charged, and their data retrieved.
Purdue engineering professor Nina Mahmoudian has created a docking system by which submersibles can easily and automatically connect for power and data exchange.
A yellow marine robot (left, underwater) finds its way to a mobile docking station to recharge and upload data before continuing a task. (Purdue University photo/Jared Pike)
The craft needs a special nosecone, which can find and plug into a station that establishes a safe connection. The station can be an autonomous watercraft itself, or a permanent feature somewhere — what matters is that the smaller craft can make a pit stop to recharge and debrief before moving on. If it’s lost (a real danger at sea), its data won’t be lost with it.
You can see the setup in action below:
Drones may soon become fixtures of city life as well, though we’re probably some ways from the automated private helicopters some seem to think are just around the corner. But living under a drone highway means constant noise — so people are always looking for ways to reduce turbulence and resultant sound from wings and propellers.
Researchers at the King Abdullah University of Science and Technology found a new, more efficient way to simulate the airflow in these situations; fluid dynamics is essentially as complex as you make it, so the trick is to apply your computing power to the right parts of the problem. They rendered only the flow near the surface of the theoretical aircraft in high resolution, finding that past a certain distance there was little point in knowing exactly what was happening. Improvements to models of reality don’t always need to be better in every way — after all, the results are what matter.
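This is not the KAUST team’s method, but the general idea of spending resolution where it matters can be illustrated with a geometrically stretched grid, a standard trick in computational fluid dynamics:

```python
def stretched_grid(n, total_length, ratio=1.15):
    """Nonuniform 1D grid: cells start tiny at the surface (x = 0) and grow
    geometrically, concentrating resolution where the flow is complex."""
    widths = [ratio ** i for i in range(n)]      # each cell 15% wider than the last
    scale = total_length / sum(widths)           # normalize to the domain size
    xs, x = [0.0], 0.0
    for w in widths:
        x += w * scale
        xs.append(x)
    return xs

grid = stretched_grid(20, 1.0)
first, last = grid[1] - grid[0], grid[-1] - grid[-2]
print(round(last / first, 1))  # ~14x: the farthest cell is 1.15**19 ≈ 14.2 times wider
```

The same 21 grid points, spread uniformly, would either waste resolution far from the surface or starve the region near it.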
Computer vision algorithms have come a long way, and as their efficiency improves they are beginning to be deployed at the edge rather than at data centers. In fact it’s become fairly common for camera-bearing objects like phones and IoT devices to do some local ML work on the image. But in space it’s another story.
Performing ML work in space was until fairly recently simply too expensive power-wise to even consider. That’s power that could be used to capture another image, transmit the data to the surface, etc. HyperScout 2 is exploring the possibility of ML work in space, and its satellite has begun applying computer vision techniques immediately to the images it collects before sending them down. (“Here’s a cloud — here’s Portugal — here’s a volcano…”)
For now there’s little practical benefit, but object detection can be combined with other functions easily to create new use cases, from saving power when no objects of interest are present, to passing metadata to other tools that may work better if informed.
Machine learning models are great at making educated guesses, and in disciplines where there’s a large backlog of unsorted or poorly documented data, it can be very useful to let an AI make a first pass so that graduate students can use their time more productively. The Library of Congress is doing it with old newspapers, and now Carnegie Mellon University’s libraries are getting into the spirit.
CMU’s million-item photo archive is in the process of being digitized, but to make it useful to historians and curious browsers it needs to be organized and tagged — so computer vision algorithms are being put to work grouping similar images, identifying objects and locations, and doing other valuable basic cataloguing tasks.
“Even a partly successful project would greatly improve the collection metadata, and could provide a possible solution for metadata generation if the archives were ever funded to digitize the entire collection,” said CMU’s Matt Lincoln.
A very different project, yet one that seems somehow connected, is this work by a student at the Escola Politécnica da Universidade de Pernambuco in Brazil, who had the bright idea to try sprucing up some old maps with machine learning.
The tool they used takes old line-drawing maps and attempts to create a sort of satellite image based on them using a Generative Adversarial Network; GANs essentially attempt to trick themselves into creating content they can’t tell apart from the real thing.
Well, the results aren’t what you might call completely convincing, but it’s still promising. Such maps are rarely accurate but that doesn’t mean they’re completely abstract — recreating them in the context of modern mapping techniques is a fun idea that might help these locations seem less distant.
“The Social Dilemma” is opening eyes and changing digital lives for Netflix bingers across the globe. The filmmakers explore social media and its effects on society, raising some crucial points about impacts on mental health, politics and the myriad ways firms leverage user data. It interweaves interviews with industry executives and developers who discuss how social sites can manipulate human psychology to drive deeper engagement and time spent within the platforms.
Despite the glaring issues present with social media platforms, people still crave digital attention, especially during a pandemic, where in-person connections are strained if not impossible.
So, how can the industry change for the better? Here are three ways social media should adapt to create happier and healthier interpersonal connections and news consumption.
On most platforms, like Facebook and Instagram, the company determines some of the information presented to users. This opens the platform to manipulation by bad actors and raises questions about who exactly is dictating what information is seen and what is not. What are the motivations behind those decisions? And some of the platforms dispute their role in this process, with Mark Zuckerberg saying in 2019, “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.”
These censorship concerns could be addressed with a restructured type of social platform. For example, consider a platform that does not rely on advertiser dollars. If a social platform is free for basic users but monetized through a subscription model, there is no need to use an information-gathering algorithm to determine which news and content are served to users.
This type of platform is not a ripe target for manipulation because users only see information from people they know and trust, not advertisers or random third parties. Manipulation on major social channels happens frequently when people create zombie accounts to flood content with fake “likes” and “views” to influence what other users see. It’s commonly exposed as a tactic for election meddling, where agents use social media to promote false statements. This type of action is a fundamental flaw of social algorithms that use AI to decide what to censor and what to promote.
The issues raised by “The Social Dilemma” should reinforce the need for social platforms to self-regulate their content and user dynamics and operate ethically. They should review their most manipulative technologies that cause isolation, depression and other issues and instead find ways to promote community, progressive action and other positive attributes.
A major change required to bring this about is to eliminate or reduce in-platform advertising. An ad-free model means the platform does not need to aggressively push unsolicited content from unsolicited sources. When ads are the main driver for a platform, then the social company has a vested interest in using every psychological and algorithm-based trick to keep the user on the platform. It’s a numbers game that puts profit over users.
More people multiplied by more time on the site equals ad exposure and ad engagement and that means revenue. An ad-free model frees a platform from trying to elicit emotional responses based on a user’s past actions, all to keep them trapped on the site, perhaps to an addictive degree.
A common form of clickbait is found on the typical social search page. A user clicks on an image or preview video that suggests a certain type of content, but upon clicking they are brought to unrelated content. It’s a technique that can be used to spread misinformation, which is especially dangerous for viewers who rely on social platforms for their news consumption, instead of traditional outlets. According to the Pew Research Center, 55% of adults get their news from social media “often” or “sometimes.” This causes a significant problem when clickbait articles make it easier to offer distorted “fake news” stories.
Unfortunately, when users engage with clickbait content, they are effectively “voting” for that information. That seemingly innocuous action creates a financial incentive for others to create and disseminate further clickbait. Social media platforms should aggressively ban or limit clickbait. Management at Facebook and other firms often counter with a “free speech” argument when it comes to stopping clickbait. But the intent is not to censor controversial topics; it is to protect users from false content. It’s about cultivating trust and information sharing, which is much easier to accomplish when post content is backed by facts.
“The Social Dilemma” is rightfully an important film that encourages a vital dialogue about the role social media and social platforms play in everyday life. The industry needs to change to create more engaged and genuine spaces for people to connect without preying on human psychology.
A tall order, but one that should benefit both users and platforms in the long term. Social media still creates important digital connections and functions as a catalyst for positive change and discussion. It’s time for platforms to take note and take responsibility for these needed changes, and opportunities will arise for smaller, emerging platforms taking a different, less-manipulative approach.
In an overcrowded market of online fashion brands, consumers are spoilt for choice on what site to visit. They are generally forced to visit each brand one by one, manually filtering down to what they like. Most of the experience is not that great, and past purchase history and cookies aren’t much to go on to tailor user experience. If someone has bought an army-green military jacket, the e-commerce site is on a hiding to nothing if all it suggests is more army-green military jackets…
Instead, Psykhe (its brand name is styled ‘PSYKHE’) is an e-commerce startup that uses AI and psychology to make product recommendations based both on the user’s personality profile and the ‘personality’ of the products. Admittedly, a number of startups have come and gone claiming this, but it claims to have taken a unique approach to making the process of buying fashion easier by acting as an aggregator that pulls products from all leading fashion retailers. Each user sees a different storefront that, says the company, becomes increasingly personalized.
It has now raised $1.7 million in seed funding from a range of investors and is announcing plans to scale its technology to other consumer verticals in the B2B space.
The investors are Carmen Busquets – the largest founding investor in Net-a-Porter; SLS Journey – the new investment arm of the MadaLuxe Group, the North American distributor of luxury fashion; John Skipper – DAZN Chairman and former Co-chairman of Disney Media Networks and President of ESPN; and Lara Vanjak – Chief Operating Officer at Aser Ventures, formerly at MP & Silva and FC Inter-Milan.
So what does it do? As a B2C aggregator, it pools inventory from leading retailers. The platform then applies machine learning and personality-trait science, and tailors product recommendations to users based on a personality test taken on sign-up. The company says it has international patents pending and has secured affiliate partnerships with leading retailers that include Moda Operandi, MyTheresa, LVMH’s platform 24S, and 11 Honoré.
The business model is based around an affiliate partnership model, where it makes between 5-25% of each sale. It also plans to expand into B2B for other consumer verticals in the future, providing a plug-in product that allows users to sort items by their personality.
How does this personality test help? Psykhe has used machine learning to assign an overall psychological profile to the products themselves: over 1 million items from its commerce partners.
So, for example, a leather boot with metal studs (thus looking more ‘rebellious’) would get a moderate-low rating on the trait of ‘Agreeableness’, while a pink floral dress would score higher on that trait. A conservative tweed blazer would get a lower score on the trait of ‘Openness’, as tweed blazers tend to indicate a more conservative style, and thus nature.
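Psykhe hasn’t published its model, but the kind of feature-to-trait scoring described above might look something like this sketch (the weights and feature names are invented for illustration; the real mapping is learned, not hand-coded):

```python
# Hypothetical feature-to-trait weights for illustration only.
TRAIT_WEIGHTS = {
    "agreeableness": {"metal_studs": -0.6, "floral_print": +0.7, "pastel_color": +0.4},
    "openness":      {"tweed": -0.5, "bold_print": +0.6, "unusual_cut": +0.5},
}

def score_product(features, baseline=0.5):
    """Score a product on each trait from its tagged features (0 = low, 1 = high)."""
    scores = {}
    for trait, weights in TRAIT_WEIGHTS.items():
        s = baseline + sum(weights.get(f, 0.0) for f in features)
        scores[trait] = max(0.0, min(1.0, s))  # clamp to [0, 1]
    return scores

boot = score_product(["metal_studs"])                   # studded leather boot
dress = score_product(["floral_print", "pastel_color"])  # pink floral dress
print(boot["agreeableness"], dress["agreeableness"])     # 0.0 vs 1.0
```

Matching users to products then reduces to comparing a user’s personality-test scores against these product profiles.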
Its competitors include The Yes and Lyst. However, Psykhe’s main point of differentiation is this personality scoring. Furthermore, The Yes is app-only, US-only and only partners with monobrands, while Lyst is an aggregator with thousands of brands but is used more as a search platform.
Psykhe is in a good position to take advantage of the ongoing effects of COVID-19, which continue to give a major boost to global ecommerce as people flood online amid lockdowns.
The startup is the brainchild of Anabel Maldonado, CEO and founder (along with founding team CTO Will Palmer and lead data scientist Rene-Jean Corneille, pictured above), who studied psychology in her hometown of Toronto but ended up working in the UK’s NHS, in a specialist team that made developmental diagnoses for children under 5.
She made a pivot into fashion after winning a competition for an editorial mentorship at British Marie Claire. She later went to the press department of Christian Louboutin, followed by internships at the Mail on Sunday and Marie Claire, then spending several years in magazine publishing before moving into e-commerce at CoutureLab. Going freelance, she worked with a number of luxury brands and platforms as an editorial consultant. As a fashion journalist, she’s contributed industry op-eds to publications such as The Business of Fashion, T The New York Times Style, and Marie Claire.
Having been part of the fashion industry for 10 years, she says she became frustrated with narratives that “made fashion seem more frivolous than it really is. I thought, this is a trillion-dollar industry, we all have such emotional, visceral reactions to an aesthetic based on who we are, but all we keep talking about is the ‘hot new color for fall’ and so-called blanket ‘must-haves’.”
But, she says, “there was no inquiry into individual differences. This world was really missing the level of depth it deserved, and I sought to demonstrate that we’re all sensitive to aesthetic in one way or another and that our clothing choices have a great psychological pay-off effect on us, based on our unique internal needs.” So she set about creating a startup to address this ‘fashion psychology’ – or, as she says “why we wear what we wear”.
Kite’s newly supported languages are Java, Kotlin, Scala, C/C++, Objective-C, C#, Go, TypeScript, HTML/CSS and Less. Kite works in most popular development environments, including the likes of VS Code, JupyterLab, Vim, Sublime and Atom, as well as all JetBrains IntelliJ-based IDEs, including Android Studio.
This will make Kite a far more attractive solution for a lot of developers. Currently, the company says, it saves its most active developers from writing about 175 “words” of code every day. One thing that has always made Kite stand out is that it ranks its suggestions by relevance — not alphabetically, as some of its non-AI-driven competitors do. To build its models, Kite fed its algorithms code from GitHub.
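Kite’s real ranking comes from deep learning models trained on GitHub code; the difference between relevance-ranked and alphabetical completions can be shown with a toy scorer, where usage counts stand in for the model’s score:

```python
def rank_completions(prefix, candidates):
    """Rank completion candidates by relevance (here, a usage count standing
    in for a learned model's score) instead of alphabetically."""
    matches = [c for c in candidates if c["name"].startswith(prefix)]
    return [c["name"] for c in sorted(matches, key=lambda c: -c["count"])]

candidates = [
    {"name": "primes", "count": 40},
    {"name": "print", "count": 950},
    {"name": "private_helper", "count": 3},
]
# Alphabetical order would put the rarely used "primes" first.
print(rank_completions("pri", candidates))  # ['print', 'primes', 'private_helper']
```

The payoff is that the suggestion a developer most likely wants tends to sit at the top of the list rather than buried under lexical neighbors.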
The service is available as a free download for Windows users and as a server-powered paid enterprise version with a larger deep learning model that consequently offers more AI smarts, as well as the ability to create custom models. The paid version also includes support for multi-line code completion, while the free version only supports line-of-code completions.
Kite notes that in addition to adding new languages, Kite also spent the last year focusing on the user experience, which should now be less distracting and, of course, offer more relevant completions.
Descript, the latest startup from Groupon co-founder Andrew Mason, made a splash in the world of audio last year with a platform for easy audio editing based on how you edit written documents, adding features like an AI-based tool that uses a recording of you to let you create audio of any written text in your own voice.
Today the startup is moving into the next phase of its growth. It is launching Descript Video, with a set of tools to take screen recordings or videos and then create titles, transitions, images, video overlays or edits on them with no more effort than it takes to edit a Word document. It also features live collaboration links so that multiple people can work on a file at the same time — similar to a Google Doc — by way of links that you can share with others to the file itself.
You work with video on Descript in the same way you do audio: you upload the raw material onto the Descript platform, which then turns it into text. Then you add new features, or remove sections, or add in new parts, by adding in widgets or cutting out or adding in written words.
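Descript hasn’t detailed its internals, but the core trick of text-based editing, mapping word-level deletions in a transcript back to time ranges in the media, can be sketched like this (the word timestamps are the kind a speech-to-text engine would produce):

```python
def cuts_from_transcript(words, deleted_indices):
    """Turn transcript edits into media cuts: deleting words in the text
    removes the corresponding time ranges from the recording."""
    keep = [w for i, w in enumerate(words) if i not in deleted_indices]
    segments = []
    for w in keep:
        if segments and abs(segments[-1][1] - w["start"]) < 1e-9:
            segments[-1] = (segments[-1][0], w["end"])  # merge adjacent spans
        else:
            segments.append((w["start"], w["end"]))
    return segments

# A transcript with per-word timestamps
words = [
    {"text": "hello", "start": 0.0, "end": 0.4},
    {"text": "um",    "start": 0.4, "end": 0.6},
    {"text": "world", "start": 0.6, "end": 1.0},
]
# Deleting the filler word "um" in the text yields two segments to keep.
print(cuts_from_transcript(words, {1}))  # [(0.0, 0.4), (0.6, 1.0)]
```

A renderer then only has to concatenate the kept segments, which is why the edit feels like deleting a word in a document rather than scrubbing a timeline.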
The video tools are launching today as part of Descript’s freemium service, with basic price tiers of free, $12 and $24 per month depending on which features you take.
Descript’s launch comes at a key moment in the world of tech. Before the Covid-19 pandemic, video was already king of the content hill, thanks to advances in streaming, broadband speeds, processors on devices, a proliferation of services, and society’s inclination to lean back and watch things in their leisure time.
Yes, some people still read. And podcasts, recorded books, and other formats have definitely led to a kind of renaissance for audio. But video cuts through all of that when it comes to time spent online and consumer engagement. Like cats, it seems we’re just attracted by moving objects.
Now we have another added twist. The pandemic has become the age of video in the worlds of work, learning and play, with platforms like Zoom, Meet, Teams and WebEx taking on the role of conference room, quick coffee, dinner party, pub, and whatever other place you might have chosen to meet people before Covid-19 came along.
“We are increasingly living in a video-first world,” Mason said the other week from his house in the Bay Area, over a Zoom call. All of that means not just a ton of video, but a ton of video creators, counting not just the 50 million or so making content for Twitch, YouTube, Instagram, Snapchat and the rest, but also any one of us that is snapping a moving picture and posting it somewhere either for fun or for pay.
Video was always on the cards for Descript, Mason added, but it made sense first to focus on audio tools. That was in part because Descript itself was a spin-off from Detour (a detour from Detour, as it happens), an audio-guide business that was sold to Bose, and so sound was the focus.
“There is so much to build, so we wanted to start with some version of the product, and then add features in concentric circles of addressable markets,” said Mason.
And that essentially is how the company sees the opportunity for selling a video editing product as an extension of an audio-editing tool. People who produce content for podcasts also often produce videos, and those who got their start on a platform like YouTube are now expanding their footprint into the recorded word. Sometimes distinct material is created for one platform or the other, but oftentimes excerpts are repurposed, or the full audio from a video is turned into a podcast.
YouTubers or podcasters, meanwhile, have something in common with the average person: everyone is using technology now to produce content, but not everyone knows how to work with it on a technical level if you need to cut, edit or manipulate it in any way.
Descript is aimed at professionals and prosumers, but it also follows in the vein of tools that let people build websites without needing to know HTML or have special design experience: software that lets you get the job done without first mastering what’s under the hood. With all of the advances in the underlying tech, that idea has come a long way in modern times.
“Before I got into tech I was a music major. I got a degree in music tech and worked in a recording studio. I’ve been using these tools since I was a kid and know them super well,” Mason said. “But our approach has been to think of us like Airtable. We want to be part of that modern class of SaaS products that don’t mean you need to make a tradeoff between power and ease of use.”
Tools in this first build of the video include not just the ability to import video from anywhere that you can edit, but also a screen recorder that you can use to record excerpts from other places, or indeed your whole screen, which then can either be edited as standalone items, or as part of larger works. Things like this seem particularly aimed at the new class of “video producers” that are actually knowledge workers creating material to share with colleagues or customers.
While Overdub — the feature that uses natural language processing to let you create a “deepfake” of your own voice and insert new audio into a recording by typing something out — works very smoothly on an audio recording, where you would be hard-pressed to notice where the changes have been made, on video the cuts show up as small jumps, and Overdubs simply come out as added audio over the video. Such jumps are pretty commonplace in videos these days, but I imagine the company is likely working on a way to smooth them out to mirror the audio experience as it is today.
Descript today is used by a number of big-name content publishers, including NPR, Pushkin Industries, VICE, The Washington Post and The New York Times, although Mason declined to disclose how many users it has in total.
At some point, however, numbers will tell another kind of story: just how much traction Descript is getting among the masses of competition in the field. Platforms like Zoom and Google’s are also adding in more editing tools, and there is a plethora of others building easy-to-use software for working with audio and video, from Otter.ai through to Scribe, Vimeo, Adobe, Biteable and more.
In the meantime, Descript has caught the eye of some important backers, raising some $20 million to date from investors including Andreessen Horowitz and Redpoint.
While certifications for security management practices like SOC 2 and ISO 27001 have been around for a while, the number of companies that now request that their software vendors go through (and pass) these audits continues to increase. For a lot of companies, that’s a harrowing process, so it’s maybe no surprise that we are also seeing an increase in startups that aim to make this process easier. Earlier this month, Strike Graph, which helps automate security audits, announced its $3.9 million round, and today, Secureframe, which also helps businesses get and maintain their SOC 2 and ISO 27001 certifications, is announcing a $4.5 million round.
Secureframe’s round was co-led by Base10 Partners and Google’s AI-focused Gradient Ventures fund. BoxGroup, Village Global, Soma Capital, Liquid2, Chapter One, Worklife Ventures and Backend Capital participated. Current customers include Stream, Hasura and Benepass.
Shrav Mehta, the company’s co-founder and CEO, spent time at a number of different companies, but he tells me the idea for Secureframe was mostly born during his time at direct-mail service Lob.
“When I was at Lob, we dealt with a lot of issues around security and compliance because we were sometimes dealing with very sensitive data, and we’d hop on calls with customers, had to complete thousand-line security questionnaires, do exhaustive security reviews, and this was a lot for a startup of our size at the time. But it’s just what our customers needed. So I started to see that pain,” Mehta said.
After stints at Pilot and Scale AI after he left Lob in 2017 — and informally helping other companies manage the certification process — he co-founded Secureframe together with the company’s CTO, Natasja Nielsen.
“Because Secureframe is basically adding a lot of automation with our software — and making the process so much simpler and easier — we’re able to bring the cost down to a point where this is something that a lot more companies can afford,” Mehta explained. “This is something that everyone can get in place from day one, and not really have to worry that, ‘hey, this is going to take all of our time, it’s going to take a year, it’s going to cost a lot of money.’ […] We’re trying to solve that problem to make it super easy for every organization to be secure from day one.”
The main idea here is to make the arcane certification process more transparent and streamline the process by automating many of the more labor-intensive tasks of getting ready for an audit (and it’s virtually always the pre-audit process that takes up most of the time). Secureframe does so by integrating with the most-often used cloud and SaaS tools (it currently connects to about 25 services) and pulling in data from them to check up on your security posture.
“It feels a lot like a QuickBooks or TurboTax-like experience, where we’ll essentially ask you to enter basic details about your business. We try to autofill as much of it as possible from third-party sources — then we ask you to connect up all the integrations your business uses,” Mehta explained.
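Secureframe’s actual controls and integrations are far more extensive, but the pattern of pulling configuration from connected services and checking it against controls can be sketched like this (the check names and config shapes are invented for illustration):

```python
# Hypothetical readiness checks; real SOC 2 controls are far more detailed.
CHECKS = {
    "mfa_enforced":        lambda cfg: cfg.get("mfa") is True,
    "disk_encryption":     lambda cfg: cfg.get("encryption") == "enabled",
    "access_logs_enabled": lambda cfg: cfg.get("logging", {}).get("access") is True,
}

def audit_readiness(service_configs):
    """Check each connected service's config and report failing controls."""
    failures = []
    for service, cfg in service_configs.items():
        for name, check in CHECKS.items():
            if not check(cfg):
                failures.append((service, name))
    return failures

configs = {  # as pulled from the integrations' APIs
    "cloud_provider": {"mfa": True, "encryption": "enabled", "logging": {"access": True}},
    "code_hosting":   {"mfa": False, "encryption": "enabled", "logging": {}},
}
for service, control in audit_readiness(configs):
    print(f"{service}: control '{control}' failing")
```

Running checks like these continuously, rather than scrambling once a year before the auditor arrives, is the automation being sold here.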
The company plans to use much of the new funding to staff up and build out these integrations. Over time, it will also add support for other certifications like PCI, HITRUST and HIPAA.
Project management service Wrike today announced a major update to its platform at its user conference that includes a lot of new AI smarts for keeping individual projects on track and on time, as well as new solutions for marketers and project management offices in large corporations. In addition, the company also launched a new budgeting feature and tweaks to the overall user experience.
The highlight, though, is without doubt the new AI and machine learning capabilities in Wrike. With more than 20,000 customers and over 2 million users on the platform, Wrike has collected a trove of data about projects that it can use to power these machine learning models.
The way Wrike is now using AI falls into three categories: project risk prediction, task prioritization and tools for speeding up the overall project management workflow.
Figuring out the status of a project and knowing where delays could impact the overall project is often half the job. Wrike can now predict potential delays and alert project and team leaders when it sees events that signal potential issues. To do this, it uses basic information like start and end dates, but more importantly, it looks at the prior outcomes of similar projects to assess risks. Those predictions can then be fed into Wrike’s automation engine to trigger actions that could mitigate the risk to the project.
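The “prior outcomes of similar projects” idea can be illustrated with a toy nearest-neighbor sketch. Wrike’s actual models are proprietary; the features (planned duration and task count) and the data below are invented purely to show the shape of the approach:

```python
import math

# Hypothetical history of past projects: (planned_days, task_count, was_delayed)
past_projects = [
    (30, 12, False), (90, 80, True), (45, 30, False),
    (120, 150, True), (60, 45, True), (20, 10, False),
]

def delay_risk(planned_days, task_count, k=3):
    """Fraction of the k most similar past projects that ended up delayed."""
    dist = lambda p: math.hypot(p[0] - planned_days, p[1] - task_count)
    nearest = sorted(past_projects, key=dist)[:k]
    return sum(p[2] for p in nearest) / k

print(delay_risk(100, 100))  # large project resembles past delayed ones
print(delay_risk(25, 11))    # small project resembles on-time ones
```

A score like this is exactly the kind of signal that could be fed into an automation engine to alert a team lead before the delay materializes.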
Task prioritization does what you would expect and helps you figure out what you should focus on right now to help a project move forward. No surprises there.
What is maybe more surprising is that the team is also launching voice commands (through Siri on iOS) and Gmail-like smart replies (in English for iOS and Android). Those aren’t exactly core features of a project management tool, but as the company notes, they help remove overall friction and reduce latencies. Another new feature that falls into this category is support for optical character recognition, which lets you scan printed and handwritten notes from your phone and attach them to tasks (iOS only).
“With more employees working from home, work and personal life are becoming intertwined,” the company argues. “As workers use AI in their personal lives, team managers and everyday users expect the smarts they’re accustomed to in consumer devices and apps to help them manage their work as well. Wrike Work Intelligence is the most comprehensive machine learning foundation that taps into tens of millions of work-related user engagements to power cross-functional collaboration to help organizations achieve operational efficiency, create new opportunities and accelerate digital transformation. Teams can focus on the work that matters most, predict and minimize delays, and cut communication latencies.”
The other major new feature — at least if you’re in digital marketing — is Wrike’s new ability to pull in data about your campaigns from about 50 advertising, marketing automation and social media tools, which is then displayed inside the Wrike experience. In a fast-moving field, having all that data at your fingertips and right inside the tool where you think about how to manage these projects seems like a smart idea.
Somewhat related, Wrike’s new budgeting feature also now makes it easier for teams to keep their projects within budget, using a new built-in rate card to manage project pricing and update their financials.
“We use Wrike for an extensive project management and performance metrics system,” said Shannon Buerk, the CEO of engage2learn, which tested this new budgeting tool. “We have tried other PM systems and have found Wrike to be the best of all worlds: easy to use for everyone and savvy enough to provide valuable reporting to inform our work. Converting all inefficiencies into productive time that moves your mission forward is one of the keys to a culture of engagement and ownership within an organization, even remotely. Wrike has helped us get there.”
As companies manufacture goods, human inspectors review them for defects. Think of a scratch on smartphone glass or a weakness in raw steel that could have an impact downstream when it gets turned into something else. Landing AI, the company started by former Google and Baidu AI guru Andrew Ng, wants to use AI technology to identify these defects, and today the company launched a new visual inspection platform called LandingLens.
“We’re announcing LandingLens, which is an end-to-end visual inspection platform to help manufacturers build and deploy visual inspection systems [using AI],” Ng told TechCrunch.
He says the company’s goal is to bring AI to manufacturing companies, but he couldn’t simply repackage what he had learned at Google and Baidu, partly because it involved a different set of consumer use cases, and partly because there is just much less data to work with in a manufacturing setting.
Adding to the degree of difficulty here, each setting is unique, and there is no standard playbook you can necessarily apply across each vertical. This meant Landing AI had to come up with a general tool kit that each company could use for the unique requirements of their manufacturing process.
Ng says to put this advanced technology into the hands of these customers and apply AI to visual inspection, his company has created a visual interface where companies can work through a defined process to train models to understand each customer’s inspection needs.
The way it works is you take pictures of what a good finished product looks like, and what a defective product could look like. It’s not as easy as it might sound because human experts can disagree over what constitutes a defect.
The manufacturer creates what’s called a defect book, in which inspection experts work together to determine what a defect looks like via pictures, and resolve disagreements when they happen. All this is done through the LandingLens interface.
Once inspectors have agreed upon a set of labels, they can begin iterating on a model in the Model Iteration Module where the company can train and run models to get to a state of agreed upon success where the AI is picking up the defects on a regular basis. As customers run these experiments, the software generates a report on the state of the model, and customers can refine the models as needed based on the information in the report.
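The label-agreement step at the heart of the defect book can be sketched in a few lines. This is a generic majority-vote consolidation, not Landing AI’s actual logic — the labels are hypothetical — but it shows why ties get escalated back to the inspectors:

```python
from collections import Counter

def consolidate_labels(votes):
    """votes: labels several inspectors gave one image, e.g. ['scratch', 'ok'].
    Returns (label, resolved); resolved is False on a tie, which would be
    flagged for the inspectors to discuss and resolve in the defect book."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None, False  # no majority: needs human discussion
    return counts[0][0], True

print(consolidate_labels(["scratch", "ok", "scratch"]))  # majority wins
print(consolidate_labels(["scratch", "ok"]))             # tie, unresolved
```

Only once every image has an agreed label does iterating on a model make sense — otherwise the model is being trained to reproduce the inspectors’ disagreement.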
Ng says that his company is trying to bring in sophisticated software to help solve a big problem for manufacturing customers. “The bottleneck [for them] is building the deep learning algorithm, really the machine learning software. They can take the picture and render judgment as to whether this part is okay, or whether it is defective, and that’s what our platform helps with,” he said.
He thinks this technology could ultimately help recast how goods are manufactured in the future. “I think deep learning is poised to transform how inspection is done, which is really the key step. Inspection is really the last line of defense against quality defects in manufacturing. So I’m excited to release this platform to help manufacturers do inspections more accurately,” he said.
Every year at its MAX user conference, Adobe shows off a number of research projects that may or may not end up in its Creative Cloud apps over time. One new project that I hope we’ll soon see in its video apps is Project Sharp Shots, which will make its debut later today during the MAX Sneaks event. Powered by Adobe’s Sensei AI platform, Sharp Shots is a research project that uses AI to deblur videos.
Shubhi Gupta, the Adobe engineer behind the project, told me the idea here is to deblur a video — no matter whether it was blurred because of a shaky camera or fast movement — with a single click. In the demos she showed me, the effect was sometimes relatively subtle, as in a video of her playing ukulele, or quite dramatic, as in the example of a fast-moving motorcycle below.
“With Project Sharp Shots, there’s no parameter tuning and adjustment like we used to do in our traditional methods,” she told me. “This one is just a one-click thing. It’s not magic. This is simple deep learning and AI working in the background, extracting each frame, deblurring it and producing high-quality deblurred photos and videos.”
Image Credits: Adobe
Gupta tells me the team looked at existing research on deblurring images and then optimized that process for moving images — and then optimized that for lower memory usage and speed.
It’s worth noting that After Effects already offers some of these capabilities for deblurring and removing camera shake, but that’s a very different algorithm with its own set of limitations.
This new system works best when the algorithm has access to multiple related frames before and after, but it can do its job with just a handful of frames in a video.
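Sharp Shots itself relies on a learned deep model, but the frame-by-frame workflow Gupta describes — extract each frame, deblur it, reassemble — can be conveyed with a classic, non-learned stand-in. The unsharp-mask filter below is purely illustrative and much cruder than anything Adobe ships:

```python
import numpy as np

def unsharp(frame, amount=1.0):
    """Sharpen one grayscale frame (2-D float array) by adding back the
    difference between the frame and a 3x3 box-blurred copy of itself."""
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    # 3x3 box blur computed from nine shifted views of the padded frame
    blur = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return frame + amount * (frame - blur)

def deblur_video(frames, amount=1.0):
    """Apply the filter frame by frame, as the description above suggests."""
    return [unsharp(f, amount) for f in frames]

frames = [np.random.rand(8, 8) for _ in range(5)]  # stand-in video clip
out = deblur_video(frames)
print(len(out), out[0].shape)  # same frame count and resolution
```

A learned model improves on this sketch precisely by using the neighboring frames the article mentions: nearby frames often contain sharp views of the same content, which a multi-frame network can borrow from.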
The EU parliament has backed a call for tighter regulations on behavioral ads (aka microtargeting) in favor of less intrusive, contextual forms of advertising — urging Commission lawmakers to also assess further regulatory options, including looking at a phase-out leading to a full ban.
MEPs also want Internet users to be able to opt out of algorithmic content curation altogether.
The legislative initiative, introduced by the Legal Affairs committee, sets the parliament on a collision course with the business model of tech giants Facebook and Google.
Parliamentarians also backed a call for the Commission to look at options for setting up a European entity to monitor and impose fines to ensure compliance with rebooted digital rules — voicing support for a single, pan-EU Internet regulator to keep platforms in line.
The votes by the elected representatives of EU citizens are non-binding but send a clear signal to Commission lawmakers who are busy working on an update to existing ecommerce rules, via the forthcoming Digital Services Act (DSA) package — due to be introduced next month.
The DSA is intended to rework the regional rule book for digital services, including tackling controversial issues such as liability for user-generated content and online disinformation. And while only the Commission can propose laws, the DSA will need to gain the backing of the EU parliament (and the Council) if it is to go the legislative distance, so the executive needs to take note of MEPs’ views.
The mass surveillance of Internet users for ad targeting — a space that’s dominated by Google and Facebook — looks set to be a major battleground as Commission lawmakers draw up the DSA package.
Last month Facebook’s policy VP Nick Clegg, a former MEP himself, urged regional lawmakers to look favorably on a business model he couched as “personalized advertising” — arguing that behavioral ad targeting allows small businesses to level the playing field with better resourced rivals.
However the legality of the model remains under legal attack on multiple fronts in the EU.
Scores of complaints have been lodged with EU data protection agencies over the mass exploitation of Internet users’ data by the adtech industry since the General Data Protection Regulation (GDPR) began being applied — with complaints raising questions over the lawfulness of the processing and the standard of consent claimed.
Just last week, a preliminary report by Belgium’s data watchdog found that a flagship tool for gathering Internet users’ consent to ad tracking that’s operated by the IAB Europe fails to meet the required GDPR standard.
The use of Internet users’ personal data in the high-velocity information exchange at the core of programmatic advertising’s real-time bidding (RTB) process is also being probed by Ireland’s DPC, following a series of complaints. The UK’s ICO has warned for well over a year of systemic problems with RTB too.
Meanwhile some of the oldest unresolved GDPR complaints pertain to so-called ‘forced consent’ by Facebook — given GDPR’s requirement that for consent to be lawful it must be freely given. Yet Facebook does not offer any opt-out from behavioral targeting; the ‘choice’ it offers is to use its service or not use it.
Google has also faced complaints over this issue. And last year France’s CNIL fined it $57M for not providing sufficiently clear info to Android users over how it was processing their data. But the key question of whether consent is required for ad targeting remains under investigation by Ireland’s DPC almost 2.5 years after the original GDPR complaint was filed — meaning the clock is ticking on a decision.
And still there’s more: Facebook’s processing of EU users’ personal data in the US also faces huge legal uncertainty because of the clash between fundamental EU privacy rights and US surveillance law.
A major ruling (aka Schrems II) by Europe’s top court this summer has made it clear EU data protection agencies have an obligation to step in and suspend transfers of personal data to third countries when there’s a risk the information is not adequately protected. This led to Ireland’s DPC sending Facebook a preliminary order to suspend EU data transfers.
Facebook has used the Irish courts to get a stay on that while it seeks a judiciary review of the regulator’s process — but the overarching legal uncertainty remains. (Not least because the complainant, angry that data continues to flow, has also been granted a judicial review of the DPC’s handling of his original complaint.)
There has also been an uptick in EU class actions targeting privacy rights, as the GDPR provides a framework that litigation funders feel they can profit off of.
All this legal activity focused on EU citizens’ privacy and data rights puts pressure on Commission lawmakers not to be seen to row back standards as they shape the DSA package — with the parliament now firing its own warning shot calling for tighter restrictions on intrusive adtech.
It’s not the first such call from MEPs, either. This summer the parliament urged the Commission to “ban platforms from displaying micro-targeted advertisements and to increase transparency for users”. And while they’ve now stepped away from calling for an immediate outright ban, yesterday’s votes were preceded by more detailed discussion — as parliamentarians sought to debate in earnest with the aim of influencing what ends up in the DSA package.
Ahead of the committee votes, online ad standards body, the IAB Europe, also sought to exert influence — putting out a statement urging EU lawmakers not to increase the regulatory load on online content and services.
“A facile and indiscriminate condemnation of ‘tracking’ ignores the fact that local, generalist press whose investigative reporting holds power to account in a democratic society, cannot be funded with contextual ads alone, since these publishers do not have the resources to invest in lifestyle and other features that lend themselves to contextual targeting,” it suggested.
“Instead of adding redundant or contradictory provisions to the current rules, IAB Europe urges EU policymakers and regulators to work with the industry and support existing legal compliance standards such as the IAB Europe Transparency & Consent Framework [TCF], that can even help regulators with enforcement. The DSA should rather tackle clear problems meriting attention in the online space,” it added in the statement last month.
However, as we reported last week, the IAB Europe’s TCF has been found not to comply with existing EU standards following an investigation by the Belgian DPA’s inspectorate service — suggesting the tool offers quite the opposite of ‘model’ GDPR compliance. (Although a final decision by the DPA is pending.)
The EU parliament’s Civil Liberties committee also put forward a non-legislative resolution yesterday, focused on fundamental rights — including support for privacy and data protection — that gained MEPs’ backing.
Its resolution asserted that microtargeting based on people’s vulnerabilities is problematic, as well as raising concerns over the tech’s role as a conduit in the spreading of hate speech and disinformation.
The committee got backing for a call for greater transparency on the monetisation policies of online platforms.
Other measures MEPs supported in the series of votes yesterday included a call to set up a binding ‘notice-and-action’ mechanism so Internet users can notify online intermediaries about potentially illegal online content or activities — with the possibility of redress via a national dispute settlement body.
MEPs rejected the use of upload filters or any form of ex-ante content control for harmful or illegal content, saying the final decision on whether content is legal or not should be taken by an independent judiciary, not by private undertakings.
They also backed dealing with harmful content, hate speech and disinformation via enhanced transparency obligations on platforms and by helping citizens acquire media and digital literacy so they’re better able to navigate such content.
A push by the parliament’s Internal Market Committee for a ‘Know Your Business Customer’ principle to be introduced — to combat the sale of illegal and unsafe products online — also gained MEPs’ backing, with parliamentarians supporting measures to make platforms and marketplaces do a better job of detecting and taking down false claims and tackling rogue traders.
Parliamentarians also supported the introduction of specific rules to prevent (not merely remedy) market failures caused by dominant platform players as a means of opening up markets to new entrants — signalling support for the Commission’s plan to introduce ex ante rules for ‘gatekeeper’ platforms.
The parliament also backed a legislative initiative recommending rules for AI — urging Commission lawmakers to present a new legal framework outlining the ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU, including for software, algorithms and data.
The Commission has made it clear it’s working on such a framework, setting out a white paper this year — with a full proposal expected in 2021.
MEPs backed a requirement that ‘high-risk’ AI technologies, such as those with self-learning capacities, be designed to allow for human oversight at any time — and called for a future-oriented civil liability framework that would make those operating such tech strictly liable for any resulting damage.
The parliament agreed such rules should apply to physical or virtual AI activity that harms or damages life, health, physical integrity, property, or causes significant immaterial harm if it results in “verifiable economic loss”.
Synthetaic is a startup working to create data — specifically images — that can be used to train artificial intelligence.
Founder and CEO Corey Jaskolski’s experience includes work with both National Geographic (where he was recently named Explorer of the Year) and a 3D media startup. In fact, he told me that his time with National Geographic made him aware of the need for more data sets in conservation.
Sound like an odd match? Well, Jaskolski said that he was working on a project that could automatically identify poachers and endangered animals from camera footage, and one of the major obstacles was the fact that there simply aren’t enough existing images of either poachers (who don’t generally appreciate being photographed) or certain endangered animals in the wild to train AI to detect them.
He added that other companies are trying to create synthetic AI training data through 3D worldbuilding (in other words, “building a replica of the world that you want to have an AI learn in”), but in many cases, this approach is prohibitively expensive.
In contrast, the Synthetaic (pronounced “synthetic”) approach combines the work of 3D artists and modelers with technology based on generative adversarial networks, making it far more affordable and scalable, according to Jaskolski.
Image Credits: Synthetaic
To illustrate the “interplay” between the two halves of Synthetaic’s model, he returned to the example of identifying poachers — the startup’s 3D team could create photorealistic models of an AK-47 (and other weapons), then use adversarial networks to generate hundreds of thousands of images or more showing that model against different backgrounds.
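As a rough sketch of how a few renders can be multiplied into a large labeled dataset — not Synthetaic’s actual pipeline, and with the GAN stage omitted — the compositing step amounts to pasting a masked foreground into many backgrounds at random positions and recording the placement as the training label:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def composite(background, render, mask):
    """background: HxW array; render/mask: hxw arrays (mask entries in {0,1}).
    Returns the composited image and the bounding box used as its label."""
    H, W = background.shape
    h, w = render.shape
    y = rng.integers(0, H - h + 1)
    x = rng.integers(0, W - w + 1)
    out = background.copy()
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = mask * render + (1 - mask) * region
    return out, (x, y, w, h)

bg = rng.random((64, 64))       # stand-in for a real background photo
obj = np.ones((8, 8))           # stand-in for a photorealistic 3D render
msk = np.ones((8, 8), int)
samples = [composite(bg, obj, msk) for _ in range(100)]
print(len(samples))  # 100 labeled training images from a single render
```

In practice the adversarial network’s job is to make such composites indistinguishable from real photographs — varying lighting, texture and noise — which is what lifts this from hundreds of images to the “hundreds of thousands” Jaskolski describes.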
The startup also validates its results after an AI has been trained on Synthetaic’s synthesized images, by testing that AI on real data.
For Synthetaic’s initial projects, Jaskolski said he wanted to partner with organizations doing work that makes the world a better place, including Save the Elephants (which is using the technology to track animal populations) and the University of Michigan (which is developing an AI that can identify different types of brain tumors).
Jaskolski added that Synthetaic customers don’t need any AI expertise of their own, because the company provides an “end-to-end” solution.
The startup announced today that it has raised $3.5 million in seed funding led by Lupa Systems, with participation from Betaworks Ventures and TitletownTech (a partnership between Microsoft and the Green Bay Packers). The startup, which has now raised a total of $4.5 million, is also part of Lupa and Betaworks’ Betalab program of startups doing work that could help “fix the internet.”
Amazon has launched a new program that directly pays consumers for information about what they’re purchasing outside of Amazon.com and for responding to short surveys. The program, Amazon Shopper Panel, asks users to send in 10 receipts per month for any purchases made at non-Amazon retailers, including grocery stores, department stores, drug stores and entertainment outlets (if open), like movie theaters, theme parks, and restaurants.
Amazon’s own stores, like Whole Foods, Amazon Go, Amazon Four Star and Amazon Books do not qualify.
Program participants will use the newly launched Amazon Shopper Panel mobile app on iOS and Android to take pictures of qualifying paper receipts, or they can opt to forward emailed receipts to email@example.com, to earn a $10 reward that can then be applied to their Amazon Balance or used as a charitable donation.
Amazon says users can then earn additional rewards each month for every survey they complete. The optional surveys will ask about brands and products that may interest the participant and how likely they are to purchase a product. Other surveys may ask what the shopper thinks of an ad. These rewards may vary, depending on the survey.
The program is currently opt-in and invite-only, and is also only open to U.S. consumers at this time. Invited participants can now download the newly launched Shopper Panel app and join the panel. Other interested users can use the app to join a waitlist for an invite.
Image Credits: Amazon
Consumer research panels are common operations, but in Amazon’s case, it plans to use the data in several different ways.
On the website, Amazon explains it “may use” customer data to improve product selection at Amazon.com and Whole Foods Market, as well as to improve the content selection offered through Amazon services, like Prime Video.
Amazon also says the collected data will help advertisers better understand the relationship between their ads and product purchases at an aggregate level and will help Amazon build models about which groups of customers are likely to be interested in certain products.
And Amazon may choose to offer data to brands to help them gain feedback on existing products, the website notes.
Image Credits: Amazon
The program’s launch follows increased scrutiny over Amazon’s anti-competitive business practices in the U.S. and abroad when it comes to using consumers’ purchase data.
Amazon came under fire from U.S. regulators over how it had leveraged third-party merchants’ sales data to benefit its own private-label business. When Amazon CEO Jeff Bezos testified before Congress in July, he said the company had a policy against doing this, but couldn’t confirm that the policy hadn’t been violated. The retailer may also be facing antitrust charges over the practice in the E.U.
At the same time, Amazon has been increasing its investment in its advertising business, which grew by 44% year-over-year in Q1 to reach $3.91 billion. That was a faster growth rate than both Google (13%) and Facebook (17%), even if tiny by comparison — Google ads made $28 billion that quarter and Facebook made $17.4 billion, Digiday reported.
As the pandemic has accelerated the shift to e-commerce by 5 years or so, Amazon’s need to better optimize advertising space has also been sped up — and it may rapidly need to ingest more data than it can collect directly from its own website.
In a message to advertisers about the program’s launch, Amazon positioned its e-commerce business as a small piece of the overall retail market — a point it often makes in hopes of avoiding regulation:
“In this incredibly competitive retail environment, Amazon works with brands of all sizes to help them grow their businesses not just in our store, but also across the myriad of places customers shop. We also work hard to provide our selling partners—and small businesses in particular—with tools, insights, and data to help them be successful in our store. But our store is just one piece of the puzzle. Customers routinely use Amazon to discover and learn about products before purchasing them elsewhere. In fact, Amazon only represents 4% of US retail sales. Brands therefore often look to third-party consumer panel and business intelligence firms like Nielsen and NPD, and many segment-specific data providers, for additional information. Such opt-in consumer panels are well-established and used by many companies to gather consumer feedback and shopping insights. These firms aggregate shopping behaviors across stores to report data like average sales price, total units sold, and revenue on tens of thousands of the most popular products.”
The retailer then explained that the Shopper Panel could help it to support sellers and brands by offering additional insights beyond its own store.
Amazon doesn’t say when the program waitlist will be removed, but says anyone can sign up starting today.
Microsoft today announced its plans to launch a new data center region in Austria, its first in the country. With nearby Azure regions in Switzerland, Germany, France and a planned new region in northern Italy, this part of Europe now has its fair share of Azure coverage. Microsoft also noted that it plans to launch a new ‘Center of Digital Excellence’ in Austria to “modernize Austria’s IT infrastructure, public governmental services and industry innovation.”
In total, Azure now features 65 cloud regions — though that number includes some that aren’t online yet. As its competitors like to point out, not all of them feature multiple availability zones yet, but the company plans to change that. Until then, the fact that there’s usually another nearby region can often make up for that.
Talking about availability zones, in addition to announcing this new data center region, Microsoft also today announced plans to expand its cloud in Brazil, with new availability zones to enable high-availability workloads launching in the existing Brazil South region in 2021. Currently, this region only supports Azure workloads but will add support for Microsoft 365, Dynamics 365 and Power Platform over the course of the next few months.
This announcement is part of a large commitment to building out its presence in Brazil. Microsoft is also partnering with the Ministry of Economy “to help job matching for up to 25 million workers and is offering free digital skilling with the capacity to train up to 5.5 million people” and to use its AI to protect the rainforest. That last part may sound a bit naive, but the specific plan here is to use AI to predict likely deforestation zones based on data from satellite images.
Stampli launched back in 2015 with a mission to simplify invoice management through collaboration (and a dash of AI). Interestingly, the company said it was uninterested at the time in providing a payments product alongside its collaborative suite, focusing instead on the procure-to-pay process.
This latest announcement marks a shift in the company’s thinking. Cofounder and CEO Eyal Feldman explained that conversations with customers revealed just how frustrated many organizations are with the current B2B payments landscape.
Organizations have several options: cut and mail their own paper checks, use ACH, or sign on with a payments provider to use ‘e-payments.’
Cutting and mailing checks is a prehistoric, time-intensive activity that doesn’t really belong in 2020, while ACH (which comes at a very low, flat cost) often groups multiple transactions into a single sum, making it difficult for accounting to reconcile individual line-item purchases.
“Under the misleading banner of ‘e-payments,’ [payments providers] offer AP departments a rebate and promise vendors faster payment,” explained Feldman in a blog post. “However, in order for vendors to get the payment, they must accept payments as virtual credit cards, which come with up to a 3.5% credit card fee per transaction.”
And many payments providers do not provide the data extracted from invoices and transactions back to the organization as a way to stay sticky.
Stampli’s customers illuminated these problems for the startup, which used to be payments agnostic. With the launch of Stampli Direct Pay, the company is still payments flexible, letting organizations work with their existing or different payments providers. But Stampli now offers an option that aims to resolve many of these industry issues.
Because Stampli’s core product already tracks all the contextual and relevant info for every transaction, that information is readily available during payment approval. Direct Pay also offers ACH as a payment option, but separates individual transactions out for easy reconciliation. And for customers who want to stick with checks, Stampli Direct Pay offers a service that allows customers to approve digital checks which come directly from their bank account with their signature, with Stampli handling printing, stamping, and mailing.
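The reconciliation pain of lumped ACH deposits is easy to make concrete. When one deposit covers several invoices, accounting is effectively solving a small subset-sum puzzle; the invoice IDs and amounts below are hypothetical:

```python
from itertools import combinations

# Open invoices, amounts in cents to avoid floating-point issues.
open_invoices = {"INV-101": 12500, "INV-102": 48000, "INV-103": 7300,
                 "INV-104": 12500, "INV-105": 99000}

def match_deposit(total, invoices):
    """Brute-force search for a set of open invoices summing to the deposit."""
    for r in range(1, len(invoices) + 1):
        for combo in combinations(invoices, r):
            if sum(invoices[i] for i in combo) == total:
                return sorted(combo)
    return None

print(match_deposit(60500, open_invoices))  # which invoices was this for?
```

Note that the answer isn’t even guaranteed to be unique (INV-101 and INV-104 have the same amount here), which is exactly why keeping transactions separated — as Direct Pay does — makes reconciliation trivial instead of a guessing game.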
Stampli also offers a vendor payment portal that extracts the needed data for each vendor and lets the customer own that data, which can be downloaded and taken to another payment provider.
The company has spent the last four years solving an entirely different problem.
Usually, teams purchase products or services and those invoices end up in the finance department with little to no context, setting off a game of duck duck goose within the organization as accountants try to get the information and approvals they need to pay out that vendor.
Stampli, which has raised $32 million to date, built out a collaborative platform that allows non-accountants to participate in the invoice management process in a way that’s straightforward and simple. Each invoice becomes a communications hub, allowing folks across various departments to fill in the blanks and answer questions about the purchase. Stampli also uses machine learning to recognize patterns around allocating costs, managing approval workflows and the data that needs to be extracted from invoices.
With the launch of Direct Pay, Stampli is poised to take on a variety of new competitors with an obvious differentiator. The company processes more than $13 billion in invoices annually.
The team has also grown to more than 100 employees. Fifty-six percent of the company’s US workforce is non-white and 33 percent of the executive leadership team is female, according to Feldman.
The public sector usually publishes its business opportunities in the form of ‘tenders,’ to increase transparency to the public. However, this data is scattered, and larger businesses have access to more information, giving them opportunities to grab contracts before official tenders are released. We have seen the controversy around UK government contracts going to a number of private consultants who have questionable prior experience in the issues they are winning contracts on.
Public-to-private sector business makes up 14% of global GDP, and even a 1% improvement could save taxpayers €20B per year, according to the European Commission.
Stotles is a new UK technology startup that turns fragmented public sector data — such as spending, tenders, contracts, meeting minutes or news releases — into a clearer view of the market, and extracts relevant early signals about potential opportunities.
It’s now raised a £1.4m seed round led by Speedinvest, with participation from 7Percent Ventures, FJLabs, and high-profile angels including Matt Robinson, co-founder of GoCardless and CEO at Nested; Carlos Gonzalez-Cadenas, COO at GoCardless; Charlie Songhurst, former Head of Corporate Strategy at Microsoft; Will Neale, founder of Grabyo; and Akhil Paul. It received a previous investment from Seedcamp last year.
Stotles’ founders say they had “scathing” experiences dealing with public procurement in their previous roles at organizations like Boston Consulting Group and the World Economic Forum.
The private beta has been open for nine months, and is used by companies including UiPath, Freshworks, Rackspace, and Couchbase. With this funding announcement, they’ll be opening up an early access program.
Competitors include: Global Data, Contracts Advance, BIP Solutions, Spend Network/Open Opps, Tussel, TenderLake. However, most of the players out there are focused on tracking cold tenders, or providing contracting data for periodic generic market research.
Adobe is betting big on its Sensei AI platform, so it’s probably no surprise that the company continues to build more AI-powered features into its flagship Photoshop application. At its MAX conference, Adobe today announced a handful of new AI features for Photoshop, with Sky Replacement being the most obvious example. Other new AI-driven features include so-called “Neural Filters,” essentially the next generation of Photoshop filters, and new and improved tools for selecting parts of images, in addition to other tools that improve on existing features or simplify the photo-editing workflow.
Photoshop isn’t the first tool to offer a Sky Replacement feature. Luminar, for example, has offered one for more than a year already, but it looks like Adobe took its time to get this one right. The idea itself is pretty straightforward: Photoshop can now automatically recognize the sky in your images and replace it with a sky of your choosing. Because the colors of the sky also influence the overall scene, simply swapping it out would result in a rather strange image, so Adobe’s AI also adjusts the colors of the rest of the image accordingly.
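Adobe hasn’t published how Sensei does this, but the basic recipe (segment the sky, composite in the replacement, then harmonize the foreground colors) can be sketched in a few lines. In practice the mask would come from a trained segmentation model, and the 0.15 blend factor below is an arbitrary assumption.

```python
import numpy as np

def replace_sky(image, sky_mask, new_sky):
    """Composite a new sky into an image and roughly harmonize colors.

    image, new_sky: float arrays in [0, 1] with shape (H, W, 3)
    sky_mask: float array in [0, 1], 1.0 where the sky is
    (a real pipeline would get this mask from a segmentation model)
    """
    mask = sky_mask[..., None]  # broadcast the mask over color channels
    fg = image * (1 - mask)     # keep the non-sky foreground
    # Shift the foreground's colors toward the new sky's mean color,
    # a crude stand-in for the scene-wide color adjustment.
    shift = 0.15 * (new_sky.mean(axis=(0, 1)) - image.mean(axis=(0, 1)))
    fg = np.clip(fg + shift * (1 - mask), 0.0, 1.0)
    return fg + new_sky * mask
```

The hard part Adobe is actually solving sits outside this sketch: producing a pixel-accurate sky mask around tree branches and rooftops, and a color adjustment that looks plausible rather than a uniform shift.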
How well all of this works probably depends a bit on the images, too. We haven’t been able to give it a try ourselves, and Adobe’s demos obviously worked flawlessly.
Photoshop will ship with 25 sky replacements, but you can also bring in your own.
Neural Filters are the other highlight of this release. They provide you with new artistic and restorative filters for improving portraits, for example, or quickly replacing the background color of an image. The portrait feature will likely get the most immediate use, given that it allows you to change where people are looking, change the angle of the light source and “change hair thickness, the intensity of a smile, or add surprise, anger, or make someone older or younger.” Some of these are a bit more gimmicky than others, and Adobe says they work best for making subtle changes, but either way — making those changes would typically be a lot of manual labor, and now it’s just a click or two.
Among the other fun new filters are a style transfer tool and a filter that helps you colorize black and white images. The more useful new filters include the ability to remove JPEG artifacts.
As Adobe noted, it collaborated with Nvidia on these Neural Filters, and, while they will work on all devices running Photoshop 22.0, there’s a real performance benefit to using them on machines with built-in graphics acceleration. No surprise there, given how computationally intensive a lot of these are.
While improved object selection may not be quite as flashy as Sky Replacement and the new filters, “intelligent refine edge,” as Adobe calls it, may just save a few photo editors’ sanity. If you’ve ever tried to use Photoshop’s current tools to select a person or animal with complex hair — especially against a complex backdrop — you know how much manual intervention the current crop of tools still needs. Now, with the new “Refine Hair” and “Object Aware Refine Mode,” a lot of that manual work should become unnecessary.
Other new Photoshop features include a new tool for creating patterns, a new Discover panel with improved search, help and contextual actions, faster plugins and more.
Also new is a plugin marketplace for all Creative Cloud apps that makes it easier for developers to sell their plugins.
Adobe today launched the first public version of its Illustrator vector graphics app on the iPad. That’s no surprise, given that it was already available for pre-order and as a private beta, but a lot of Illustrator users were looking forward to this day.
In addition, the company also today announced that its Fresco drawing and painting app is now available on Apple’s iPhone, too. Previously, you needed either a Windows machine or an iPad to use it.
Illustrator on the iPad supports Apple Pencil — no surprise there either — and should offer a pretty intuitive user experience for existing users. Like with Photoshop, the team adapted the user interface for a smaller screen and promises a more streamlined experience.
“While on the surface it may seem simple, more capabilities reveal themselves as you work. After a while you develop a natural rhythm where the app fades into the background, freeing you to express your creativity,” the company says.
Over time, the company plans to bring more effects, brushes and AI-powered features to Illustrator in general — including on the iPad.
As for Fresco, it’ll be interesting to see what that user experience will look like on a small screen. Since it uses Adobe’s Creative Cloud libraries, you can always start sketching on an iPhone and then move to another platform to finish your work. It’s worth noting that the iPhone version will feature the same interface, brushes and capabilities you’d expect on the other platforms.
The company also today launched version 2.0 of Fresco, with new smudge brushes, support for personalized brushes from Adobe Capture and more.