The week in AI: Apple makes machine learning moves

Image Credits: Apple

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of the last week's stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

It could be said that last week, Apple very visibly, and with intention, threw its hat into the ultra-competitive AI race. It's not that the company hadn't signaled its investments in, and prioritization of, AI previously. But at its WWDC event, Apple made it abundantly clear that AI was behind many of the features in both its forthcoming hardware and software.

For instance, iOS 17, which is set to arrive later this year, can use computer vision to suggest recipes for dishes similar to the one in an iPhone photo. AI also powers Journal, a new interactive diary that makes personalized suggestions based on activities across other apps.

iOS 17 will also feature upgraded autocorrect powered by an AI model that can more accurately predict the next words and phrases a user might type. Over time, it'll become tailored, learning a user's most frequently used words (including swear words, entertainingly).
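Apple described the new autocorrect as a transformer language model running on-device. The details aren't public, but the core idea, ranking the likeliest next words given what came before, can be sketched with a toy bigram counter; the corpus and names below are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy bigram model: the simplest form of next-word prediction.
# Apple's shipping model is a far more capable transformer; this
# sketch only illustrates the idea of ranking likely next words.
corpus = "so to speak the ship has sailed the ship is here".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def suggest(prev_word: str, k: int = 3) -> list[str]:
    """Return up to k most likely next words after prev_word."""
    return [word for word, _ in counts[prev_word].most_common(k)]

print(suggest("the"))  # -> ['ship']
```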

AI is central to Apple's Vision Pro augmented reality headset, too, specifically FaceTime on the Vision Pro. Using machine learning, the Vision Pro can create a virtual avatar of the wearer, interpolating a full range of facial contortions down to skin tension and muscle work.

It might not be generative AI, which is without a doubt the hottest subcategory of AI today. But Apple's intention, it seems to me, was to mount a comeback of sorts, to show that it's not one to be underestimated after years of floundering machine learning projects, from the underwhelming Siri to the self-driving car stuck in production hell.

Projecting strength isn't just a marketing ploy. Apple's historical underperformance in AI has reportedly led to serious brain drain, with The Information reporting that talented machine learning scientists, including a team that had been working on the type of tech underlying OpenAI's ChatGPT, left Apple for greener pastures.

Showing that it's serious about AI by actually shipping products imbued with it feels like a necessary move, and a benchmark some of Apple's competitors have, in fact, failed to meet in the recent past. (Here's looking at you, Meta.) By all appearances, Apple made inroads last week, even if it wasn't particularly loud about it.

Here are the other AI headlines of note from the past few days:

If you're curious how AI might affect science and research over the next few years, a team across six national labs authored a report, based on workshops conducted last year, about exactly that. One may be tempted to say that, being based on trends from last year rather than this one, in which things have progressed so fast, the report may already be obsolete. But while ChatGPT has made huge waves in tech and consumer awareness, the truth is it's not particularly relevant for serious research. The larger-scale trends are, and they're moving at a different pace. The 200-page report is definitely not a light read, but each section is helpfully divided into digestible pieces.

Elsewhere in the national lab ecosystem, Los Alamos researchers are hard at work advancing the field of memristors, which combine data storage and processing, much like our own neurons do. It's a fundamentally different approach to computation, though one that has yet to bear fruit outside the lab. This new approach appears to move the ball forward, at least.

AI's facility with language analysis is on display in this report on police interactions with people they've pulled over. Natural language processing was used as one of several factors to identify linguistic patterns that predict the escalation of stops, especially with Black men. The human and machine learning methods reinforce each other. (Read the paper here.)
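The paper's actual pipeline is more involved, but the general shape of the NLP step, turning utterances into features and scoring them against an outcome, can be sketched with off-the-shelf tools. The transcripts and labels below are invented for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data: the study's transcripts, features and labels
# are far richer; this shows only the general shape of the method.
utterances = [
    "keep your hands where I can see them",
    "can I see your license and registration",
    "get out of the car right now",
    "you were going a little fast back there",
]
escalated = [1, 0, 1, 0]  # hypothetical outcome labels

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, escalated)
print(clf.predict(["step out of the vehicle now"]))
```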

DeepBreath is a model trained on recordings of breathing taken from patients in Switzerland and Brazil that its creators at EPFL claim can help identify respiratory conditions early. The plan is to put it out there in a device called the Pneumoscope, under spinout company Onescope. We'll probably follow up with them for more info on how the company is doing.

Another AI health advance comes from Purdue, where researchers have built software that approximates hyperspectral imagery with a smartphone camera, successfully tracking blood hemoglobin and other metrics. It's an interesting technique: using the phone's super-slow-mo mode, it captures a lot of information about every pixel in the image, giving a model enough data to extrapolate from. It could be a great way to get this kind of health information without special hardware.
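For a rough sense of why those extra frames matter: each pixel's slow-mo frames form a short time series a model can learn from. The sketch below uses invented data and an arbitrary regressor; it mirrors only the general shape of the idea, not Purdue's actual method.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Stand-in data and a deliberately crude model: shapes, features
# and the Ridge regressor are all assumptions for illustration.
rng = np.random.default_rng(0)
n_clips, n_frames, h, w = 64, 240, 8, 8
clips = rng.random((n_clips, n_frames, h, w))  # super-slow-mo clips
hemoglobin = rng.random(n_clips)               # stand-in ground truth

# Each pixel's frames form a short time series; summarize it into
# simple temporal statistics to use as per-clip features.
feats = np.concatenate(
    [clips.mean(axis=1).reshape(n_clips, -1),
     clips.std(axis=1).reshape(n_clips, -1)],
    axis=1,
)

model = Ridge().fit(feats, hemoglobin)
print(model.predict(feats[:3]))
```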

I wouldn't trust an autopilot to take evasive maneuvers just yet, but MIT is inching the tech closer with research that helps AI avoid obstacles while maintaining a desirable flight path. Any old algorithm can propose wild changes of direction to avoid a crash; doing so while maintaining stability and not pulping anything inside is harder. The team managed to get a simulated jet to perform some Top Gun-like maneuvers autonomously, without losing stability. It's harder than it sounds.

Last this week is Disney Research, which can always be counted on to show off something interesting that also just happens to apply to filmmaking or theme park operations. At CVPR, it showed off a powerful and versatile facial landmark detection network that can track facial movements continuously, using more arbitrary reference points. Motion capture already works without the little capture dots, but this should make the results even higher quality, and the process more dignified for the actors.
