What Apple AI capabilities did the Apple Vision Pro "leak"?

By: Jobs

Source: Big Model House

In the early morning of June 6th, Beijing time, Apple's WWDC 2023 Worldwide Developers Conference officially opened. The Apple Vision Pro, which debuted as the "One more thing," was without question the most closely watched product of the event, bar none.

The industry had generally believed that the "metaverse" boom had receded and that Apple was entering the XR race rather late. Yet the company delivered a blockbuster product in the Apple Vision Pro, taking the industry by surprise.

Now that industry attention has largely shifted to artificial intelligence, Apple's launch of a "metaverse" device like the Apple Vision Pro has led some to question Apple's AI capabilities.

So let Big Model House take stock: what AI strengths did the brand-new Apple Vision Pro reveal at WWDC 2023?

AIGC-generated personas

During FaceTime video calls on the Apple Vision Pro, there is no camera facing the user, and a user wearing an XR headset would look odd on camera anyway.

To address this, Apple scans the user's face with the Vision Pro's front sensors. Using machine learning, the system employs an advanced encoder neural network to generate a "digital persona" for the user, one that dynamically mirrors the user's facial and hand movements and even preserves volume and depth. Its ease of use and visual quality arguably surpass some of the digital-avatar software currently on the market.

A smarter input method

As is well known, one of the most criticized shortcomings of the XR industry is the lack of good input methods. Whether single-button controller input or a floating virtual keyboard, the experience falls far short of a physical keyboard in both speed and accuracy.

The Apple Vision Pro's primary interaction methods are the eyes, hand gestures, and voice, which means voice input may become one of its most important ways of entering text.

Although Apple did not emphasize text input when introducing the Apple Vision Pro, it did describe a smarter input method when introducing iOS 17: autocorrect can now fix not only spelling errors but also grammatical errors as the user types.

Autocorrected words are temporarily underlined, making it clear which words have been changed, and a single tap reverts a word to what was originally typed.

More importantly, on-device machine learning lets the keyboard refine its model with every keystroke, bringing autocorrect to unprecedented accuracy.
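Apple has not published how its autocorrect works, but the idea of correction that improves as it learns from the user's own typing can be sketched with a toy model: generate candidates one edit away from a misspelled word and rank them by word frequencies counted from the user's accepted text. This is purely illustrative, not Apple's implementation.

```python
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit (delete, replace, insert, or swap) away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in ALPHABET]
    inserts = [a + c + b for a, b in splits for c in ALPHABET]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + replaces + inserts + swaps)

class ToyAutocorrect:
    """Illustrative only: ranks candidates by per-user word counts."""

    def __init__(self):
        self.freq = Counter()

    def learn(self, text):
        # "On-device learning" here is just counting the user's words.
        self.freq.update(text.lower().split())

    def correct(self, word):
        if word in self.freq:
            return word                       # already a known word
        candidates = [w for w in edits1(word) if w in self.freq]
        if not candidates:
            return word                       # nothing better known
        return max(candidates, key=lambda w: self.freq[w])

ac = ToyAutocorrect()
ac.learn("the quick brown fox jumps over the lazy dog")
print(ac.correct("teh"))   # -> "the"
print(ac.correct("foxx"))  # -> "fox"
```

The more text the user types, the better the frequency table reflects their vocabulary, which is the essence of a keyboard that adapts to its owner.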

In addition, word prediction is powered by a state-of-the-art Transformer language model, so the keyboard can suggest the next word, or even complete a whole sentence, very quickly.

This highly personalized language model also lets the keyboard learn the user's writing habits, which can substantially improve accuracy for both typed and dictated input.
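The mechanics of next-word prediction can be sketched with a much simpler stand-in for the Transformer: a bigram model that, given the previous word, suggests the most likely next one from the user's own history, and chains those suggestions to complete a phrase. Again, a hedged toy, not the model iOS 17 actually ships.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Illustrative next-word prediction from a user's typing history."""

    def __init__(self):
        self.next_counts = defaultdict(Counter)

    def learn(self, text):
        # Count which word follows which in the user's text.
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.next_counts[prev][nxt] += 1

    def predict(self, prev, k=3):
        """Top-k suggestions for the word after `prev`."""
        return [w for w, _ in self.next_counts[prev.lower()].most_common(k)]

    def complete(self, prev, max_words=5):
        """Greedy completion: repeatedly take the single likeliest next word."""
        out, word = [], prev.lower()
        for _ in range(max_words):
            nxt = self.predict(word, k=1)
            if not nxt:
                break
            word = nxt[0]
            out.append(word)
        return out

p = BigramPredictor()
p.learn("see you later")
p.learn("see you soon")
p.learn("see you later today")
print(p.predict("you"))   # "later" (seen twice) ranks above "soon"
print(p.complete("see"))  # greedy chain: you -> later -> today
```

A Transformer replaces the one-word lookback with attention over the whole context, but the product behavior described above, suggesting the next word or a full sentence, follows the same predict-and-chain pattern.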

A brand-new "Journal" app

Alongside iOS 17, Apple also released a new "Journal" app, which uses on-device machine learning to create personalized memories and writing suggestions based on the user's photos, music, workouts, and other data, and surfaces these recording and writing prompts at just the right moment.

This means that, within the iPhone's compute budget, the device can already run local semantic understanding of multimedia content such as text and photos, and offers a degree of generative-AI functionality.

Even so, Apple has chosen to keep a low profile here. In the view of Big Model House, Apple's AI capabilities are indeed relatively weak compared with top-tier large models such as GPT.

Moreover, as a technology company whose revenue comes mainly from consumer electronics and services, Apple needs to emphasize new features that improve the user experience and deepen user stickiness, rather than the comparatively abstract concept of AI.

Scene and action recognition

Beyond that, capabilities such as spatial audio computation and the tracking of eye movements and hand gestures are also areas where AI technology is at work. With the compute provided by the M2 and R1 chips, Apple has achieved smooth on-device deployment of AI, fully demonstrating its ability to apply artificial intelligence in consumer electronics.

Although Apple did not overemphasize its AI capabilities at WWDC 2023, the product features show that AI has permeated every detail of its products and become an important means of improving the user experience.

As one of the world's most influential technology companies, Apple has not publicized its achievements in artificial intelligence, but judging from its track record of blockbuster products, its strength in the field should not be underestimated.
