
What is ARKit?

ARKit, Apple’s augmented reality framework, has been a helpful tool for AR development for a while now. It runs on iOS across Apple’s own devices, which puts AR within reach of anyone who owns one. The ease of use and the familiar iOS experience mean a chance to create AR applications and experiences on iPhones and iPads. Over the past few years, the system has evolved with each new iOS update. Developers can build worlds and rich environments with generated objects, and other users can then come in and visit, get inspired, and maybe make a couple of adaptations of their own. ARKit has evolved from simple single-player experiences into impressive social gaming.

A large part of ARKit’s success is its compatibility with iOS and the hardware in Apple’s mobile devices. It first arrived with iOS 11 and has developed with each upgrade. The system requires at least an A9 processor, which lets it run smoothly with detailed content for a more engaging experience. ARKit also works with the camera, accelerometer, gyroscope, and other sensors to take full advantage of what the device can do to map scenes and generate augmentation.

Apple as a company is committed to augmented reality, which is clear from the ongoing upgrades to ARKit and the major improvements seen in ARKit 2. Tim Cook was keen to highlight the potential of ARKit back when Apple launched iOS 11. The view then was that AR was the future, and the ongoing growth of apps and related tech shows he was correct. As you will see below, the features and capabilities have gone from strength to strength. We will look at the toolkit on offer with ARKit and its implications for the development of high-end experiences. We will also look at how the kit has evolved across the different iOS upgrades and phones. Finally, we will consider whether Apple will be the company to continue to lead the AR revolution.

 

What ARKit is and how it works.

Apple brought us ARKit in 2017 with iOS 11, and it has grown from there, with developers building their AR apps on the Mac via Xcode. It makes a lot of sense for Apple developers to take advantage of this system because of the tight links with iOS: you are building for the same hardware and cameras as those playing the games and using the apps.

To better understand precisely what ARKit brings to the table in terms of app generation and augmentation, we need to go through the toolkit. There is a lot here that developers can use to make the process much easier, since the framework handles the low-level processing in a developer-friendly way. At its core, this is all about the following:

  • scene understanding
  • rendering
  • tracking

We need to be able to understand the scene in front of us for a more immersive experience and cleaner execution. Better rendering makes it easier for the software to handle the placement of augmented items within that real scene. Then there is the tracking, where you can be sure of precise orientation by tracking targets and faces. When it all comes together, it allows for seamless experiences and attractive apps. So, let’s learn a little more about the toolkit that makes this possible.
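Before digging into each piece, here is a minimal sketch of how these parts meet in code. The view and configuration types come from ARKit itself; the surrounding view controller is illustrative.

```swift
import UIKit
import ARKit

// A minimal sketch: ARSCNView ties tracking, scene understanding,
// and rendering together, so running a configuration starts all three.
class ARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking is the fullest configuration; the more
        // limited modes are covered in the sections below.
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()  // stop tracking when off screen
    }
}
```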

 

Orientation tracking

Let’s start with the tracking systems. ARKit brings us orientation tracking via the internal sensors of the device. It knows when the phone turns and, therefore, understands that the image and viewpoint change too. If you move the phone as you turn your head to see another part of the room, the information on the screen has to follow. The app then has a more precise idea of the physical orientation of a user within a spherical virtual environment, which also helps provide a better idea of the distance between objects across a wider area.
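ARKit exposes this rotation-only mode directly. A minimal sketch, assuming a `sceneView` (an `ARSCNView`) already exists:

```swift
import ARKit

// Orientation-only (three-degrees-of-freedom) tracking: the session
// follows how the device rotates, but not movement through space.
let configuration = AROrientationTrackingConfiguration()
sceneView.session.run(configuration)
```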

 

World tracking.

Once you have this orientation in place, you have to add in world tracking. It isn’t enough to understand the user’s orientation within a world. The system also has to understand when the device moves within a physical location, such as the user taking two steps forward. It does this through the information the camera offers about the user’s current surroundings. This sort of information is vital to help users move around and interact with augmented reality situations more easily. Without it, you could physically walk forward and get no further within the app.

The complex understanding of the world around the user continues with the use of visual-inertial odometry. Here the system records and identifies features from various angles for a better idea of their position. The process with the cameras and imaging isn’t actually too dissimilar to the way our own eyes work: it helps with depth perception for a better idea of the relationship between objects. The more information gathered, the better the response and the impact on the augmentation.

This build-up of layers and detail is essential for a more immersive experience and a world map that is going to work. World maps have to correspond with a virtual coordinate space. There has to be that seamless link so users can place objects where they choose. ARKit is a great tool for building up this kind of landscape and helps developers each step of the way. It reports whether tracking is not available, limited due to some issue with the adequacy or accuracy of the data, or normal enough for strong augmentation.
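Those three states surface through the session delegate as the camera’s tracking state. A sketch of how an app might respond:

```swift
import ARKit

// ARKit reports world-tracking quality through its session delegate,
// mirroring the notAvailable / limited / normal states described above.
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    switch camera.trackingState {
    case .notAvailable:
        print("No tracking data yet; hold off on placing content")
    case .limited(let reason):
        print("Tracking limited: \(reason)")  // e.g. insufficientFeatures
    case .normal:
        print("Tracking normal; safe to place virtual objects")
    }
}
```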

 

Plane detection.

Finally, we have plane detection. What this essentially means is that the system recognizes the planes within a real landscape and uses those for a more precise form of image superimposition. Say you have a game where players can create items and place them in a room. It would look unrealistic if the object were to float or get distorted. So, it is important that ARKit can detect variations, such as bumps and curves, for a more accurate idea of the planes and better object placement. The result looks clean and immersive through the camera of the smartphone.
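In code, plane detection is an opt-in flag on the world-tracking configuration, and detected planes arrive as anchors. A sketch, using the delegate callback from SceneKit’s ARSCNViewDelegate:

```swift
import ARKit

// Opt in to plane detection; ARKit then reports each detected
// surface as an ARPlaneAnchor with a center and extent.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
sceneView.session.run(configuration)

// ARSCNViewDelegate callback: attach virtual content to the new
// plane's node here so it sits on the surface instead of floating.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    print("Detected plane with extent \(planeAnchor.extent)")
}
```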


Features added in iOS 11.3

Once Apple upgraded to iOS 11.3, we saw a new version of ARKit: ARKit 1.5. This time, there was an emphasis on improving the accuracy of the AR generated, which in turn led to an improved experience for the user. It was pretty much ARKit as we knew it, but with a few tweaks in all the right places. One of the noticeable differences for users was the way the resolution changed with the camera-based viewpoint. This small detail helped with the authenticity and enjoyment of the experience. Other significant changes included the following.


Vertical planes.

When ARKit first started, everything ran purely on horizontal planes. While effective in its way for the detection, tracking, and placement of objects on flat floors and surfaces, it was also a little limiting. The addition of vertical planes took the platform in a whole new direction, opening up the space so that walls became usable, physical surfaces rather than purposeless borders. The system also got better at identifying surfaces that weren’t completely flat and adapting to them.
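From iOS 11.3 onward, the plane-detection flag accepts both options at once; it is a one-line change on the earlier sketch:

```swift
// ARKit 1.5: detect floors and walls in the same session.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)
```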

 

Image recognition.

Then there was the new and improved image recognition in ARKit 1.5, which brought in another dimension. While the focus had primarily been on 3D scenes and real faces, in order to create that perfect tracking and immersive experience, developers realized they needed a 2D aspect too. Image detection came in so that apps could recognize known flat images, including pictures of people such as paintings and posters, instead of assuming there was nothing there. This also brought in new opportunities with created content.
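Known images are supplied to the session as reference images. A sketch, assuming the images live in an asset-catalog group named “AR Resources” (the name is illustrative):

```swift
import ARKit

// ARKit 1.5: hand the session a set of known 2D reference images;
// each one found in the camera feed arrives as an ARImageAnchor.
let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) {
    configuration.detectionImages = referenceImages
}
sceneView.session.run(configuration)
```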

 

Then came iOS 12.

The arrival of iOS 12 in 2018 brought a whole new way of using ARKit to create AR content on mobile devices. The augmentations became more convincing and appealing, with a greater focus on creating a believable world. Improvements in this release included the following.

 

Saving and loading maps

This is something that should have been a no-brainer much earlier on. With previous incarnations of ARKit, you could not save your world maps across multiple devices or sessions. The app had no memory, therefore, of items that users had placed in locations during a previous session. This meant a limited sense of achievement and the knowledge that you had to make the most of a single session. However, relocalization changed all that and made the worlds more real.
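In ARKit 2 this works by capturing an `ARWorldMap` from a running session, archiving it, and handing it back to a later session. A sketch of both halves:

```swift
import ARKit

// Capture the session's current world map and archive it so the
// placed anchors survive beyond this session.
sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else { return }
    if let data = try? NSKeyedArchiver.archivedData(
            withRootObject: map, requiringSecureCoding: true) {
        // Persist `data` to disk, or hand it to another device.
    }
}

// Later, or on another device: relocalize against the saved map so
// previously placed objects reappear in the same real-world spots.
func runSession(with savedMap: ARWorldMap) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = savedMap
    sceneView.session.run(configuration)
}
```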

Then there was the MultipeerConnectivity framework, through which users could share worlds via AirDrop, Bluetooth, or Wi-Fi. This new ability to share across devices meant that others could come and experience what you had made; ARKit understood that a shared world needed a shared scene. It was a big breakthrough into a more social experience.
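Sharing rides on the same archived map data. A sketch, assuming an `MCSession` named `mcSession` is already connected to nearby peers:

```swift
import MultipeerConnectivity

// Send the archived ARWorldMap bytes to every connected peer so
// their sessions can relocalize into the same shared scene.
func share(worldMapData: Data, via mcSession: MCSession) {
    do {
        try mcSession.send(worldMapData,
                           toPeers: mcSession.connectedPeers,
                           with: .reliable)
    } catch {
        print("Failed to send world map: \(error)")
    }
}
```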

In came SwiftShot as a way of taking advantage of this software.

After sorting out these issues with sharing content, Apple brought in its AR game called SwiftShot. The basic premise was for developers to study the way the game worked, with players interacting with the same objects across devices. This was where the concepts of saving maps over sessions and viewing the same scenes across devices came into their own. It also led to more persistent objects in virtual locations, visible when other people visited on their own devices. From there, visitors could even create items and leave them as surprise gifts. In other words, this was a significant breakthrough in multiplayer AR gaming.

 

Image tracking.

Another improvement in this release was ARImageTrackingConfiguration. Here there was a greater focus on the tracking of 2D images, with improved potential for tracking multiple images at once in a dense tracking stage. Dense tracking warps the camera view for a better comparison to a reference image, and it can also generate an error image to minimize problems. In turn, this meant the chance to build on software already using 2D tracking for a more reliable experience.
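A sketch of the configuration; the resource group name is again illustrative:

```swift
import ARKit

// ARKit 2's dedicated 2D image tracking: follow several known
// images at once, even as they move within the camera frame.
let configuration = ARImageTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) {
    configuration.trackingImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 4  // track up to four at once
}
sceneView.session.run(configuration)
```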

 

Object detection.

Finally, there is the updated ARKit 2 object detection system for 3D scanning. Here 2D and 3D imaging were both important, but as individual elements that together created a much better experience. This new approach read real-world 3D structures as though they were world maps and used the same new dense tracking as the 2D version. It meant the chance to create a reference object for comparison against the real-world version. Another thing that made this approach so interesting to developers was the level of detail and reliability with textured objects. For example, they found that ARKit could recognize action figures, recreate them, and then animate them. Others used the tech to recreate detailed models of real-world cars and place them in their augmented world. It’s one way to have a Porsche on the driveway.
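Detection mirrors the 2D flow, but with reference objects produced by a prior scanning session. A sketch, with an assumed asset group named “AR Objects”:

```swift
import ARKit

// ARKit 2: detect previously scanned 3D objects. The reference
// objects come from an earlier ARObjectScanningConfiguration session.
let configuration = ARWorldTrackingConfiguration()
if let referenceObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "AR Objects", bundle: nil) {
    configuration.detectionObjects = referenceObjects
}
sceneView.session.run(configuration)
```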

 

Improved face tracking.

Next, we have the improvements in facial tracking. Facial tracking is a fun part of using filters and Animoji when FaceTiming other people. The right action can cheer up a friend or make a child laugh. But it can be alarming when the tech goes wrong, and a little disappointing if the system isn’t fully responsive. That is why the ARKit 2 system worked to improve face tracking on a more detailed level. It is better able to follow the gaze, while also tracking each eye individually, which means the ability to wink with Animoji. Then there is the tongue tracker. This might sound odd until you realize it’s human nature to stick out your tongue when playing with these apps. Now that compulsion pays off.
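These refinements surface as blend-shape coefficients on the face anchor. A sketch reading two of them, assuming the session runs ARFaceTrackingConfiguration on a TrueDepth device:

```swift
import ARKit

// ARKit 2 face tracking: each facial feature is a 0-to-1 blend
// shape, including per-eye blinks and the tongue.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    let leftBlink = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
    let tongueOut = faceAnchor.blendShapes[.tongueOut]?.floatValue ?? 0
    if leftBlink > 0.8 { print("Wink detected") }   // per-eye tracking
    if tongueOut > 0.5 { print("Tongue is out") }   // new in ARKit 2
}
```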

 

Environment texturing.

Finally, the more advanced ARKit 2 offers a new form of environment texturing. This system is much smarter, with a greater bank of knowledge to work from: Apple trained a neural network on thousands of environments. Knowledge is clearly power here, because doing so gave ARKit the ability to generate a richer world map and scene for users. It has an idea of what could fit in the gaps, fleshing out and smoothing over the discrepancies in the scene. This intelligence also means a greater level of accuracy in the rendering of objects.
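Enabling it is a single setting; the machine-learned fill-in happens behind the scenes:

```swift
import ARKit

// ARKit 2: let the framework build environment textures
// automatically, so reflective virtual objects pick up plausible
// reflections even for parts of the room the camera never saw.
let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .automatic
sceneView.session.run(configuration)
```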

This level of accuracy and realism is important in generating AR content because the human brain doesn’t like being fooled. We can deal with anything that is obviously stylized or fantastical as comfortably as anything that is real, but mistakes with realistic images are jarring. It is like the strange feeling you get watching hyper-realistic image capture that just misses the mark. The brain needs all the light and shadow to be in the right place, whether we are consciously aware of it or not. So, accurate environment texturing is essential here.

 

Why is Apple so focused on AR?

Apple seems keen to cement itself as a major player in the world of AR. The ongoing developments in ARKit and their impact on app development are important for the continuation of AR on iOS. All of this work and synchronicity between the system and Apple devices is also a nice trial run for Apple Glass. There is talk of Apple announcing Apple Glass near the end of 2020, with devices on the market a couple of years later. So, it makes sense for the tech and applications to be established and reliable by then. There is no point in launching a device with nothing to play on it.

All eyes should be on Apple in the coming months for the most accessible and interesting tech on the market. But that doesn’t mean that other companies have thrown in the towel. There will always be competition here, and Google and Microsoft can’t fall behind. Google may have lost out with Google Glass, but its Tango and ARCore systems for mobile AR development were both advantageous in their way, and there is still room for improvement in the years ahead. Meanwhile, Microsoft may be about to take another shot with the HoloLens and a Windows Mixed Reality system. So, if A is for Apple and AR, M may be for Microsoft and MR.

Bear in mind that all of this is speculation right now.

We don’t know who will emerge triumphant in this battle. Apple could ride the wave for a long time with the others trailing behind. Or, we could find that the desire for AR drops off and people turn to MR as it becomes more readily available. Either way, many continue to back Apple and hope that their investment in AR as the future of tech will pay off. The success of ARKit 1 and 2 suggests that it will.
