From the Nexus series, which debuted in 2010, to the latest Pixel devices, Google has created some of the best Android flagships, establishing itself as a serious competitor to leading smartphone manufacturers like Samsung and Apple. With the new Pixel 3a and Pixel 3a XL, Google entered the mid-range market segment, promising to deliver the pure Android experience to more users.
For half a decade, Google developed its Nexus series in collaboration with different manufacturers. The Nexus One, released in January 2010, was manufactured by HTC; in the following years, Google worked closely with Samsung, LG, Motorola, and Huawei. The Nexus series came into being to solve the major issues Android was facing: system fragmentation and slow updates. It was meant to be a point of reference for Android and to provide users with the best Android experience possible. In 2015, however, Google decided to take control of the manufacturing process for its line of smartphones.
September 2017 marked a turning point for Google: the company acquired part of HTC’s hardware manufacturing division for $1.1 billion. Many tech experts were surprised by the bold move. Why take complete control over manufacturing? It seemed unnecessary, since the manufacturing process of Google’s devices was already under the company’s oversight. It turns out Google wanted to build its own hardware in order to prioritize research and development on specific components: cameras and sensors. This approach influenced not only the development of Android but also the entire mobile industry.
Today, most flagship devices have dual cameras, and some manufacturers go much further: the Nokia 9 PureView currently holds the record for the largest multi-lens array with its penta-lens camera, and Light is reportedly developing a phone with no fewer than nine lenses. A macro photo full of vivid detail or a sweeping landscape is barely a challenge for the 2019 flagships. The Huawei P30 Pro, for example, has a triple-lens setup: a telephoto lens with 5x optical zoom, a wide-angle lens, and an ultra-wide-angle lens. Imaging sensors are getting more and more advanced, offering optical image stabilization, optical zoom, phase-detection autofocus, and even RAW capture. Smartphone manufacturers are constantly trying to surpass each other with new camera features, and our enthusiasm for selfies and food pics is not the only catalyst of this race.
Starting with the first Google Pixel, released in 2016, Google put a lot of emphasis on the camera. It scored 90 points on DxOMark Mobile, the reference benchmark for smartphone cameras, and it was one of the best cameras ever seen on a mobile device at the time. The software was optimized with the camera in mind, providing HDR+ and unlimited cloud storage for pictures on Google Photos. This last point is key to understanding Pixel smartphones. Google is first and foremost a search company. Its mission, as stated by CEO Sundar Pichai in the opening keynote of Google I/O 2019, is to “organize the world’s information and make it universally accessible.” The future of search is visual and auditory: point your smartphone’s camera at something and you instantly get information about what is in front of you. Google Lens, the visual search feature launched alongside the Google Pixel 3 series in 2018, offers a live viewfinder that provides information about places and objects. It is not surprising, then, that the Google Pixel is a phone designed for photography.
High-resolution and RAW images fuel the development of computational imaging, the process of forming images with the help of algorithms. Smartphones like the Google Pixel capture a sequence of images that are aligned and overlaid to produce a single high-quality result, a process called burst photography. Thanks to advanced image processing and machine learning algorithms, the process appears instantaneous. Pixel devices have a dedicated co-processor for image processing, the Pixel Visual Core; Apple’s Image Signal Processor (ISP) fills the same role in iPhones. Burst photography also generates image datasets of colossal proportions: overall, smartphone users take millions of photos per day, and the value of this data for developing image recognition and 3D-rendering algorithms is tremendous. Such techniques made possible the recent integration of 3D images into Google Search.
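The core idea behind burst merging can be illustrated with a toy example. The sketch below simply averages a stack of already-aligned frames, which reduces random sensor noise by roughly the square root of the number of frames; `merge_burst` is a hypothetical helper, and a real pipeline such as HDR+ additionally performs tile-based alignment and robust, motion-aware weighting rather than a plain average.

```python
import numpy as np

def merge_burst(frames):
    """Average a stack of aligned frames to reduce noise.

    Toy stand-in for burst merging: real pipelines also align
    tiles and weight frames robustly to handle motion.
    """
    stack = np.stack(frames).astype(np.float64)
    return stack.mean(axis=0)

# Demo: a synthetic "scene" plus independent per-frame sensor noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(64, 64))           # ground-truth image
frames = [scene + rng.normal(0, 10, scene.shape)     # 8 noisy captures
          for _ in range(8)]

merged = merge_burst(frames)
single_err = np.abs(frames[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
# Averaging N frames cuts the noise standard deviation by ~sqrt(N).
print(f"single-frame error: {single_err:.2f}, merged: {merged_err:.2f}")
```

Running the demo shows the merged frame sitting much closer to the ground truth than any single noisy capture, which is exactly why a burst of mediocre exposures can beat one "perfect" shot.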
The latest Pixel devices mark Google’s entry into the mid-range market. Until now, Google covered the entry-level market with its affordable Android One devices and the high-end market with its own Pixel flagships. The Pixel 3a and Pixel 3a XL are cheaper versions of last year’s Pixel 3 flagships. Priced at $399 and $479, the new devices have a less powerful processor, the Qualcomm Snapdragon 670 paired with 4GB of RAM, and lack wireless charging and water resistance. On the camera, however, no compromises were made: the devices inherit the same camera as the Pixel 3 flagships, although the Pixel Visual Core co-processor is absent and image processing runs on the main chipset instead. Google is thus making its top-rated smartphone camera accessible to a wider audience. With a larger user base, the company can improve its image processing technology and stimulate user-generated content. More and more Android users are encouraged to contribute to Google Maps and Search through the Local Guides program, which counts over 50 million contributors and approximately 700,000 new places added every month. Photos are, of course, essential for both Maps and Search.
The 12.2-megapixel cameras have optical image stabilization, a large sensor with a 1.4µm pixel size, and, of course, manual controls. To keep the new Pixel devices at an attractive price point, internal storage is limited to 64GB; there is no microSD slot for expansion, but unlimited cloud storage for photos is, of course, included. Given the company’s focus on image processing, this is not at all surprising. The camera is the most important component of the new Pixel phones, and this trend is leaving its mark on the entire industry.