Today the Android Developers Blog announced the release of Developer Preview 2 for Android Things – Google's lightweight, Android-based OS for the Internet of Things. For those looking to build smart devices, or apps that interact with them, Android Things offers a platform with access to Google Play Services and the Google Cloud Platform, while supporting development through the familiar tools and libraries of Android Studio and the Android SDK. Thanks to its System-on-Module (SoM) architecture, developers can carry computing modules from development boards into large-scale production runs using the same Google-provided Board Support Package (BSP), the blog states.
This release includes bug fixes and a couple of key additions. Issues with the Peripheral I/O (PIO) API have been repaired, while known Bluetooth issues have been acknowledged but not yet fixed. The team has also added support for USB audio on the Intel Edison and Raspberry Pi 3 (the NXP Pico already supports audio natively), as well as support for the Intel Joule platform – the most powerful of the supported development boards. Improved support for developer-created drivers also makes an appearance: developers can now create drivers within their APK and bind them to the Android Things framework, allowing an application to inject hardware events from within – all without modifying the kernel or the Hardware Abstraction Layer (HAL). Google maintains a repository of these drivers for various hardware components, where developers can share their own and draw from others'.
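As a rough sketch of that user-driver mechanism, here is what registering a button driver from inside an APK looked like in the preview-era documentation. The class and method names (`UserDriverManager`, `InputDriver`, `emit`) are taken from the preview samples and may well have changed in later releases, and `"sample-button"` is a made-up driver name – treat this as illustrative, not definitive:

```java
import android.view.KeyEvent;
import com.google.android.things.userdriver.InputDriver;
import com.google.android.things.userdriver.UserDriverManager;

// Hypothetical driver for a GPIO push button, registered entirely from an APK.
// No kernel or HAL changes are involved: the driver binds to the framework
// and injects standard key events that any Android app can receive.
public class ButtonDriverExample {
    private static final int EV_KEY = 1; // Linux input-event type for key events

    private InputDriver mDriver;

    public void register() {
        mDriver = new InputDriver.Builder(EV_KEY)
                .setName("sample-button")
                .setVersion(1)
                .setKeys(new int[]{KeyEvent.KEYCODE_SPACE})
                .build();
        UserDriverManager.getManager().registerInputDriver(mDriver);
    }

    // Called from the hardware callback when the physical button changes state.
    public void onButtonChanged(boolean pressed) {
        int action = pressed ? KeyEvent.ACTION_DOWN : KeyEvent.ACTION_UP;
        mDriver.emit(new KeyEvent[]{
                new KeyEvent(action, KeyEvent.KEYCODE_SPACE)
        });
    }
}
```

The appeal of the design is that once the driver emits a `KeyEvent`, it flows through the normal Android input pipeline, so ordinary app code handles the button exactly as it would a keyboard press.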
Perhaps most exciting is the added availability of two key libraries – native PIO and TensorFlow. The native PIO APIs enable developers to write C or C++ code that communicates with peripherals via Android Things. Such peripherals could include lights, door locks, or even communication hubs themselves. This is a fundamental step toward Android Things' goal of becoming a unifying platform for IoT devices. TensorFlow, on the other hand, brings machine learning to these typically unintelligent devices. Here, TensorFlow lends its neural networks to object recognition and image classification; an included sample demonstrates this by speaking the results aloud via text-to-speech (TTS). Best of all, this functionality can be added to an app with a single line of code in the build.gradle file. With TensorFlow integrated into Android Things, security cameras built on the platform could recognize people, objects, or even situations, then send the user contextually appropriate alerts or trigger customized actions.
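For the curious, the build.gradle one-liner mentioned above looks roughly like this. The artifact coordinates are assumed to be the prebuilt TensorFlow Android inference library as published around the time of this release, so check the current name and version before depending on it:

```
dependencies {
    // Assumed artifact: the prebuilt TensorFlow Android inference library.
    compile 'org.tensorflow:tensorflow-android:+'
}
```

That single dependency is what lets the sample load a trained image-classification model and run inference on-device, with no native build step on the developer's part.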
With capable in-house AI and a growing developer community in tow, what’s to stop Google from one day controlling all of your Things?
Source: Android Developers Blog
from xda-developers http://ift.tt/2ksu32p