[1] Boston Dynamics just released the latest update for its famous quadruped robot Spot, improving its ability to perform inspections and collect data without human intervention. Called Spot Release 3.0, the
new update adds "flexible autonomy and repeatable data capture, making Spot the data collection solution you need to make inspection rounds safer and more efficient." But while this Boston Dynamics announcement isn't accompanied by a flashy video, it could have a huge impact on Spot's position in the industrial mobile robot market, where it can reduce the costs of IoT instrumentation and the risks of exposing human operators to environmental hazards.
The update reduces the need for human guidance and intervention in Autowalk, one of the robot's main features, which enables it to record and repeat inspection paths. Spot's pathfinding has also been improved to adapt to changes along those paths, such as new obstacles.
Boston Dynamics has also improved Spot's data collection and processing capabilities, including the ability to take images from the same angle during Autowalk cycles and to have them processed by deep learning models running on the device or in the cloud. Another big feature of the new update is improved compatibility with cloud services from Microsoft, Amazon, and IBM. Spot's sensing capabilities can be an alternative to manual data-logging or to IoT instrumentation, the installation of smart sensors on old infrastructure. The cloud support makes it possible to automatically feed data collected during Spot's Autowalk missions into companies' broader data-based workflows. The data can be combined with other sources of information and processed with analytics and
machine learning tools for tasks such as tracking trends, detecting anomalies, and triggering warnings.
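As a rough illustration of what that kind of downstream processing can look like, here is a minimal sketch that flags anomalous gauge readings collected over repeated inspection rounds with a simple z-score test. The readings, threshold, and function names are hypothetical and are not part of Boston Dynamics' or any cloud provider's API.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the series (simple z-score test)."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, value in enumerate(readings)
            if abs(value - mu) / sigma > threshold]

# Hypothetical pressure-gauge values captured on successive Autowalk rounds.
pressure_psi = [101.2, 100.8, 101.5, 100.9, 101.1, 117.6, 101.0]

for i in flag_anomalies(pressure_psi):
    print(f"Round {i}: {pressure_psi[i]} psi deviates from the trend; raise a warning")
```

In practice such logic would run inside an analytics pipeline in the cloud, but the idea is the same: because images and readings are captured from the same angle on every pass, simple statistical comparisons across rounds become meaningful.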
>> Read more here.
[2] When it comes to
AI, algorithmic innovations are substantially more important than hardware – at least where the problems involve billions to trillions of data points. That's the conclusion of a team of scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), who conducted what they claim is the first
study on how quickly algorithms are improving across a broad range of examples.
The CSAIL team, led by MIT research scientist Neil Thompson, who previously coauthored a paper showing that algorithms were approaching the limits of modern computing hardware, analyzed data from 57 computer science textbooks and more than 1,110 research papers to trace the history of
algorithmic improvements. In total, the team looked at 113 "algorithm families," or sets of algorithms that solved the same problem and had been highlighted as most important by the textbooks. For large computing problems, 43% of algorithm families had year-on-year improvements that were equal to or larger than the gains from Moore's law, the principle predicting that the speed of computers will roughly double every two years. In 14% of problems, the performance improvements vastly outpaced those that came from improved hardware, with the gains from better algorithms particularly meaningful for big data problems.
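To see why better algorithms can dominate hardware gains at large problem sizes, consider a made-up comparison (these are not figures from the MIT study): replacing an O(n²) algorithm with an O(n log n) one yields a speedup that grows with the problem size, while Moore's law delivers a fixed factor per year regardless of n.

```python
import math

def algorithmic_speedup(n):
    """Speedup from swapping an O(n^2) algorithm for an O(n log n) one,
    ignoring constant factors (illustrative numbers only)."""
    return (n ** 2) / (n * math.log2(n))

def moores_law_gain(years, doubling_years=2.0):
    """Hardware speedup from performance doubling every `doubling_years`."""
    return 2 ** (years / doubling_years)

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"n = {n:>13,}: algorithmic speedup ~{algorithmic_speedup(n):,.0f}x")

print(f"A decade of Moore's law: ~{moores_law_gain(10):,.0f}x")
```

For a billion data points the algorithmic swap is worth tens of millions of times more computation, whereas a decade of Moore's law yields roughly a 32x gain, which is the asymmetry behind the study's finding for big-data problems.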
The new MIT study adds to a growing body of evidence that the sheer size of AI models matters less than the algorithms and architectures behind them. For example, earlier this month a team of Google researchers published a
study claiming that a model much smaller than GPT-3 –
fine-tuned language net (FLAN) – bests GPT-3 by a large margin on a number of challenging benchmarks. But there are findings to the contrary, too. In 2018, OpenAI researchers released an
analysis showing that from 2012 to 2018 the amount of compute used in the largest AI training runs grew more than 300,000 times, with a 3.5-month doubling time, far exceeding the pace of Moore's law. Still, if algorithmic improvements receive greater attention in the years to come, they could help address some of the other problems associated with large language models, such as environmental impact and cost.
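As a back-of-the-envelope check of those figures (an approximation, not OpenAI's own calculation), a 300,000-fold increase corresponds to a bit more than 18 doublings, which at one doubling every 3.5 months works out to a little over five years, while Moore's law over the same span would contribute only a single-digit factor:

```python
def growth(months, doubling_months):
    """Growth factor after `months` given a fixed doubling time."""
    return 2 ** (months / doubling_months)

months = 64  # roughly five and a third years, approximating the span of OpenAI's analysis
print(f"Training compute (3.5-month doubling): ~{growth(months, 3.5):,.0f}x")
print(f"Moore's law (24-month doubling):       ~{growth(months, 24):,.1f}x")
```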
>> Read more here.
[3] Technology decision-makers are becoming more interested in synthetic data, with nearly nine in 10 (89%) believing organizations that fail to adopt synthetic data are at risk of falling behind the curve. This is according to new research by
Synthesis AI, in conjunction with
Vanson Bourne, which showed wide agreement that synthetic data will be an essential enabling technology and key to staying ahead.
AI is driven by the speed, diversity, and quality of data. However, supervised learning approaches commonly used to
train AI systems today are fundamentally limited, as humans do not scale and, more importantly, cannot label key attributes needed to enable emerging industries such as AR/VR, autonomous vehicles, robotics, and more. The survey revealed that synthetic data, or computer-generated image data that models the real world, could be a solution to the
time-consuming and cost-prohibitive nature of supervised learning. Among respondents knowledgeable about synthetic data technologies, 50% believe a benefit of synthetic data is overcoming the limited labels provided through supervised learning and human annotation, and 82% recognize that their organization is exposed to risk when it collects "real-world" data.
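As a toy illustration of what computer-generated image data with built-in labels looks like (a minimal sketch using the Pillow library, unrelated to Synthesis AI's actual tooling), the generator below knows exactly what it drew, so every image comes with a perfect annotation at no labeling cost:

```python
import random
from PIL import Image, ImageDraw

def make_synthetic_sample(size=64):
    """Render one synthetic image and return it with its ground-truth label,
    which is known by construction rather than annotated by a human."""
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    shape = random.choice(["circle", "square"])
    x0 = random.randint(4, size // 2)
    y0 = random.randint(4, size // 2)
    x1 = x0 + random.randint(10, size // 3)
    y1 = y0 + random.randint(10, size // 3)
    if shape == "circle":
        draw.ellipse([x0, y0, x1, y1], fill="black")
    else:
        draw.rectangle([x0, y0, x1, y1], fill="black")
    return img, {"class": shape, "bbox": (x0, y0, x1, y1)}

# Generate a small labeled dataset with zero human annotation effort.
dataset = [make_synthetic_sample() for _ in range(100)]
print(dataset[0][1])  # e.g. {'class': 'circle', 'bbox': (12, 7, 30, 25)}
```

Real synthetic-data pipelines render photorealistic scenes with varied lighting, poses, and occlusion, but the principle is the same: the labels come from the renderer, not from human annotators.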
Further, the report identified a lack of organizational knowledge (67%) and slow buy-in from colleagues (67%) as the most prominent barriers to adopting synthetic data. Synthetic data is just beginning its cycle of adoption and value to the enterprise, and many industries are starting to experiment with the technology. Buy-in from colleagues and decision-makers will be critical for synthetic data to be accepted. Despite the identified barriers, more than half (59%) of decision-makers believe their industry will utilize synthetic data, either independently or in combination with "real-world" data, in the next five years.
>> Read more here.