#Transfer Learning | Applications
#TensorFlow | Solutions to accelerate machine learning tasks | Prepare data | Build machine learning (ML) models | Deploy models | Run models in production and keep them performing | Multidimensional array-based numeric computation (similar to NumPy) | GPU and distributed processing | Automatic differentiation | Model construction, training, and export
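The automatic differentiation noted above can be illustrated with a toy forward-mode implementation using dual numbers. This is only a sketch of the underlying idea; TensorFlow itself uses reverse-mode autodiff (`tf.GradientTape`) over computation graphs at scale.

```python
# Minimal forward-mode automatic differentiation with dual numbers.
# Illustrative only: TensorFlow uses reverse-mode autodiff at scale.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value   # f(x)
        self.deriv = deriv   # f'(x), propagated alongside the value

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (fg)' = f'g + fg'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def grad(f, x):
    """Evaluate df/dx at x by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).deriv

# d/dx (3x^2 + 2x) = 6x + 2, so at x = 2.0 this prints 14.0
print(grad(lambda x: 3 * x * x + 2 * x, 2.0))
```

Every arithmetic operation carries its derivative along with its value, which is why no symbolic manipulation or numerical finite differencing is needed.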
#Intel | AI Kit | Processors
#Udacity | Simulator
#University of Texas at Austin | Transfer Learning for Reinforcement Learning on a Physical Robot
#Google Research | Visual Transfer Learning for Robotic Manipulation
#DSpace@MIT | Visual Transfer Learning for Robotic Manipulation
#Toyota Research Institute | Robot learning techniques, coupled with diffusion models | Developing systems that can help older people continue to live independently | Robots that can learn and adapt to new tasks | Teaching systems through teleoperation | Remotely driving robot through demonstrations | Teleop device transmitting force between robot and person | Sight and force feedback to produce fuller picture of task | Force feedback | Flipping pancakes | Representing a robot visuomotor policy as conditional denoising diffusion process | Centrally accessible cloud-based system | Creating Large Behavior Models
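The "conditional denoising diffusion" idea above can be sketched as iterative refinement: an action starts as random noise and is repeatedly denoised, conditioned on the observation. The `denoiser` below is a hypothetical stand-in for the learned neural denoiser; real diffusion policies use a trained network and a proper noise schedule.

```python
import random

# Toy sketch of diffusion-policy inference: an action is refined from
# random noise by repeatedly applying a (hypothetical) learned denoiser
# conditioned on the observation.
def denoiser(action, observation):
    # Hypothetical "learned" model: pulls the noisy action toward a
    # target implied by the observation (here, simply the observation).
    return [a + 0.5 * (o - a) for a, o in zip(action, observation)]

def sample_action(observation, steps=50, seed=0):
    rng = random.Random(seed)
    action = [rng.gauss(0.0, 1.0) for _ in observation]  # pure noise
    for _ in range(steps):
        action = denoiser(action, observation)           # denoise step
    return action

obs = [0.3, -0.7]          # hypothetical observation features
act = sample_action(obs)
print([round(a, 3) for a in act])  # converges toward the conditioning signal
```

The key property this sketch shares with the real method is that the policy is a sampling process: actions are drawn by transforming noise, not emitted by a single forward pass.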
#ANYmal | Robot | Navigating as a wheeled quadruped | Standing upright on its hind legs, using its front wheels as makeshift hands | Trained to perform practical tasks | Multimodal platform designed for last-mile delivery and logistics | GPS, LiDAR, and cameras for independent navigation | Reinforcement learning approach known as curiosity-driven learning | High-level sparse rewards | Independently discerning how to complete the entire task from the beginning | Learning process finely attuned to slight alterations in the training environment | Potential for innovative task completion in intricate and dynamic scenarios
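Curiosity-driven learning, mentioned above, typically gives the agent an intrinsic reward equal to the prediction error of a forward model: surprising transitions are rewarded, and the reward fades as they become familiar, which drives exploration under sparse external rewards. The lookup-table forward model below is a hypothetical simplification of the neural networks used in practice.

```python
# Toy sketch of curiosity-driven learning: intrinsic reward is the
# forward model's prediction error ("how surprised am I by the next
# state?"). The dict-based forward model is illustrative only.
forward_model = {}  # (state, action) -> predicted next state

def intrinsic_reward(state, action, next_state, lr=0.5):
    predicted = forward_model.get((state, action), 0.0)
    error = abs(next_state - predicted)         # surprise = reward
    # Update the model toward what actually happened.
    forward_model[(state, action)] = predicted + lr * (next_state - predicted)
    return error

# Revisiting the same transition: reward decays as the model learns it.
rewards = [intrinsic_reward(0, 1, 10.0) for _ in range(4)]
print([round(r, 2) for r in rewards])  # [10.0, 5.0, 2.5, 1.25]
```

The decaying series is the point: familiar transitions stop paying, so the agent keeps seeking novelty even when the task's sparse reward is far away.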
#Sanctuary AI | Phoenix humanoid robot | General-purpose humanoid robot | Form factor similar to an average-sized human | Carbon AI control system | Human-like intelligence | Robotic hand | Hand-eye coordination of object manipulation tasks | Haptic technology that mimics the sense of touch | AI model training | General-purpose robotics
#SEA.AI App | Detecting floating objects early | Using thermal and optical cameras to catch unsignalled craft, floating obstacles, containers, buoys, inflatables, kayaks and persons overboard
#Anybotics | Workforce App | Operate ANYmal robot from device | Set up and review robot missions | Industrial Inspection
#OpenAI | Giving developers access to Stack Overflow's technical knowledge about coding
#IDS | Industrial image processing | 3D cameras | Digital twins can distinguish color | Higher reproducible Z-accuracy | Stereo cameras: 240 mm, 455 mm | RGB sensor | Distinguishing colored objects | Improved pattern contrast on objects at long distances | Z accuracy: 0.1 mm at 1 m object distance | SDK | AI-based image processing web service | AI-based image analysis
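The Z-accuracy figures above follow from the standard stereo triangulation relation Z = f·B/d (focal length in pixels, baseline, disparity), where depth error grows roughly quadratically with distance. The numbers below are illustrative, not IDS camera specifications.

```python
# Stereo depth from disparity: Z = f * B / d.
# All parameter values here are illustrative, not IDS specs.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# First-order depth error: dZ ~ Z^2 / (f * B) * dd, so accuracy
# degrades quadratically with object distance.
def depth_error(focal_px, baseline_m, depth_m, disparity_err_px=0.1):
    return depth_m ** 2 / (focal_px * baseline_m) * disparity_err_px

z = depth_from_disparity(focal_px=1400, baseline_m=0.24, disparity_px=336)
print(round(z, 3))                                   # depth in metres
print(round(depth_error(1400, 0.24, z) * 1000, 3))   # error in mm
```

This quadratic error growth is why longer baselines (e.g. a 455 mm variant versus 240 mm) improve Z-accuracy at range.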
#Flyability | Drones for industrial inspection and analysis | Confined space inspection | Collision- and crash-resistant inspection drone | 3D mapping | Volumetric measurement | Inspections of cargo ships, bridges, ports, steel mills, cement factories, liquid gas tanks, nuclear facilities, city-wide underground sewage systems | Ouster lidar
#Google DeepMind Technologies Limited | Creating advanced AI models and applications | Artificial intelligence systems ALOHA Unleashed and DemoStart | Helping robots perform complex tasks that require dexterous movement | Two-armed manipulation tasks | Simulations to improve real-world performance on multi-fingered robotic hand | Helping robots learn from human demonstrations | Translating images to action | High level of dexterity in bi-arm manipulation | Robot has two hands that can be teleoperated for training and data collection | Allowing robots to learn how to perform new tasks with fewer demonstrations | Collecting demonstration data by remotely operating robot behavior | Applying diffusion method | Predicting robot actions from random noise | Helping robot learn from data | Complemented by DemoStart | DemoStart is helping new robots acquire dexterous behaviors in simulation | Google collaborating with Shadow Robot
#Shadow Robot Company | Humanoid robotic hand | Mimics human hand functionality and dimensions | Featuring 24 joints and 20 degrees of freedom
#Tampere University | Pneumatic touchpad | Soft touchpad sensing force, area and location of contact without electricity | Device uses pneumatic channels embedded in it for detection | Made entirely of soft silicone | 32 channels that adapt to touch | Precise enough to recognize handwritten letters | Recognizes multiple simultaneous touches | Soft robots | Rehabilitation aids | Ideal for use in devices and environments such as MRI machines | If cancer tumours are found during an MRI scan, a pneumatic robot could take a biopsy while the patient is being scanned | Pneumatic device can be used in strong radiation or conditions where even a small spark of electricity would cause a serious hazard
#Neptune Labs | neptune.ai | Tracking foundation model training | Model training | Reproducing experiments | Rolling back to the last working stage of model | Transferring models across domains and teams | Monitoring parallel training jobs | Tracking jobs operating on different compute clusters | Rapidly identifying and resolving model training issues | Workflow set up to handle the most common model training scenarios | Tool to organize deep learning experiments
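The tracking workflow above (log hyperparameters and metric series per run so experiments can be compared, reproduced, and rolled back) can be sketched with a toy tracker. This class is NOT the neptune.ai API; it only illustrates the pattern such tools implement.

```python
import json

# Minimal sketch of the experiment-tracking pattern behind tools like
# neptune.ai. Illustrative only -- not the neptune.ai client API.
class Run:
    def __init__(self, name):
        self.name = name
        self.params = {}
        self.metrics = {}        # metric name -> list of logged values

    def log_param(self, key, value):
        self.params[key] = value

    def log_metric(self, key, value):
        self.metrics.setdefault(key, []).append(value)

    def snapshot(self):
        # Serializable record: enough to compare runs or roll back to
        # a known-good training stage.
        return json.dumps({"name": self.name, "params": self.params,
                           "metrics": self.metrics})

run = Run("finetune-baseline")   # hypothetical experiment name
run.log_param("lr", 3e-4)
for loss in [0.9, 0.5, 0.3]:
    run.log_metric("train/loss", loss)
print(run.snapshot())
```

Real trackers add what the toy omits: remote storage, code and environment capture, and dashboards for monitoring parallel jobs across compute clusters.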
#UCLA | AI model analyzing medical images of diseases | Deep-learning framework | SLice Integration by Vision Transformer (SLIViT) | Analyzing retinal scans, ultrasound video, CT, MRI | Identifying potential disease-risk biomarkers | Using novel pre-training and fine-tuning method | Relying on large, accessible public data sets | NVIDIA T4 GPUs, NVIDIA V100 Tensor Core GPUs, NVIDIA CUDA used to conduct research | SLIViT makes large-scale, accurate analysis realistic | Disease biomarkers help understand disease trajectory of patients | Tailoring treatment to patients based on biomarkers found through SLIViT | Model largely pre-trained on datasets of 2D scans | Fine-tuning model on 3D scans | Transfer-learned model identifies different disease biomarkers by fine-tuning on datasets of imagery from very different modalities and organs | Trained on 2D retinal scans and then fine-tuned on liver MRI | Helping the model with downstream learning even across different imagery domains
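The pre-train-then-fine-tune recipe above can be sketched numerically: a backbone "pre-trained" on one domain is frozen and reused, and only a small new head is trained on the new domain. All names and numbers below are illustrative; SLIViT's actual backbone is a vision transformer.

```python
# Toy sketch of transfer learning: reuse a frozen pre-trained
# backbone, train only a new head on the target domain.
# Illustrative stand-in, not the SLIViT architecture.
def extract_features(x, backbone_w):
    # Frozen backbone: a fixed learned transform of the input.
    return [backbone_w * v for v in x]

def train_head(features_and_labels, lr=0.01, epochs=200):
    """Fit a 1-parameter linear head on top of frozen features."""
    w = 0.0
    for _ in range(epochs):
        for f, y in features_and_labels:
            pred = w * f
            w -= lr * (pred - y) * f   # gradient step on squared error
    return w

backbone_w = 2.0   # "pre-trained" on the source domain (e.g. 2D scans)
# New target task (e.g. a liver-MRI biomarker): only the head trains.
data = [(extract_features([x], backbone_w)[0], 4.0 * x)
        for x in [1.0, 2.0, 3.0]]
head_w = train_head(data)
print(round(head_w, 3))   # head composes with the frozen backbone
```

The payoff is the one the notes describe: the expensive representation is learned once, and each new modality only needs a cheap fine-tuning pass.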
#Linux Foundation | LF AI & Data | Fostering open source innovation in artificial intelligence and data | Open Platform for Enterprise AI (OPEA) | Creating flexible, scalable Generative AI systems | Promoting sustainable ecosystem for open source AI solutions | Simplifying the deployment of generative AI (GenAI) systems | Standardization of Retrieval-Augmented Generation (RAG) | Supporting Linux development and open-source software projects | Linux kernel | Linus Torvalds
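The Retrieval-Augmented Generation (RAG) pattern that OPEA works to standardize can be sketched in a few lines: retrieve the documents most relevant to a query, then assemble them into the model prompt. Retrieval below is naive word overlap over a made-up corpus; production pipelines use vector embeddings, a vector store, and an LLM.

```python
# Minimal RAG sketch: retrieve relevant context, then build a prompt.
# Word-overlap scoring and the corpus are illustrative only.
def retrieve(query, docs, k=2):
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Linux kernel was created by Linus Torvalds.",
    "OPEA promotes open, composable GenAI pipelines.",
    "Penguins live in the Southern Hemisphere.",
]
print(build_prompt("Who created the Linux kernel?", corpus))
```

Standardizing the interfaces between these stages (retriever, reranker, prompt builder, generator) is precisely what makes such pipelines swappable across vendors.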
#UC Berkeley, CA, USA | Professor Trevor Darrell | Advancing machine intelligence | Methods for training vision models | Enabling robots to determine appropriate actions in novel situations | Approaches to make VLMs smaller and more efficient while retaining accuracy | How LLMs can be used as visual reasoning coordinators, overseeing the use of multiple task-specific models | Utilizing visual intelligence at home while preserving privacy | Focused on advancements in object detection, semantic segmentation and feature extraction techniques | Researched advanced unsupervised learning techniques and adaptive models | Researched cross-modal methods that integrate various data types | Advised SafelyYou, Nexar, SuperAnnotate, Pinterest, Tyzx, IQ Engines, Koozoo, BotSquare/Flutter, MetaMind, Trendage, Center Stage, KiwiBot, WaveOne, DeepScale, Grabango | Co-founder and President of Prompt AI
#Trossen Robotics | Pi Zero (π0) | Open-source vision-language-action model | Designed for general robotic control | Zero-shot learning | Dexterous manipulation | Aloha Kit | Single policy capable of controlling multiple types of robots without retraining | Generalist robotic learning | Pi Zero was trained on diverse robots | Pi Zero was transferred seamlessly to bimanual Aloha platform | Pi Zero executed actions in a zero-shot setting without additional fine-tuning | Pi Zero runs on standard computational resources | Hardware: 12th Gen Intel(R) Core(TM) i9-12950HX | NVIDIA RTX A4500 16G | RAM 64G | OS: Ubuntu 22.04 | Dependencies: PyTorch, CUDA, Docker | PaliGemma | Pre-trained Vision-Language Model (VLM) | PaliGemma allows Pi Zero to understand scenes and follow natural language instructions | Image Encoding: Vision Transformer (ViT) to process robot camera feeds | Text Encoding: Converts natural language commands into numerical representation | Fusion: Aligns image features and text embeddings, helping model determine which objects are relevant to task | Pi Zero learns smooth motion trajectories using Flow Matching | Pi Zero learns a velocity field to model how actions should evolve over time | Pi Zero generates entire sequences of movement | Pi Zero predicts multiple future actions in one go | Pi Zero executes actions in chunks | ROS Robot Arms | Aloha Solo package | Intel RealSense cameras | Compact tripod mount | Tripod overhead camera | Ubuntu 22.04 LTS
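Flow-matching inference, as described above, means integrating a learned velocity field from noise at t=0 to an action chunk at t=1. The closed-form `velocity` below is a hypothetical stand-in for the neural network Pi Zero learns; it corresponds to the conditional velocity of a straight-line interpolation path.

```python
import random

# Toy sketch of flow-matching inference: Euler-integrate a velocity
# field from Gaussian noise to an action. The analytic field below is
# a hypothetical stand-in for Pi Zero's learned network.
def velocity(x, t, target):
    # For the linear path x_t = (1-t)*noise + t*target, the
    # conditional velocity is (target - x) / (1 - t).
    return [(g - xi) / max(1.0 - t, 1e-3) for xi, g in zip(x, target)]

def sample_action_chunk(target, dim=3, steps=100, seed=0):
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(dim)]   # start from noise
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        v = velocity(x, t, target)
        x = [xi + dt * vi for xi, vi in zip(x, v)]  # Euler step
    return x

target = [0.1, -0.4, 0.25]   # hypothetical 3-DoF action
chunk = sample_action_chunk(target)
print([round(a, 2) for a in chunk])
```

Because the whole trajectory is integrated in one pass, the model naturally emits a full chunk of future actions at once, matching the "executes actions in chunks" behavior noted above.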