
PyRCA: Making Root Cause Analysis Easy in AIOps

TL;DR: PyRCA is an open-source machine learning library designed for Root Cause Analysis (RCA) in IT operations. It offers a comprehensive framework that lets users identify complicated causal dependencies among metrics and automatically locate the root causes of incidents. The library provides a unified interface

11 JUL 2023 • Chenghao Liu • #root cause analysis
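
To make the idea concrete, here is a minimal, generic sketch of graph-based root cause ranking over a metric dependency graph. It deliberately uses networkx rather than PyRCA's own interface, and the graph, metric names, and scoring rule are illustrative assumptions only, not the library's method.

```python
# Generic illustration of ranking root-cause candidates on a metric
# dependency graph; NOT PyRCA's API, just the underlying intuition.
import networkx as nx

# Hypothetical dependency graph: an edge A -> B means "A influences B".
graph = nx.DiGraph()
graph.add_edges_from([
    ("db_latency", "api_latency"),
    ("api_latency", "checkout_errors"),
    ("cache_miss_rate", "api_latency"),
])

# Metrics flagged as anomalous by an upstream detector (assumed input).
anomalous = {"db_latency", "api_latency", "checkout_errors"}

def root_cause_scores(g, anomalies):
    """Score each anomalous metric by how many other anomalies it can
    explain through its downstream (descendant) dependencies."""
    return {m: len(nx.descendants(g, m) & anomalies) for m in anomalies}

print(sorted(root_cause_scores(graph, anomalous).items(),
             key=lambda kv: kv[1], reverse=True))
# "db_latency" scores highest here, so it is the most plausible root cause.
```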

CodeGen2.5: Small, but mighty

Equal contribution between Erik Nijkamp and Hiroaki Hayashi. The family of Salesforce CodeGen models is growing with CodeGen2.5 – a small, but mighty model! While there has been a recent trend of large language models (LLM) of increasing size, we show that a small model can

06 JUL 2023 • Erik Nijkamp • #CodeGen
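
For readers who want to try it, the sketch below shows code completion through Hugging Face transformers; the checkpoint name "Salesforce/codegen25-7b-mono" and the trust_remote_code flag are assumptions taken from the public model card rather than from this post.

```python
# Minimal completion sketch with a CodeGen2.5 checkpoint via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Salesforce/codegen25-7b-mono"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```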

Toward Actionable Generative AI

LAMs: From Large Language Models to Large Action Models There’s no question that we’re living in the era of generative AI, and its impact is only growing. More and more, AI is helping us write emails, create imagery, consume information, and even code. But as empowering as it

27 JUN 2023 • Silvio Savarese •

A Leap Forward in 3D Understanding: The ULIP and ULIP-2

TL;DR: Imagine a world where machines comprehend 3D objects just as humans do. The ULIP (CVPR2023) and ULIP-2 projects, backed by Salesforce AI, are making this a reality by revolutionizing 3D understanding. ULIP uniquely pre-trains models with 3D point clouds, images, and texts, aligning them into a unified representation

23 MAY 2023 • Le Xue •

CodeT5+: Open Code Large Language Models

TL;DR: CodeT5+ is a new family of open code large language models (LLMs) with improved model architectures and training techniques. CodeT5+ achieves state-of-the-art performance among open-source LLMs on many challenging code intelligence tasks, including zero-shot evaluation on the code generation benchmark HumanEval. Background: Code LLMs Large language

20 MAY 2023 • Yue Wang • #codet5+
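
A minimal generation sketch with one of the smaller CodeT5+ checkpoints is shown below; the checkpoint name "Salesforce/codet5p-220m" and its T5-style encoder-decoder interface are assumptions based on the public model cards, not details from this post.

```python
# Span-infilling sketch with a small CodeT5+ checkpoint via transformers.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Salesforce/codet5p-220m"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Ask the model to fill in the masked span (<extra_id_0>).
code = "def print_hello_world():<extra_id_0>"
inputs = tokenizer(code, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```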

LogAI: A Library for Log Analytics and Intelligence

TL;DR LogAI is an open-source library designed for log analytics and intelligence. It can process raw logs generated by computer systems and support log analytics tasks such as log clustering and summarization, as well as log intelligence tasks such as log anomaly detection and root-cause analysis. LogAI is compatible

06 APR 2023 • Doyen Sahoo •

In Loving Memory of Dragomir Radev

The Salesforce AI Team is mourning the loss of our beloved friend and mentor, Dragomir Radev. Our team was first introduced to Drago in November 2018 when he gave a talk at our Research Speaker Series. His passion for research beamed through his talk and our leadership team unanimously decided

04 APR 2023 • Audrey Cook •

BotSIM: An End-to-End Automatic Evaluation Framework for Task-Oriented Dialog Systems

TL;DR: We present BotSIM, a data-efficient end-to-end Bot SIMulation toolkit for evaluation, diagnosis, and improvement of commercial task-oriented dialogue (TOD) systems. BotSIM's “generation-simulation-remediation” paradigm can accelerate the end-to-end bot evaluation and iteration process by: (1) reducing the effort needed to create test cases; (2) enabling a better understanding of

29 NOV 2022 • Guangsen Wang • #bot simulation

Salesforce AI Research at NeurIPS 2022

Conference Overview Next week, the Thirty-sixth annual Conference on Neural Information Processing Systems (NeurIPS) will be held in New Orleans, Louisiana from Monday, November 28th, through Friday, December 9th. NeurIPS will include invited talks, demonstrations, oral and poster presentations of accepted papers. Along with the conference is a professional exposition

22 NOV 2022 • Mia Ferrer •

WarpDrive v2 Release Supports Numba to Simplify Machine Learning Workloads and Make Building Simulations Easier on NVIDIA GPUs

TL;DR: Deep reinforcement learning (RL), a powerful learning framework to train AI agents, can be slow as it requires repeated interaction with a simulation of the environment. Our original WarpDrive accelerates multi-agent deep RL on NVIDIA GPUs, enabling 10-100x speedups compared to alternative CPU+GPU implementations of multi-agent simulations.

02 NOV 2022 • Tian Lan • #WarpDrive

DeepTime: Using Deep Time-Index Meta-Learning to Improve Non-Stationary Time-Series Forecasting

TL;DR: The performance of existing time-series forecasting methods can degrade due to non-stationarity, where the statistical distribution of time-series data changes over time. Our new DeepTime method overcomes non-stationarity issues by leveraging a “forecasting as meta-learning” framework on deep time-index models. DeepTime achieves competitive accuracy on the long-sequence time-series

13 OCT 2022 • Gerald Woo • #DeepTime
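
As background on what a "deep time-index model" is, the toy sketch below fits a small MLP that maps a normalized time coordinate to a value on the history and then queries it at future indices. This is a simplified illustration only, not DeepTime's implementation, which adds the "forecasting as meta-learning" procedure over lookback and horizon windows.

```python
# Toy deep time-index model: fit f(t) on observed (t, y) pairs, then
# extrapolate by evaluating f at time indices beyond the history.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

t = torch.linspace(0, 1, 100).unsqueeze(-1)          # time indices in [0, 1]
y = torch.sin(6.28 * t) + 0.1 * torch.randn_like(t)  # observed series

for _ in range(500):                                  # fit f(t) to y_t
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(t), y)
    loss.backward()
    optimizer.step()

future_t = torch.linspace(1.0, 1.2, 20).unsqueeze(-1)  # beyond the history
forecast = model(future_t)                             # the "forecast"
```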

Summer 2022 Salesforce Research Roundup

As we say a fond farewell to summer (bummer!), let's look back and review some of the stellar work reported on by Salesforce AI researchers during the past few months. (For more details, we encourage you to click the link for each project to read the full blog post.)

30 SEP 2022 • Donald Rose • #Summer 2022

Meet LAVIS: A One-stop Library for Language-Vision AI Research and Applications

TL;DR: LAVIS (short for LAnguage-VISion) is an open-source deep learning library for language-vision research and applications, offering comprehensive support for a wide range of tasks, datasets, and state-of-the-art models. Featuring a unified interface and modular design, it’s easy to use off-the-shelf and to extend with new capabilities. With

20 SEP 2022 • Dongxu Li • #LAVIS
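
A minimal captioning sketch using LAVIS's unified loading interface is shown below; the model name "blip_caption", the model type "base_coco", and the example image path are assumptions drawn from the library's examples rather than from this post.

```python
# Image captioning sketch with LAVIS's unified model-loading interface.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = "cuda" if torch.cuda.is_available() else "cpu"
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)

raw_image = Image.open("example.jpg").convert("RGB")   # assumed local image
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
print(model.generate({"image": image}))                # list of captions
```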

ETSformer: Exponential Smoothing Transformers for Time-Series Forecasting

TL;DR: We developed a new time-series forecasting model called ETSformer that leverages the power of two frameworks. By combining the classical intuition of seasonal-trend decomposition and exponential smoothing with modern transformers – as well as introducing novel exponential smoothing and frequency attention mechanisms – ETSformer achieves state-of-the-art performance. Background Before diving

23 AUG 2022 • Gerald Woo • #ETSformer
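
For readers unfamiliar with the classical building block, simple exponential smoothing forms each level estimate as an exponentially decaying weighted average of past observations; roughly, ETSformer's exponential smoothing attention carries the same decaying-weight intuition into a transformer attention layer.

```latex
% Simple exponential smoothing with smoothing parameter \alpha \in (0, 1):
s_t = \alpha x_t + (1 - \alpha)\, s_{t-1}
    = \alpha \sum_{j=0}^{t-1} (1 - \alpha)^j x_{t-j} + (1 - \alpha)^t s_0
```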

AI for Global Climate Cooperation: Salesforce Research and Mila Announce Climate Change Collaboration and Competition

TL;DR:  Salesforce Research and Mila announce AI for Global Climate Cooperation, a working group collaboration and competition to design negotiation protocols and climate agreements. We plan to coauthor a peer-reviewed scientific paper with top-performing teams; insights will be distilled into a policy brief shared with leading policymakers, informing future

05 AUG 2022 • Stephan Zheng • #AI for Global Climate Cooperation

AI Coding with CodeRL: Toward Mastering Program Synthesis with Deep Reinforcement Learning

TL;DR: CodeRL is a new framework for program synthesis through holistic integration of pretrained language models and deep reinforcement learning. By utilizing unit test feedback as part of model training and inference, and integrating with an improved CodeT5 model, CodeRL achieves state-of-the-art results on competition-level programming tasks. The following

19 JUL 2022 • Henry Hung Le • #reinforcement-learning

Salesforce Research at ICML 2022

Conference Overview This weekend will kick off the thirty-ninth International Conference on Machine Learning (ICML). This conference specifically aims to bring together professionals who are dedicated to the advancement of Machine Learning (ML) in Artificial Intelligence. Participants at ICML come from many different backgrounds, including academic and industrial researchers, entrepreneurs

17 JUL 2022 • Mia Ferrer • #conferences

Salesforce Research at NAACL 2022

Conference Overview This weekend marks the start of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). NAACL provides a regional focus for members of the Association for Computational Linguistics (ACL) in North America. NAACL organizes annual conferences, promotes cooperation and information exchange among

10 JUL 2022 • Mia Ferrer • #NAACL 2022

Salesforce Research at CVPR 2022

Conference Overview The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR) is the annual conference on Computer Vision. CVPR comprises the main conference as well as workshops and other courses, providing a unique learning experience and networking opportunities in the field of Computer Vision. CVPR

20 JUN 2022 • Mia Ferrer • #computer vision

TaiChi: Open Source Library for Few-Shot NLP

AUTHORS: Sharvin Shah, Jin Qu, Donald Rose TL;DR: TaiChi is an open source library for few-shot NLP, designed for data scientists and software engineers who want to get some quick results or build proof-of-concept products but don’t have much experience with few-shot learning (FSL). The library abstracts complex

15 JUN 2022 • Jin Qu • #NLP

Turbocharge Multi-Agent Reinforcement Learning with WarpDrive and PyTorch Lightning

TL;DR: WarpDrive is a flexible, lightweight, easy-to-use end-to-end reinforcement learning (RL) framework that enables orders-of-magnitude faster training on a single GPU. PyTorch Lightning enables you to modularize experimental code and build production-ready workloads fast. Together, they can help significantly accelerate multi-agent RL R&D. Reinforcement Learning: Agents Learn by Maximizing

20 MAY 2022 • Sunil Srinivasa • #WarpDrive

Salesforce Research at ACL 2022

Conference Overview This year marks the 60th annual meeting of the Association for Computational Linguistics Conference (ACL [https://www.2022.aclweb.org/]). ACL is the premier international scientific and professional society for people working on computational problems involving human language, a field often referred to as either computational linguistics or

19 MAY 2022 • Mia Ferrer • #NLP

Science Advances Publishes AI Economist Research on Improving Tax Policies With Reinforcement Learning

TL;DR: The AI Economist, a reinforcement learning (RL) system, learns dynamic tax policies that optimize equality along with productivity in simulated economies, outperforming alternative tax systems. We have now expanded this research, which is being published in the interdisciplinary scientific journal Science Advances. Humans or AI: Which Can Design

05 MAY 2022 • Stephan Zheng • #AI Economist

Salesforce Research at ICLR 2022

Conference Overview This year marks the Tenth International Conference on Learning Representations (ICLR [https://iclr.cc/Conferences/2022]), one of the premier academic conferences dedicated to advancing research in representation learning - a type of machine learning also referred to as feature learning or deep learning. ICLR features the latest

25 APR 2022 • Mia Ferrer • #ICLR

Conversational AI Programming with CodeGen: Let AI Write Code For You

Links: Research Paper [https://arxiv.org/abs/2203.13474], Github [https://github.com/salesforce/CodeGen] Can you imagine a machine writing an app for you, just by telling it what you want? As futuristic as this scenario sounds, it’s actually here today. Salesforce AI Research outlines conversational AI