Add to Calendar
2026-01-14 11:00:00
2026-01-14 15:30:00
America/New_York
IP Boot Camp—Build Your Fortress, Plot Your Exit, from CSAIL Alliances and VMS
Join this highly anticipated collaborative event between CSAIL Alliances and the Venture Mentoring Service: the MIT innovation community's essential boot camp on intellectual property, now featuring a critical new perspective on corporate engagement, acquisition, and licensing! This is more than a legal lecture; it's a high-level masterclass for every startup founder, seasoned inventor, and mentor who wants to maximize the commercial potential of their work.

https://cap.csail.mit.edu/members/events/ip-boot-camp-build-your-fortress-plot-your-exit-csail-alliances-and-vms
TBD
Events
January 13, 2026
No events scheduled
January 14, 2026
HCI Seminar - Siddhartha Prasad - Lightweight Diagramming by Spatial Specification
Siddhartha Prasad
Brown University
Add to Calendar
2026-01-14 16:00:00
2026-01-14 17:00:00
America/New_York
HCI Seminar - Siddhartha Prasad - Lightweight Diagramming by Spatial Specification
Abstract: Formal modeling tools such as Alloy enable users to incrementally define, explore, verify, and diagnose specifications for complex systems. A critical component of these tools is a visualizer that lets users graphically explore generated models. However, a default visualizer that knows nothing about the domain can be unhelpful and can even actively violate presentational and cognitive principles. At the other extreme, full-blown custom visualization requires significant effort as well as knowledge that a tool user might not possess. Custom visualizations can also exhibit bad (even silent) failures. The same needs and demands apply to programming languages, which are virtually never accompanied by data structure visualizers.

We chart a middle ground between the extremes of default and fully customizable visualization. We capture essential domain information for lightweight diagramming. To identify key elements of these diagrams, we ground the design in both cognitive science and in a corpus of custom visualizations. We distill from these sources a small set of orthogonal primitives, and use the primitives to guide a diagramming language.

We show how to endow the diagramming language with a spatial semantics and prove that it enjoys key properties. We also show how it can be embedded into three very different languages: Python, Rust, and Pyret. We present a novel counterfactual debugging aid for diagramming errors, combining textual and visual output. We evaluate the language and system for expressiveness, performance, and diagnostic quality. We thus define a new point in the design space of diagramming: through a language that is lightweight, effective, and driven by cognitively sound principles.

Bio: I am a PhD student in Computer Science at Brown University. My research takes a programming-languages approach to improving how people express intent and reason about program behavior, drawing on ideas from formal methods, human–computer interaction, and cognitive science. I am especially interested in how models of human cognition can inform the design of languages, semantics, and interactive tools for understanding complex computational structures.

Previously, I was a software engineer at Microsoft, where I worked on both Windows and Azure. My research interests are informed by my time as an engineer: I have written code that doesn't do what I want it to, and I want to spare everyone else the indignity.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/91952304653.
TBD
January 24, 2026
Add to Calendar
2026-01-24 11:00:00
2026-01-24 20:00:00
America/New_York
Snapdragon Multiverse Hackathon
Enter the Snapdragon Multiverse at MIT! This event is organized by CSAIL Alliances member Qualcomm and hosted in the Stata Center during IAP 2026. Registration is open to anyone in the MIT community, including MIT alumni. Please review the complete event package and rules from Qualcomm for all necessary information before registering.

About

Join us January 24–25, 2026, for an immersive hackathon where innovation meets connectivity. Teams of 3–5 will dive into the future of multi-device communication, building experiences that redefine how devices work together.

Every team will receive a Copilot+ PC powered by Snapdragon® X Series processors as the central control hub and a Samsung Galaxy S25 featuring Snapdragon 8 Elite. Want to go further? Bring your own Snapdragon-powered devices or microcontrollers to craft seamless, intelligent cross-platform solutions.

From syncing sensors to orchestrating edge workflows to designing multi-screen interactions, this is your chance to prototype the next generation of connected computing within the Snapdragon ecosystem.

Form your team, pick a track, and start creating. Each challenge is designed to showcase Snapdragon's power across PCs, phones, and microcontrollers: your playground for limitless innovation.

Expect hands-on support, networking, marketing, and amplification opportunities, as well as prizes for the winning teams. Don't worry, there will be swag for everyone! Note that winners will be selected overall, not per track.

Let's come together and redefine what multi-device collaboration looks like, powered by Snapdragon.

Registration Information

One representative of the team should fill out the initial registration with full team details (3–5 people).
Registration is open until January 9th, so secure your spot today!
Each team member must agree to the rules and regulations.
Only one project proposal submission per person is allowed.
Tracks

Track 1: Real-time CV Assistant
Platforms: Compute, Mobile device
AI Branch: Computer Vision, Edge AI
Objective: Develop an application performing real-time computer vision analysis (object tracking, scene understanding, anomaly detection, or gesture recognition) optimized for on-device inference, optionally enhanced by cloud augmentation.
Track 1 Example: A license-plate detection app that tracks cars entering and leaving a parking lot.

Track 2: Conversational AI Companion
Platforms: Compute, Android, Cloud, or Wearable
AI Branch: Natural Language Processing, Generative AI
Objective: Build an interactive conversational AI application capable of voice- or text-based interaction, delivering real-time, contextually aware responses for tasks like wellness coaching, tutoring, coding assistance, or creative storytelling.
Track 2 Example: An AI dungeon master that creates and narrates Dungeons & Dragons adventures.

Track 3: RL Agent Arena
Platforms: Compute, Cloud, Microcontroller
AI Branch: Reinforcement Learning, Simulation
Objective: Build an interactive environment representing a real-world scenario and train an agent or team of agents to solve it. Bots should have access to the environment state at a minimum, but models may use AI tools or data-augmentation techniques to improve their performance above the baseline environmental inputs.
Track 3 Example: Build a bot that dynamically controls traffic lights at an intersection in a simulated environment.

Snapdragon and Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries.

Timeline

Registration / submission period: December 10 – January 9 (11:59pm)
Proposal review period: January 10 – January 13
Shortlisting announcement: January 14
Pre-hackathon workshop / FAQ session: January 16 or 19
Application submission period: January 24 (1:00pm) – January 25 (1:00pm)
Application judging period: January 25 (1:01pm – 5:00pm)
Winner announcement: January 25 (on or around 5:00pm)
TBD
February 10, 2026
Visual Computing Seminar: TBA
Chris Scarvelis
CSAIL
Add to Calendar
2026-02-10 12:00:00
2026-02-10 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
February 17, 2026
Visual Computing Seminar: Learning a distance measure from the information-estimation geometry of data
Flatiron Institute
Add to Calendar
2026-02-17 12:00:00
2026-02-17 13:00:00
America/New_York
Visual Computing Seminar: Learning a distance measure from the information-estimation geometry of data
Abstract: The perceptual distance between images is widely believed to be related to the distribution of natural images. But how can a probability distribution give rise to a distance measure, let alone one that aligns with human perception? What properties should such a distance satisfy, and how can it be learned from an image database in an unsupervised manner? In this talk, I will address these questions by presenting the Information–Estimation Metric (IEM), a novel form of distance function derived from a given probability density over a domain of signals. The IEM is rooted in a fundamental relationship between information theory and estimation theory, which links the log-probability of a signal with the errors of an optimal denoiser applied to noisy observations of the signal. For Gaussian-distributed signals, the IEM coincides with the Mahalanobis distance. But for more complex distributions, it adapts, both locally and globally, to the geometry of the distribution. I will discuss and illustrate the theoretical properties of the IEM, including its global and local behavior. Finally, I will demonstrate that the IEM effectively predicts human perceptual judgments when trained (unsupervised) on natural images.

Bio: Guy is a postdoctoral researcher working with Eero Simoncelli at the Flatiron Institute. His research focuses on developing computational models of human perception that are grounded in principles from information theory. He received his PhD in Computer Science from the Technion–Israel Institute of Technology, where he worked with Michael Elad and Tomer Michaeli on the design and theoretical analysis of image restoration and compression methods that rely on generative models.
TBD
February 19, 2026
Human-Machine Partnerships in Computer-Integrated Interventional Medicine: Yesterday, Today, and Tomorrow
Russell H. Taylor
Johns Hopkins University, Baltimore, MD
Add to Calendar
2026-02-19 11:00:00
2026-02-19 12:00:00
America/New_York
Human-Machine Partnerships in Computer-Integrated Interventional Medicine: Yesterday, Today, and Tomorrow
This talk will discuss insights gathered over 35 years of research on medical robotics and computer-integrated interventional medicine (CIIM), both at IBM and at Johns Hopkins University. The goal of this research has been the creation of a three-way partnership between physicians, technology, and information to improve treatment processes. CIIM systems combine innovative algorithms, robotic devices, imaging systems, sensors, and human-machine interfaces to work cooperatively with surgeons in the planning and execution of surgery and other interventional procedures. For individual patients, CIIM systems can enable less invasive, safer, and more cost-effective treatments. Since these systems have the ability to act as "flight data recorders" in the operating room, they can enable the use of statistical methods to improve treatment processes for future patients and to promote physician training. We will illustrate these themes with examples from our past and current work, with special attention to the human-machine partnership aspects, and will offer some thoughts about future research opportunities and system evolution.
TBD
February 24, 2026
Add to Calendar
2026-02-24 12:00:00
2026-02-24 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
March 3, 2026
Add to Calendar
2026-03-03 12:00:00
2026-03-03 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
March 17, 2026
Add to Calendar
2026-03-17 12:00:00
2026-03-17 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
TBA
UCSD
Add to Calendar
2026-03-17 16:15:00
2026-03-17 17:15:00
America/New_York
TBA
TBA
TBD
March 31, 2026
Add to Calendar
2026-03-31 12:00:00
2026-03-31 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
April 7, 2026
Add to Calendar
2026-04-07 12:00:00
2026-04-07 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
April 14, 2026
Add to Calendar
2026-04-14 12:00:00
2026-04-14 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
April 21, 2026
Add to Calendar
2026-04-21 12:00:00
2026-04-21 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
April 28, 2026
Add to Calendar
2026-04-28 12:00:00
2026-04-28 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
May 5, 2026
Add to Calendar
2026-05-05 12:00:00
2026-05-05 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD
May 12, 2026
Add to Calendar
2026-05-12 12:00:00
2026-05-12 13:00:00
America/New_York
Visual Computing Seminar: TBA
Abstract: TBA
TBD