This recap covers the New Context Conference (NCC) held in November 2023, hosted by Digital Garage in partnership with DAL. We revisit key parts of the conference with a particular focus on probabilistic programming. For the full agenda, please visit the conference website.


The year 2023 is poised to be remembered as a watershed moment for AI. We witnessed a surge in the popularity of generative AI tools and applications, marking a period of unprecedented growth across both the proprietary and open-source AI sectors. Amidst this historic era of technological advancement, there has also been a growing conversation about its potential pitfalls. Some have voiced concerns about the pace of growth, suggesting that society is not quite ready for such rapid change. Yet, for better or worse, it is undeniable that 2023 will be etched in history as a potential turning point in the trajectory of human development. Capturing the energy of the moment, Digital Garage’s 26th NEW CONTEXT CONFERENCE (NCC), held on November 17th in San Francisco, focused on the theme ‘Building Useful and Trustworthy AI.’

We have previously discussed the importance of keeping an open mind regarding the future of AI. By examining historical trajectories, we argued that we are still in the early phases of AI development, a process characterized by a pendulum constantly swinging between different technologies. We have also envisioned a future where AI systems are not only intuitive but also logical, adept at assimilating vast datasets and explaining their decision-making processes.

What more can we do to build upon the success of Large Language Models (LLMs), and in what forms might these improvements manifest in the future? 

The 26th NCC served as a continuation of the discussion we’ve been fostering through this article series. The conference posed a number of intriguing questions, led by thought leaders like Joi Ito (Senior Managing Executive Officer and Chief at DG / President of Chiba Institute of Technology) and Vikash Mansinghka (Principal Research Scientist, Dept. of Brain and Cognitive Sciences, MIT), amongst many other distinguished speakers. Here’s the deep-dive.


Setting the Stage: The case for more refined AI

In the opening keynote, Joi Ito briefly highlighted the recent advancements in AI and the global trends in AI regulation. Echoing expert predictions, he presented a staggering forecast: AI’s power is expected to increase by 1,000 to 10,000 times in the next decade, fueled by a tenfold increase in investment, alongside more robust hardware and algorithms.

However, he stressed, more powerful doesn’t necessarily mean smarter or wiser.

While concerns in Silicon Valley predominantly revolve around safety issues, Joi Ito expressed apprehensions that extend beyond these, particularly in the following areas:

Recursive Self-Improvement:
Artificial General Intelligence (AGI) could excel in self-modification, reaching a point where it could deceive us – for instance, by recognizing when it’s being monitored and misleading us to prevent serious interventions. This potential for AGI to autonomously set its objectives, independent of human oversight, might lead to an explosive growth in intelligence, culminating in a form of superintelligence beyond our control.

Open Source Without Guardrails:
With open-source AI, it is technically feasible to remove the guardrails and act without restraint, leading to the argument that closed-source solutions are safer. However, the challenge lies in the fact that recursive self-improvement and autonomous goal-setting in AI are estimated to be only 2-3 years away. Given that open-source development typically lags by 18-24 months, this means open-source AI could reach similar capabilities in 3-4 years. This timeline leaves us with a narrow window to address these developments effectively.

Competition Without Cooperation:
While evolution has always been a dance of cooperation and competition, the current AI technological race is marked by sheer competitiveness, lacking the balance provided by cooperative elements. This imbalance could foster a toxic environment in the AI field.

Disconnect Between Private and Public Sectors:
2023 saw numerous initiatives aimed at fostering dialogue on regulation and collective action between the public and private sectors. Notable examples include the UK AI Safety Summit, the Frontier Model Forum, and the International Panel on AI Safety, among others. However, the private sector’s reluctance to engage with governing bodies often complicates coordination.


In contrast to the UK, where the Secretary of State for Science, Innovation and Technology maintains connections with various ministries, including Defense, Japan faces a unique challenge due to its lack of such inter-ministerial linkages: there is no designated entity in Japan responsible for discussing AI safety, or AI and cybersecurity. It is therefore crucial to continue these conversations and develop the most effective structure for collaboration, so that the private sector, academia, and the government can tackle these challenges together.


Need for a decentralized architecture

Following the opening keynote, each panel discussion at the conference converged on a compelling theme: the call for a more decentralized AI architecture. The consensus among the panelists highlighted the need to envisage a world where diverse forms of (artificial) intelligence not only coexist but also engage with one another. This perspective challenges the current focus on neural-network-driven LLMs and proposes a more pluralistic and interconnected AI ecosystem.


Probabilistic programming and neuro-symbolic AI

A key component in realizing this vision is probabilistic programming, as presented by Vikash Mansinghka, who has guided DAL in producing a few articles on this subject.

Dr. Mansinghka, who leads the MIT Probabilistic Computing Project, posits that robust world modeling is almost an evolutionary necessity. The core strength of probabilistic programming is its ability to integrate symbolic, neural, and probabilistic computation and to scale them in unison. This approach is pivotal for mirroring human thought and learning, because it allows a system to rationally accommodate uncertainty both in its data and in its interpretation of questions.

The panelists agreed that this neuro-symbolic mix already seems to be emerging: if LLMs are envisioned as analogous to CPUs, which are not particularly adept at persistent memory, then symbolic code could effectively serve as the memory and the surrounding software. (For more details about probabilistic programming, refer to our previous articles.)
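To make the pattern concrete, here is a minimal, self-contained sketch of probabilistic inference in plain Python. It is not Dr. Mansinghka’s system or any particular probabilistic programming language; it simply shows the core move such languages automate: define a generative model, then weight its samples by how well they explain observed data.

```python
import math
import random

# Minimal illustration of probabilistic inference (not any specific
# lab's API): infer a latent quantity from noisy observations using
# self-normalized importance sampling over prior samples.

def model_prior():
    """Sample a latent 'true value' from a broad Gaussian prior."""
    return random.gauss(0.0, 10.0)

def likelihood(latent, observation, noise_sd=1.0):
    """Density of one noisy observation given the latent value."""
    z = (observation - latent) / noise_sd
    return math.exp(-0.5 * z * z) / (noise_sd * math.sqrt(2 * math.pi))

def infer(observations, num_samples=10_000):
    """Weight prior samples by how well they explain the data, then
    report a posterior mean and a crude uncertainty estimate."""
    samples = [model_prior() for _ in range(num_samples)]
    weights = []
    for s in samples:
        w = 1.0
        for obs in observations:
            w *= likelihood(s, obs)
        weights.append(w)
    total = sum(weights)
    mean = sum(w * s for w, s in zip(weights, samples)) / total
    var = sum(w * (s - mean) ** 2 for w, s in zip(weights, samples)) / total
    return mean, math.sqrt(var)

if __name__ == "__main__":
    data = [2.9, 3.1, 3.4, 2.7]  # noisy readings of some quantity
    mean, sd = infer(data)
    print(f"posterior mean ~ {mean:.2f}, posterior sd ~ {sd:.2f}")
```

A real probabilistic programming system expresses the model as an ordinary-looking program and supplies scalable inference (including neural proposals) automatically, rather than the hand-rolled importance sampling above.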

So where might this probabilistic scaffolding first emerge in practice? To start with, probabilistic inference layered over LLMs can guide black-box processes away from harmful content and towards more beneficial outcomes. It could, for instance, help filter out toxicity in LLM-generated speech.
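As a hedged illustration of that filtering idea, the sketch below treats an LLM as a proposal distribution and reweights its candidate outputs by a safety score. `sample_from_llm` and `toxicity_score` are hypothetical stand-ins, not any real model or API; only the reweighting pattern is the point.

```python
import random

# Illustrative sketch: steer an LLM's outputs away from toxic text by
# resampling its candidates with weights that penalize toxicity.

def sample_from_llm(prompt, n):
    """Hypothetical stand-in for drawing n candidate completions."""
    canned = ["a polite reply", "a rude reply", "a neutral reply"]
    return [random.choice(canned) for _ in range(n)]

def toxicity_score(text):
    """Hypothetical toxicity estimate in [0, 1]; higher is worse."""
    return 0.9 if "rude" in text else 0.05

def steer(prompt, n=20):
    """Resample candidates with weight proportional to (1 - toxicity),
    so harmful completions rarely survive the reweighting step."""
    candidates = sample_from_llm(prompt, n)
    weights = [1.0 - toxicity_score(c) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

print(steer("Say something about my neighbor."))
```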

Probabilistic programming can also infuse more world knowledge into the processes that LLMs currently perform. According to Dr. Mansinghka, this parallels evolutionary development – humans first understood the world, then developed languages, whereas now, the process seems reversed. The initial applications of probabilistic programming over LLMs will aim to grasp the uncertainty in the meanings of natural language statements.
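One way to picture “grasping the uncertainty in meanings” is to keep a distribution over candidate symbolic readings of a sentence instead of committing to a single parse. The sketch below is purely illustrative: the readings, priors, and context-fit scores are invented for the example.

```python
from dataclasses import dataclass

# Illustrative sketch: represent the meaning of a natural language
# statement as a distribution over candidate symbolic readings.

@dataclass
class Interpretation:
    meaning: str   # a symbolic reading of the sentence
    prior: float   # plausibility of this reading before seeing context

def posterior(interpretations, context_fit):
    """Combine each reading's prior with how well it fits the context
    (a score in [0, 1]) and normalize into a probability distribution."""
    scores = [i.prior * context_fit(i.meaning) for i in interpretations]
    total = sum(scores)
    return {i.meaning: s / total for i, s in zip(interpretations, scores)}

# "She went to the bank" has at least two symbolic readings.
readings = [
    Interpretation("bank = financial institution", prior=0.7),
    Interpretation("bank = river edge", prior=0.3),
]

# A context about a fishing trip should shift mass to the river reading.
fishing_context = lambda meaning: 0.9 if "river" in meaning else 0.2
print(posterior(readings, fishing_context))
```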

Secondly, there’s a possible resurgence of the semantic web – an effort to create symbolic structures for knowledge sharing, which initially faltered due to difficulties in translating unstructured knowledge into symbolic forms. LLMs enhanced with symbolic/probabilistic inference could be highly effective in this domain. Therefore, companies seeking symbolic interpretations of texts in specific domains could gain significantly from this development.
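A speculative sketch of how that might look: an LLM proposes knowledge-graph triples from free text, and each triple carries a confidence score instead of being asserted outright. `llm_extract` is a hypothetical stand-in for a real extraction model.

```python
# Illustrative sketch of the "semantic web revival" idea: keep LLM-proposed
# triples as scored hypotheses rather than hard facts.

def llm_extract(text):
    """Hypothetical stand-in: (subject, relation, object, confidence)
    triples an LLM might propose for the given text."""
    return [
        ("aspirin", "treats", "headache", 0.95),
        ("aspirin", "interacts_with", "warfarin", 0.80),
        ("aspirin", "treats", "insomnia", 0.10),
    ]

def build_graph(text, threshold=0.5):
    """Accept triples whose confidence clears the threshold; keep the
    rest, with their scores, for downstream probabilistic queries."""
    triples = llm_extract(text)
    accepted = [(s, r, o) for s, r, o, c in triples if c >= threshold]
    uncertain = [t for t in triples if t[3] < threshold]
    return accepted, uncertain

graph, pending = build_graph("Aspirin is commonly used for headaches...")
print(graph)    # high-confidence symbolic facts
print(pending)  # low-confidence candidates retained with their scores
```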


Areas of practical applications

The discussions at the NCC gained depth with the introduction of emerging research fields, encouraging attendees to consider the future through the lens of practical applications.

In particular, the field of medicine, often considered a graveyard of machine learning innovations, was brought into focus by Karthik Dinakar, an Assistant Professor at the Icahn School of Medicine at Mount Sinai. Prof. Dinakar shared insights from his research on identifying biases in clinical care, with the complexity of clinical medicine as a central theme.

“Clinical trials typically focus on a very narrow population, which raises questions about their applicability worldwide. For instance, a doctor in Japan or an African country might question how a clinical trial conducted in the US applies to their patients. Many drugs developed from US trials are often too strong for different populations. To address this, integrating natural language interfaces for physicians, backed by symbolic AI, could offer more personalized insights and treatments for their patients. This approach could significantly bridge the gap between broad clinical trial data and individual patient care.”

Gaming also emerged as a fascinating area of discussion. Mizuki Oka (Associate Professor at Dept. of Computer Science, University of Tsukuba), who specializes in Artificial Life and the simulation of life-like behaviors, sparked a conversation about large-scale simulations with cognitively realistic agents. 

Prof. Oka posed the question: What if we could endow these simulated agents with realistic and symbolic mental experiences? This led the panelists to explore how a neuro-symbolic approach could enhance the gaming experience, making it more realistic and immersive. For example, imagine a game that allows players to role-play different identities, helping them explore aspects of themselves in virtual worlds. Current games with reinforcement learning agents tend to be narrow and self-centered, so there is an opportunity to develop AI agents capable of deeper, more cooperative interactions.


The role of Japanese philosophy in AI: balancing peace and harmony with progress

In their final discussion, the panelists also explored the unique influence Japanese philosophy could have on the creation of useful and trustworthy AI.

The cultural context in Japan, where there is a general aversion to competition as opposed to the competitive ethos in America, offers a unique perspective on the current state of AI development. While this might partially explain why Japanese startups sometimes lag behind their American counterparts, the Japanese notions of harmony and peace could also play a significant role in shaping AI’s future, especially for challenges that require a balance of cooperation and competition. 

As Stuart Russell pointed out, any AI agent that operates with absolute certainty and a singular notion of what is best could potentially lead to catastrophic outcomes. While this underscores the importance of designing AI systems capable of cooperation, it also suggests that the symbolic layer of AI can become crucial in translating philosophical concepts into practical applications. This ultimately leads us to important questions such as: How do we train AI systems to embrace diversity, seek novelty, and pursue happiness not through relentless growth but through harmony? These questions open the door to developing AI that is not just technologically advanced but also ethically and philosophically grounded.


A thought to carry forward

As we conclude our reflections on the 26th NCC, it’s worth reiterating that the future of AI should not be just about technological advancements but also about their meaningful integration into society. DAL remains committed to pioneering in the field of probabilistic programming, recognizing its potential to create more reliable, transparent, and beneficial AI systems. By exploring the interplay between technology and human values, we will continue shedding light on a future where AI not only enhances our capabilities but also aligns with our collective pursuit of a more harmonious and prosperous world. Stay tuned!




Joseph Park is the Content Lead at DAL (joseph@dalab.xyz)

Illustration: Satoshi Hashimoto
Edits: Janine Liberty