Probabilistic graphical models (PGMs) can efficiently represent the structure of complex data and processes by making conditional independences among random variables explicit. We use PGMs as generative structures that encode useful inductive biases, allowing rich integration into larger systems.
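As a minimal illustration of the conditional independences a PGM makes explicit, the sketch below builds a toy chain A → B → C with hand-picked (purely illustrative) binary conditional probability tables, and checks numerically that the graph structure implies P(C | A, B) = P(C | B), i.e. C is independent of A given B. All numbers and names here are hypothetical, not from any specific model of ours.

```python
from itertools import product

# Hypothetical chain A -> B -> C over binary variables (illustrative numbers).
pA = {0: 0.6, 1: 0.4}
pB_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # pB_given_A[a][b]
pC_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}  # pC_given_B[b][c]

def joint(a, b, c):
    # Factorized joint encoded by the graph: P(A) P(B|A) P(C|B)
    return pA[a] * pB_given_A[a][b] * pC_given_B[b][c]

def p_c_given_ab(a, b, c):
    # P(C=c | A=a, B=b), computed from the joint
    denom = sum(joint(a, b, cc) for cc in (0, 1))
    return joint(a, b, c) / denom

def p_c_given_b(b, c):
    # P(C=c | B=b), marginalizing A out of the joint
    num = sum(joint(aa, b, c) for aa in (0, 1))
    denom = sum(joint(aa, b, cc) for aa in (0, 1) for cc in (0, 1))
    return num / denom

# The graph implies C is conditionally independent of A given B:
for a, b, c in product((0, 1), repeat=3):
    assert abs(p_c_given_ab(a, b, c) - p_c_given_b(b, c)) < 1e-12
```

This locality is what makes inference in such models tractable: each factor only couples a variable to its parents, so the joint over three variables needs 2 + 4 + 4 table entries instead of 8.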
Deep learning is a representation-learning method with a special emphasis on compositionality. We use its capacity to learn complex functions, together with an end-to-end design philosophy, to solve complex structured problems.
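The compositionality mentioned above can be sketched as function composition: a deep network is a stack of simple parameterized layers whose composition yields a complex function, and end-to-end training adjusts all layers jointly against a single loss. The sketch below (shapes and values are arbitrary, for illustration only) shows the forward pass of such a composed network in plain NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(w, b):
    # One simple building block: an affine map followed by a ReLU nonlinearity
    return lambda x: np.maximum(0.0, x @ w + b)

# Two hypothetical layers composed into one network: network = f2 . f1
f1 = layer(rng.standard_normal((4, 8)), np.zeros(8))
f2 = layer(rng.standard_normal((8, 2)), np.zeros(2))
network = lambda x: f2(f1(x))

x = rng.standard_normal((3, 4))   # batch of 3 inputs with 4 features each
y = network(x)                    # composed forward pass
assert y.shape == (3, 2)
```

In an end-to-end setting, gradients of one loss on `y` would flow back through both layers, so the intermediate representation produced by `f1` is itself learned rather than hand-designed.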
What kinds of network connectivity or organization support integration, memory, and gating in the brain? We study such questions using the tools described above.