References:
[1] Zou, J., Yang, X., Qiu, R., Li, G., Tieu, K., Lu, P., ... & Yang, L. (2025). Latent collaboration in multi-agent systems. arXiv preprint arXiv:2511.20639.
[2] Zhou, H., Geng, H., Xue, X., Kang, L., Qin, Y., Wang, Z., ... & Bai, L. (2025). ReSo: A reward-driven self-organizing LLM-based multi-agent system for reasoning tasks. arXiv preprint arXiv:2503.02390.
[3] Jin, B., Zeng, H., Yue, Z., Yoon, J., Arik, S., Wang, D., ... & Han, J. (2025). Search-R1: Training LLMs to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516.
[4] Pezeshkpour, P., Kandogan, E., Bhutani, N., Rahman, S., Mitchell, T., & Hruschka, E. (2024). Reasoning capacity in multi-agent systems: Limitations, challenges and human-centered solutions. arXiv preprint arXiv:2402.01108.
[5] Kalinauskas, N. (2015, March 10). At the British Museum, oldest recorded customer-service complaint on display. Yahoo News. https://ca.news.yahoo.com/blogs/daily-buzz/at-the-british-museum-oldest-recorded-184633671.html
[6] Wu, Y., Rabe, M. N., Hutchins, D., & Szegedy, C. (2022). Memorizing transformers. arXiv preprint arXiv:2203.08913.
[7] Rein, D., Hou, B. L., Stickland, A. C., Petty, J., Pang, R. Y., Dirani, J., ... & Bowman, S. R. (2024, July). GPQA: A graduate-level Google-proof Q&A benchmark. In First Conference on Language Modeling.
[8] Zhang, Y., Sun, R., Chen, Y., Pfister, T., Zhang, R., & Arik, S. (2024). Chain of agents: Large language models collaborating on long-context tasks. Advances in Neural Information Processing Systems, 37, 132208-132237.