Given a graph, we know its edges as well as the content of each node. If the graph is a social network, the edges represent friendship between users, and the node content is the images or text that users post on the platform. The same kind of graph can also capture the connections among papers through their titles, abstracts, and citations. The task is to learn node representations that capture both the content information and the graph structure. Our solution, built on the Bayesian deep learning framework, is a relational probabilistic autoencoder: the deep module handles the content of each node, since deep learning excels at high-dimensional data, while the graph module models the relations between nodes, such as citation networks or the complex relations in a knowledge graph (a simplified sketch of this content-plus-graph idea appears below).

In healthcare we focus on health monitoring. The scenario is as follows: a small radar device at home emits signals, and the model should use the signals reflected off the patient to determine whether the medication is taken on time and whether the steps of administration are carried out in the correct order. The difficulty is that medication routines are complicated, and the order of the steps has to be sorted out. Within the same probabilistic Bayesian deep learning framework, the deep module processes the very high-dimensional signal data, while the graph module encodes domain-specific medical knowledge.

It is worth noting that even when the same model is used for different applications, its parameters can be learned in different ways, for example by MAP estimation or by Bayesian methods that directly learn parameter distributions. For a deep neural network, once the parameter distributions are available, many things become possible, such as estimating the uncertainty of its predictions (see the sampling sketch below). Moreover, with parameter distributions the predictions remain robust even when data are scarce, and the model itself becomes more powerful, since a Bayesian model is effectively an ensemble of infinitely many sampled models.

Below is a lightweight Bayesian learning method that can be applied to any deep learning model or deep neural network. The goals are clear: the method should be efficient, trainable by back-propagation, free of sampling, and intuitively interpretable. The key idea is to treat the neurons and parameters of the network as distributions, rather than as points or vectors in a high-dimensional space, while still allowing forward and backward propagation during training. Because the distributions are represented by their natural parameters, the method is named NPN (natural-parameter networks).
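The sketch below is not the relational probabilistic autoencoder itself but a heavily simplified, deterministic illustration of the two-module idea: a plain autoencoder (deep module) learns a latent code for each node's content, and a graph Laplacian penalty (graph module) pulls the codes of linked nodes together. All sizes, names, and hyperparameters are made up for illustration.

```python
# Deep module + graph module, deterministic toy version.
import numpy as np

rng = np.random.default_rng(0)

N, D, H = 6, 20, 4                      # nodes, content dim, latent dim
X = rng.normal(size=(N, D))             # node content (e.g. bag-of-words)
edges = [(0, 1), (1, 2), (3, 4)]        # known graph edges

# Graph Laplacian L = Deg - Adj, so trace(Z^T L Z) = sum_{(i,j)} ||z_i - z_j||^2
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A

W1 = rng.normal(scale=0.1, size=(D, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, D)); b2 = np.zeros(D)
lam, lr = 0.1, 0.05

for step in range(500):
    Z = np.tanh(X @ W1 + b1)            # deep module: encode node content
    Xhat = Z @ W2 + b2                  # decode / reconstruct content
    recon = np.mean((Xhat - X) ** 2)
    graph = lam * np.trace(Z.T @ L @ Z) # graph module: smoothness over edges

    # Manual gradients of recon + graph w.r.t. the parameters
    dXhat = 2 * (Xhat - X) / X.size
    dW2, db2 = Z.T @ dXhat, dXhat.sum(0)
    dZ = dXhat @ W2.T + 2 * lam * (L @ Z)
    dA1 = dZ * (1 - Z ** 2)             # tanh derivative
    dW1, db1 = X.T @ dA1, dA1.sum(0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", recon + graph)
```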
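As a concrete illustration of why parameter distributions enable uncertainty estimates, the toy snippet below assumes a one-layer regressor whose weights already have a (hypothetical) Gaussian posterior; Monte Carlo sampling of weight sets then turns the spread of the outputs into a predictive uncertainty. This is a generic sampling-based estimate, not the sampling-free method described next.

```python
# Predictive uncertainty via Monte Carlo sampling of a weight posterior.
import numpy as np

rng = np.random.default_rng(1)

D = 3
W_mean, W_std = rng.normal(size=D), 0.3 * np.ones(D)   # assumed weight posterior
x = np.array([0.5, -1.0, 2.0])                          # one test input

samples = []
for _ in range(200):                    # each draw is one model from the posterior
    W = rng.normal(W_mean, W_std)
    samples.append(np.tanh(x @ W))      # forward pass with sampled weights

samples = np.array(samples)
print("predictive mean:", samples.mean())
print("predictive std (uncertainty):", samples.std())
```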
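To make the NPN idea concrete, here is a minimal sketch of a single Gaussian linear layer that propagates a distribution instead of a point: inputs, weights, and biases each carry a mean and a variance, and the layer outputs the moment-matched mean and variance of x @ W + b. The paper itself parameterizes these Gaussians by their natural parameters; the mean/variance form here is an equivalent simplification, nonlinearities and back-propagation are omitted, and all names and numbers are illustrative.

```python
# One moment-matched "distribution in, distribution out" linear layer.
import numpy as np

rng = np.random.default_rng(2)

def npn_linear(x_mean, x_var, w_mean, w_var, b_mean, b_var):
    """Mean and variance of z = x @ W + b when inputs, weights, and biases
    are independent Gaussians (all variances element-wise)."""
    o_mean = x_mean @ w_mean + b_mean
    o_var = (x_var @ w_var
             + x_var @ (w_mean ** 2)
             + (x_mean ** 2) @ w_var
             + b_var)
    return o_mean, o_var

D_in, D_out = 4, 2
x_mean, x_var = rng.normal(size=D_in), 0.1 * np.ones(D_in)
w_mean, w_var = rng.normal(size=(D_in, D_out)), 0.05 * np.ones((D_in, D_out))
b_mean, b_var = np.zeros(D_out), 0.01 * np.ones(D_out)

o_mean, o_var = npn_linear(x_mean, x_var, w_mean, w_var, b_mean, b_var)
print("output mean:", o_mean, "output variance:", o_var)
```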
References:
• A survey on Bayesian deep learning. Hao Wang, Dit-Yan Yeung. ACM Computing Surveys (CSUR), 2020.
• Towards Bayesian deep learning: a framework and some existing methods. Hao Wang, Dit-Yan Yeung. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2016.
• Collaborative deep learning for recommender systems. Hao Wang, Naiyan Wang, Dit-Yan Yeung. Twenty-First ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2015.
• Collaborative recurrent autoencoder: recommend while learning to fill in the blanks. Hao Wang, Xingjian Shi, Dit-Yan Yeung. Thirtieth Annual Conference on Neural Information Processing Systems (NIPS), 2016.
• Natural parameter networks: a class of probabilistic neural networks. Hao Wang, Xingjian Shi, Dit-Yan Yeung. Thirtieth Annual Conference on Neural Information Processing Systems (NIPS), 2016.
• Relational stacked denoising autoencoder for tag recommendation. Hao Wang, Xingjian Shi, Dit-Yan Yeung. Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), 2015.
• Relational deep learning: A deep latent variable model for link prediction. Hao Wang, Xingjian Shi, Dit-Yan Yeung. Thirty-First AAAI Conference on Artificial Intelligence (AAAI), 2017.
• Bidirectional inference networks: A class of deep Bayesian networks for health profiling. Hao Wang, Chengzhi Mao, Hao He, Mingmin Zhao, Tommi S. Jaakkola, Dina Katabi. Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), 2019.
• Deep learning for precipitation nowcasting: A benchmark and a new model. Xingjian Shi, Zhihan Gao, Leonard Lausen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, Wang-chun Woo. Thirty-First Annual Conference on Neural Information Processing Systems (NIPS), 2017.
• Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, Wang-chun Woo. Twenty-Ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015.
• Continuously indexed domain adaptation. Hao Wang*, Hao He*, Dina Katabi. Thirty-Seventh International Conference on Machine Learning (ICML), 2020.
• Deep graph random process for relational-thinking-based speech recognition. Hengguan Huang, Fuzhao Xue, Hao Wang, Ye Wang. Thirty-Seventh International Conference on Machine Learning (ICML), 2020.
• STRODE: Stochastic boundary ordinary differential equation. Hengguan Huang, Hongfu Liu, Hao Wang, Chang Xiao, Ye Wang. Thirty-Eighth International Conference on Machine Learning (ICML), 2021.
• Delving into deep imbalanced regression. Yuzhe Yang, Kaiwen Zha, Yingcong Chen, Hao Wang, Dina Katabi. Thirty-Eighth International Conference on Machine Learning (ICML), 2021.
• Adversarial attacks are reversible with natural supervision. Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, Carl Vondrick. International Conference on Computer Vision (ICCV), 2021.
• Assessment of medication self-administration using artificial intelligence. Mingmin Zhao*, Kreshnik Hoti*, Hao Wang, Aniruddh Raghu, Dina Katabi. Nature Medicine, 2021.