A Novel Approach to Analyzing Complex Biomedical Data
Nima Chamyani
Supervisor: Wesley Schaal
Thank you all for listening
Maximizing the mutual information
Capturing domain knowledge by learning the regularities of the node/edge attributes distributed over the graph structure
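As a minimal sketch of how a mutual-information objective over a graph can look in practice, the snippet below follows a Deep-Graph-Infomax-style setup: a discriminator scores (node embedding, graph summary) pairs, and a binary cross-entropy loss over real versus corrupted pairs acts as a lower bound on the mutual information. All names here (`MIDiscriminator`, `mi_loss`) are hypothetical illustrations, not the exact objective used in the project.

```python
# Hypothetical sketch of a mutual-information lower bound over graph embeddings.
import torch
import torch.nn as nn

class MIDiscriminator(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)  # scores (node, summary) pairs

    def forward(self, node_emb, summary):
        # node_emb: [N, dim] node embeddings; summary: [dim] pooled graph embedding
        return self.bilinear(node_emb, summary.expand_as(node_emb)).squeeze(-1)

def mi_loss(disc, pos_emb, neg_emb, summary):
    # Positive pairs come from the real graph, negative pairs from a corrupted one;
    # maximizing this classification objective maximizes an MI lower bound.
    pos = disc(pos_emb, summary)
    neg = disc(neg_emb, summary)
    logits = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)
```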
Graph Convolutional Network (GCN) as a Policy Network: A policy network, in reinforcement learning terms, dictates the action to be taken at each step. Here, the actions are the addition of new nodes and edges to the graph being generated.
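A minimal sketch of this idea, assuming a simplified fixed action space (the real GCPN scores node pairs and edge types rather than a flat set of actions): a two-layer GCN embeds the partial graph, a readout pools it, and a linear head produces logits over the candidate actions for the next generation step.

```python
# Simplified sketch of a GCN used as a policy network (not the exact GCPN architecture).
import torch
import torch.nn as nn

class GCNPolicy(nn.Module):
    def __init__(self, in_dim, hid_dim, n_actions):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)
        self.head = nn.Linear(hid_dim, n_actions)  # logits over candidate actions

    def forward(self, x, adj):
        # x: [N, in_dim] node features; adj: [N, N] normalized adjacency with self-loops
        h = torch.relu(adj @ self.w1(x))   # first round of neighborhood aggregation
        h = torch.relu(adj @ self.w2(h))   # second GCN layer
        g = h.mean(dim=0)                  # readout: pooled representation of the partial graph
        return self.head(g)                # action logits for this generation step
```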
Reward Function: This function scores potential actions (i.e., adding a new node or edge) based on the quality of the resulting graph, for example whether it remains valid and how well the finished graph scores on the target property.
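A hypothetical reward in this spirit, for the molecular-graph case: intermediate steps earn a small reward for staying chemically valid, invalid graphs are penalized, and the terminal step is scored with a property such as drug-likeness (QED via RDKit). The exact weights and property are assumptions for illustration.

```python
# Hypothetical GCPN-style reward: validity at each step, property score at the end.
from rdkit import Chem
from rdkit.Chem import QED

def step_reward(smiles, final_step=False):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                # invalid intermediate/final graph is penalized
        return -1.0
    if not final_step:
        return 0.1                 # small positive reward for staying chemically valid
    return QED.qed(mol)            # terminal reward: drug-likeness of the finished molecule
```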
Optimization with Policy Gradient: GCPN uses a technique called policy gradient to optimize the policy network. The idea is to increase the probability of actions that lead to higher rewards. This is done by iteratively updating the policy network's parameters to maximize the expected cumulative reward.
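To make the update concrete, here is a plain REINFORCE-style sketch rather than the exact algorithm used by GCPN: per-step log-probabilities are weighted by the discounted return-to-go, so actions followed by high cumulative reward become more likely.

```python
# Minimal REINFORCE-style policy-gradient update (illustrative, not the exact GCPN optimizer).
import torch

def policy_gradient_step(optimizer, log_probs, rewards, gamma=0.99):
    # log_probs: list of log pi(a_t | s_t) collected while generating one graph (one episode)
    # rewards:   list of per-step rewards from the reward function
    returns, G = [], 0.0
    for r in reversed(rewards):                        # discounted return-to-go
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs) * returns).sum()   # gradient ascent on expected return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```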
Autoregressive Approach: The term 'autoregressive' means that the model uses its own previous outputs as input for the next step. For GraphAF, this means generating a new edge in the graph based on the edges that were generated in the previous steps. Essentially, the graph is generated incrementally, with each new edge being influenced by the structure of the graph up to that point.
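The loop below sketches that autoregressive dependence under simplifying assumptions: `model`, `add_edge`, and `init_graph` are hypothetical placeholders, and the point is only that each sampled edge is immediately folded back into the graph that conditions the next step.

```python
# Hypothetical sketch of autoregressive graph generation: each output feeds the next input.
import torch

def generate_autoregressive(model, init_graph, max_edges):
    graph = init_graph                      # partial graph state (e.g., node features + adjacency)
    for _ in range(max_edges):
        logits = model(graph)               # condition on everything generated so far
        edge = torch.distributions.Categorical(logits=logits).sample()
        graph = add_edge(graph, edge)       # the model's own output becomes part of the next input
    return graph
```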
Flow Model: The 'flow' part of GraphAF refers to normalizing flows, a machine learning method that builds complex probability distributions by applying a sequence of invertible transformations to a simple base distribution. This allows the model to learn a complex distribution over possible graphs, which can then be sampled to generate new graphs.
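As a minimal illustration of the flow idea (a conditional affine transform, not GraphAF's full architecture), the module below maps Gaussian noise to a graph-component variable conditioned on the current graph embedding, and its exact inverse maps data back to noise for likelihood computation. All names are hypothetical.

```python
# Sketch of a conditional affine flow transform: invertible, so exact likelihoods are available.
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    def __init__(self, dim, cond_dim):
        super().__init__()
        # Scale and shift are predicted from the conditioning vector (e.g., a graph embedding).
        self.scale = nn.Linear(cond_dim, dim)
        self.shift = nn.Linear(cond_dim, dim)

    def forward(self, z, cond):
        # Sampling direction: simple base noise z -> complex data-space variable x.
        s = torch.tanh(self.scale(cond))
        return z * torch.exp(s) + self.shift(cond)

    def inverse(self, x, cond):
        # Inverse direction: data -> noise, used to evaluate exact log-likelihoods during training.
        s = torch.tanh(self.scale(cond))
        return (x - self.shift(cond)) * torch.exp(-s)
```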
Sequential Generation: The graph is built step by step. At each step, GraphAF proposes a new edge by predicting its two endpoints based on the current graph structure.
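One such step could look like the sketch below: the two endpoints of the next edge are sampled from categorical distributions over the nodes already in the partial graph. The scoring head is an assumed placeholder, and the masking simply keeps the second endpoint distinct from the first.

```python
# Hypothetical sketch of one generation step: sample the two endpoints of the next edge.
import torch

def propose_edge(node_embeddings, score_head):
    # node_embeddings: [N, d] embeddings of nodes currently in the graph
    # score_head: any module mapping an embedding to a scalar logit (assumed to exist)
    logits = score_head(node_embeddings).squeeze(-1)                # [N] logit per candidate node
    first = torch.distributions.Categorical(logits=logits).sample() # first endpoint
    masked = logits.clone()
    masked[first] = float("-inf")                                   # forbid a self-loop
    second = torch.distributions.Categorical(logits=masked).sample()
    return first.item(), second.item()
```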
Jeancarlo C. Leão et al.