About BAMT algorithms · Changes

Updated About BAMT algorithms (markdown) authored Sep 04, 2022 by Rimmary's avatar Rimmary
About-BAMT-algorithms.md
View page @ c5fc3c87
...@@ -27,3 +27,8 @@ One of the extensions to this basic model available in Web BAMT is the applicati ...
A BIC- and AIC-criterion-based approach is used to determine the number of mixture components. Parameter learning of such a model is also done by likelihood maximization. Because of the large number of unknowns, an EM algorithm is used to find the model parameters. It consists of two steps: an estimation (E) step, in which the posterior probabilities of the mixture components are estimated, and a maximization (M) step, in which the mixture parameters are recalculated to maximize the expected likelihood.
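The EM loop and the BIC-based choice of the number of components described above can be sketched in pure Python. This is a minimal one-dimensional illustration with synthetic data, not Web BAMT's actual implementation; the quantile-based initialization and the variance floor are assumptions made to keep the toy example stable.

```python
import math
import random

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm_1d(data, k, iters=60):
    """Fit a 1-D Gaussian mixture with k components by EM; return (params, log-likelihood)."""
    n = len(data)
    ordered = sorted(data)
    # spread the initial means over the data quantiles so EM starts well separated
    means = [ordered[int((j + 0.5) * n / k)] for j in range(k)]
    overall_mean = sum(data) / n
    variances = [sum((x - overall_mean) ** 2 for x in data) / n] * k
    weights = [1.0 / k] * k
    ll = 0.0
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp, ll = [], 0.0
        for x in data:
            dens = [w * gauss_pdf(x, m, v) for w, m, v in zip(weights, means, variances)]
            total = sum(dens)
            ll += math.log(total)
            resp.append([d / total for d in dens])
        # M-step: re-estimate weights, means and variances from the responsibilities
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / n
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, data)) / nj + 1e-6
    return (weights, means, variances), ll

def bic(ll, k, n):
    # free parameters: (k - 1) weights + k means + k variances
    return (3 * k - 1) * math.log(n) - 2 * ll

# synthetic data from two well-separated Gaussians
rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(150)] + [rng.gauss(10.0, 1.0) for _ in range(150)]

# pick the number of components by the lowest BIC score
scores = {k: bic(em_gmm_1d(data, k)[1], k, len(data)) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)
best_means = sorted(em_gmm_1d(data, best_k)[0][1])
```

On this data BIC penalizes the extra parameters of a three-component fit more than the marginal likelihood gain, so the two-component model is selected; AIC would replace `math.log(n)` with the constant 2 in the penalty term.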
Another available modification is the inclusion of classification models; in this implementation such models are represented by logistic regression (logit). In general, Bayesian network structure learning methods allow continuous variables to be parents of discrete variables. The main problem arises at the parameter learning stage, since such parent-child pairs imply a model that estimates a discrete distribution conditioned on continuous data. However, these types of relationships can be specified by experts, and ignoring them may affect the quality and interpretability of the model.
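The role of the logit model here is to act as the conditional distribution of a discrete child given continuous parents: it maps a parent value to class probabilities. A minimal sketch with one continuous parent and a binary child, trained by full-batch gradient ascent (the learning rate, epoch count, and threshold rule are illustrative assumptions, not values from Web BAMT):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(xs, ys, lr=0.5, epochs=1000):
    """Fit P(y=1 | x) = sigmoid(w * x + b) by gradient ascent on the Bernoulli log-likelihood."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(w * x + b)  # gradient term of the log-likelihood
            gw += err * x
            gb += err
        w += lr * gw / n
        b += lr * gb / n
    return w, b

# continuous parent x; binary child y is 1 when the parent exceeds a threshold
rng = random.Random(1)
xs = [rng.uniform(-3.0, 3.0) for _ in range(400)]
ys = [1 if x > 0.5 else 0 for x in xs]

w, b = fit_logit(xs, ys)
p_low = sigmoid(w * (-2.0) + b)  # P(y=1 | x=-2), should be small
p_high = sigmoid(w * 3.0 + b)    # P(y=1 | x=3), should be large
```

The fitted model gives a well-defined discrete distribution for any continuous parent value, which is exactly what the parameter learning stage needs for a continuous-to-discrete edge.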
# Sampling algorithms
Sampling in Bayesian networks is done top-down according to the topological order. For the root nodes, the parametric model describes the distribution from which a random value is drawn: for discrete nodes this is a multinomial distribution, for continuous nodes a Gaussian distribution or a mixture of Gaussians. Once the values for the root nodes have been obtained, all subsequent distributions are conditional on the values of the parent nodes.
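This top-down (ancestral) scheme can be illustrated on a hypothetical two-node hybrid network: a discrete root A with a multinomial distribution and a continuous child B whose Gaussian parameters depend on the sampled state of A. The node names and parameter values are invented for the sketch:

```python
import random

rng = random.Random(42)

# Hypothetical hybrid network: discrete root A -> continuous child B.
# A ~ Multinomial over {"low", "high"}; B | A ~ Gaussian with A-specific (mean, std).
root_probs = {"low": 0.3, "high": 0.7}
child_params = {"low": (0.0, 1.0), "high": (10.0, 1.0)}

def sample_one():
    # Topological order: sample the root first, then the child conditioned on it.
    a = "low" if rng.random() < root_probs["low"] else "high"
    mean, std = child_params[a]
    b = rng.gauss(mean, std)
    return a, b

samples = [sample_one() for _ in range(5000)]
frac_high = sum(1 for a, _ in samples if a == "high") / len(samples)
mean_b_high = (lambda vs: sum(vs) / len(vs))([b for a, b in samples if a == "high"])
```

With more nodes the only change is that each node waits for all of its parents in the topological order before drawing its own value.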
For discrete nodes, the values are likewise drawn at random: according to the table of conditional distributions estimated from the data if all parents are discrete, or from the classification model if at least one parent is continuous. In the continuous case, the distribution parameters for sampling are obtained using Gaussian mixture regression (GMR). If the simple option is chosen, such a model is degenerate and has only one mixture component.
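The GMR step above can be sketched for a joint Gaussian mixture over a parent x and a child y: each component is re-weighted by how likely the observed x is under its marginal, a component is drawn from those weights, and y is sampled from that component's conditional Gaussian. The two components below are hypothetical fitted parameters, not output of Web BAMT:

```python
import math
import random

rng = random.Random(7)

# Hypothetical fitted 2-component joint mixture over (x, y); each entry is
# (weight, (mean_x, mean_y), (var_xx, cov_xy, var_yy)).
components = [
    (0.5, (0.0, 0.0), (1.0, 0.8, 1.0)),   # y tracks 0.8 * x near the origin
    (0.5, (6.0, 6.0), (1.0, -0.5, 1.0)),  # y tracks 6 - 0.5 * (x - 6) around (6, 6)
]

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def sample_y_given_x(x):
    # GMR: re-weight components by the likelihood of x under their x-marginals,
    # pick one, then draw y from its conditional Gaussian given x.
    likes = [w * gauss_pdf(x, mx, sxx) for w, (mx, my), (sxx, sxy, syy) in components]
    u = rng.random() * sum(likes)
    acc = 0.0
    for (w, (mx, my), (sxx, sxy, syy)), lj in zip(components, likes):
        acc += lj
        if u <= acc:
            cond_mean = my + sxy / sxx * (x - mx)          # conditional mean of y | x
            cond_var = syy - sxy ** 2 / sxx                # conditional variance of y | x
            return rng.gauss(cond_mean, math.sqrt(cond_var))

mean_at_0 = sum(sample_y_given_x(0.0) for _ in range(2000)) / 2000
mean_at_6 = sum(sample_y_given_x(6.0) for _ in range(2000)) / 2000
```

With a single mixture component (the simple option mentioned above), the re-weighting step is trivial and the procedure reduces to sampling from one conditional Gaussian, i.e. ordinary linear-Gaussian regression.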