#' fairness: Algorithmic Fairness Metrics
#'
#' The \strong{fairness} package offers calculation, visualization and comparison of algorithmic fairness metrics. Fair machine learning is an emerging field that critically assesses whether ML algorithms reinforce existing social biases. Unfair algorithms can propagate such biases and produce predictions with a disparate impact on sensitive groups of individuals (defined, for example, by sex, gender, ethnicity, religion, income, socioeconomic status, or physical or mental disability). Fair algorithms rest on the principle that these groups should be treated similarly or receive similar prediction outcomes. The package computes both widely and less commonly used fairness metrics across population subgroups, and provides convenient visualizations to help interpret them.
#'
#' @details
#' \tabular{ll}{
#' Package: \tab fairness\cr
#' Depends: \tab R (>= 3.5.0)\cr
#' Type: \tab Package\cr
#' Version: \tab 1.2.2\cr
#' Date: \tab 2021-04-14\cr
#' License: \tab MIT\cr
#' LazyLoad: \tab Yes
#' }
#'
#' @author
#' \itemize{
#' \item Nikita Kozodoi \email{n.kozodoi@@icloud.com}
#' \item Tibor V. Varga \email{tirgit@@hotmail.com}
#' }
#'
#' @seealso
#' \url{https://github.com/kozodoi/fairness}
#' \url{https://kozodoi.me/r/fairness/packages/2020/05/01/fairness-tutorial.html}
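#'
#' A minimal usage sketch is shown below. It assumes the \code{compas} example
#' dataset shipped with the package and the \code{equal_odds()} metric function;
#' column and argument names are taken from the package tutorial and may need
#' adjusting against the installed version.
#'
#' @examples
#' \dontrun{
#' library(fairness)
#'
#' # COMPAS recidivism data bundled with the package (assumed structure)
#' data('compas')
#'
#' # Equalized odds: compare sensitivities (TPRs) across ethnic groups,
#' # relative to a chosen base group
#' equal_odds(data    = compas,
#'            outcome = 'Two_yr_Recidivism',
#'            group   = 'ethnicity',
#'            probs   = 'probability',
#'            cutoff  = 0.5,
#'            base    = 'Caucasian')
#' }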
#'
#' @name fairness
#' @docType package
NULL