#' peRspective: Interface to the Perspective API
#'
#' Provides access to the Perspective API (\url{http://www.perspectiveapi.com/}). Perspective uses machine learning models to score the perceived impact a comment might have on a conversation.
#' `peRspective` provides access to the API using the R programming language.
#' For excellent documentation of the Perspective API, see [here](https://developers.perspectiveapi.com/s/docs).
#'
#' @section Get API Key:
#' [Follow these steps](https://developers.perspectiveapi.com/s/docs-get-started) as outlined by the Perspective API to get an API key.
#'
#' @section Suggested Usage of API Key:
#' \pkg{peRspective} functions will read the API key from the
#' environment variable \code{perspective_api_key}.
#' You can specify it like this at the start of your script:
#'
#' \code{Sys.setenv(perspective_api_key = "**********")}
#'
#' To start an R session with the environment variable already set, create an
#' \code{.Renviron} file in your R home directory with a line like this:
#'
#' \code{perspective_api_key = "**********"}
#'
#' To check where your R home is, try \code{normalizePath("~")}.
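#'
#' To check that the key is visible to your current session (a minimal check
#' using base R):
#'
#' ```r
#' # Returns "" if the variable is not set
#' Sys.getenv("perspective_api_key")
#' ```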
#'
#' @section Quota and character length limits:
#' You can check your quota limits on [your Google Cloud project's Perspective API page](https://console.cloud.google.com/apis/api/commentanalyzer.googleapis.com/quotas)
#' and your project's quota usage on
#' [the Cloud Console quota usage page](https://console.cloud.google.com/iam-admin/quotas).
#'
#' The maximum text size per request is 3000 bytes.
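#'
#' A minimal sketch (base R only, not part of the package API) for keeping a
#' comment under that limit before scoring it:
#'
#' ```r
#' my_comment <- strrep("a fairly long comment ", 200)
#'
#' # nchar(type = "bytes") measures the encoded size. Note that substr()
#' # counts characters, so for multi-byte text this trim is only approximate.
#' if (nchar(my_comment, type = "bytes") > 3000) {
#'   my_comment <- substr(my_comment, 1, 3000)
#' }
#' nchar(my_comment, type = "bytes")
#' ```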
#'
#' @section Models in Production:
#'
#' The following production-ready models are **recommended** for use. They have been tested
#' across multiple domains and trained on hundreds of thousands of comments tagged
#' by thousands of human moderators. They are available in **English (en), Spanish (es), French (fr), German (de), Portuguese (pt), Italian (it), Russian (ru)**. A minimal scoring example follows the list.
#'
#' * **TOXICITY**: a rude, disrespectful, or unreasonable comment that is likely to
#' make people leave a discussion. This model is a
#' [Convolutional Neural Network](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNN)
#' trained with [word-vector](https://www.tensorflow.org/tutorials/text/word2vec)
#' inputs.
#' * **SEVERE_TOXICITY**: This model uses the same deep-CNN algorithm as the
#' TOXICITY model, but is trained to recognize examples that were considered
#' to be 'very toxic' by crowdworkers. This makes it much less sensitive to
#' comments that include positive uses of curse words, for example. A labelled dataset
#' and details of the methodology can be found in the same [toxicity dataset](https://figshare.com/articles/dataset/Wikipedia_Talk_Labels_Toxicity/4563973)
#' that is available for the TOXICITY model.
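#'
#' For example, scoring a single comment with the TOXICITY model might look
#' like this (a minimal sketch; the comment text is a placeholder and the API
#' key is assumed to be set as described above):
#'
#' ```r
#' library(peRspective)
#'
#' # Request a TOXICITY score for one comment; prsp_score() returns a
#' # tibble with one probability column per requested model
#' prsp_score(
#'   text = "Hello, I am a placeholder comment.",
#'   score_model = "TOXICITY"
#' )
#' ```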
#'
#' @section Experimental models:
#'
#' The following experimental models give more fine-grained classifications than
#' overall toxicity. They were trained on less data than the production
#' toxicity models above and have not been tested as thoroughly.
#'
#' * **IDENTITY_ATTACK**: negative or hateful comments targeting someone because of their identity.
#' * **INSULT**: insulting, inflammatory, or negative comment towards a person
#' or a group of people.
#' * **PROFANITY**: swear words, curse words, or other obscene or profane
#' language.
#' * **THREAT**: describes an intention to inflict pain, injury, or violence
#' against an individual or group.
#' * **SEXUALLY_EXPLICIT**: contains references to sexual acts, body parts, or
#' other lewd content.
#' * **FLIRTATION**: pickup lines, complimenting appearance, subtle sexual
#' innuendos, etc.
#'
#' For more details on how these were trained, see the [Toxicity and sub-attribute annotation guidelines](https://github.com/conversationai/conversationai.github.io/blob/master/crowdsourcing_annotation_schemes/toxicity_with_subattributes.md).
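#'
#' Several experimental attributes can be requested in a single call by passing
#' a vector of model names (a minimal sketch with a placeholder comment):
#'
#' ```r
#' library(peRspective)
#'
#' # Score one comment against three experimental models at once
#' prsp_score(
#'   text = "You look great today, want to grab a coffee?",
#'   score_model = c("INSULT", "THREAT", "FLIRTATION")
#' )
#' ```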
#'
#' @section New York Times moderation models:
#'
#' The following experimental models were trained on New York Times data tagged by
#' their moderation team.
#'
#' * **ATTACK_ON_AUTHOR**: Attack on the author of an article or post.
#' * **ATTACK_ON_COMMENTER**: Attack on fellow commenter.
#' * **INCOHERENT**: Difficult to understand, nonsensical.
#' * **INFLAMMATORY**: Intending to provoke or inflame.
#' * **LIKELY_TO_REJECT**: Overall measure of the likelihood that the comment will
#' be rejected according to the NYT's moderation team.
#' * **OBSCENE**: Obscene or vulgar language such as cursing.
#' * **SPAM**: Irrelevant and unsolicited commercial content.
#' * **UNSUBSTANTIAL**: Trivial or short comments.
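#'
#' Scoring many comments at once can be done with \code{prsp_stream()} (a
#' minimal sketch with made-up example data):
#'
#' ```r
#' library(peRspective)
#'
#' # Two made-up comments with unique IDs
#' comments <- tibble::tibble(
#'   unique_id = c("a1", "a2"),
#'   ctext = c("This article is complete nonsense.",
#'             "Buy cheap watches at my website!!!")
#' )
#'
#' # Stream every row through two NYT moderation models; prsp_stream()
#' # returns one row of scores per comment
#' prsp_stream(
#'   comments,
#'   text = ctext,
#'   text_id = unique_id,
#'   score_model = c("INFLAMMATORY", "SPAM")
#' )
#' ```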
#'
#' @section Don't forget to regain your spirits:
#'
#' Analyzing toxic comments can sometimes be disheartening. Feel free to look at this picture of cute kittens whenever you need to:
#'
#'
#' \if{html}{\figure{kittens.jpg}{Kittens}}
#' \if{latex}{\figure{kittens.jpg}{options: width=0.5in}}
#'
#' @md
#' @docType package
#' @name perspective-package
#' @aliases peRspective
NULL