OPL: Optimal Policy Learning

Provides functions for optimal policy learning in socioeconomic applications, helping users learn the most effective policies from data in order to maximize empirical welfare. Specifically, 'OPL' allows users to find "treatment assignment rules" that maximize overall welfare, defined as the sum of the policy effects estimated over all policy beneficiaries. Documentation for 'OPL' is provided in several international articles: Athey et al. (2021, <doi:10.3982/ECTA15732>), Kitagawa et al. (2018, <doi:10.3982/ECTA13288>), Cerulli (2022, <doi:10.1080/13504851.2022.2032577>), Cerulli (2021, <doi:10.1080/13504851.2020.1820939>), and the book by Gareth et al. (2013, <doi:10.1007/978-1-4614-7138-7>).
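The idea of a welfare-maximizing treatment assignment rule can be sketched in a few lines of base R. This is a minimal illustration, not the 'OPL' package's actual API: the data, the threshold-based rule, and all variable names below are hypothetical assumptions chosen for the example.

```r
# Hypothetical sketch of empirical welfare maximization (not OPL's API):
# pick the threshold c in the rule "treat if x > c" that maximizes the
# sum of estimated policy effects over the treated beneficiaries.
set.seed(1)
n <- 500
x <- runif(n)                # observed covariate (e.g., a need score)
tau_hat <- 0.5 * (x - 0.3)   # estimated individual policy effects (assumed)

# Welfare of the rule "treat if x > c": sum of effects over the treated
welfare <- function(c) sum(tau_hat[x > c])

grid <- seq(0, 1, by = 0.01)
c_star <- grid[which.max(sapply(grid, welfare))]
c_star  # optimal threshold; close to 0.3 in this simulated setup
```

Here the effects are positive only for units with x above 0.3, so the welfare-maximizing rule treats roughly that group; 'OPL' addresses the same problem with estimated, data-driven policy effects rather than a known formula.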

Package details

Author: Federico Brogi [aut, cre], Barbara Guardabascio [aut], Giovanni Cerulli [aut]
Maintainer: Federico Brogi <federicobrogi@gmail.com>
License: GPL-3
Version: 1.0.2
Package repository: CRAN
Installation: install the latest version of this package by entering the following in R:
install.packages("OPL")


OPL documentation built on April 4, 2025, 3:09 a.m.