Title: | An Implementation of Non-Dominated Sorting Genetic Algorithm III for Feature Selection |
---|---|
Description: | An adaptation of Non-dominated Sorting Genetic Algorithm III for multi-objective feature selection tasks. Non-dominated Sorting Genetic Algorithm III is a genetic algorithm that solves multiple optimization problems simultaneously by applying a non-dominated sorting technique. It uses a reference-point-based selection operator to explore the solution space and preserve diversity. See the original paper by K. Deb and H. Jain (2014) <DOI:10.1109/TEVC.2013.2281534> for a detailed description. |
Authors: | Artem Shramko |
Maintainer: | Artem Shramko <[email protected]> |
License: | GPL-3 |
Version: | 0.0.3 |
Built: | 2025-02-23 03:46:59 UTC |
Source: | https://github.com/cran/nsga3 |
This dataset classifies people described by a set of attributes as good or bad credit risks.
german_credit
A data frame with 1000 rows and 21 variables:
Factor. Status of existing checking account
Numeric. Duration in month
Factor. Purpose
Factor. Credit history
Numeric. Credit amount
Numeric. Savings account/bonds
Factor. Present employment since
Integer. Installment rate in percentage of disposable income
Factor. Personal status and gender
Factor. Other debtors / guarantors
Numeric. Present residence since
Factor. Property
Numeric. Age in years
Factor. Other installment plans
Factor. Housing
Numeric. Number of existing credits (num_credits)
Factor. Job
Numeric. Number of people being liable to provide maintenance for
Factor. Telephone
Factor. Foreign worker
Factor. Target feature. 1 = BAD
...
Professor Dr. Hans Hofmann (1994). UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data). Hamburg, Germany: Universitaet Hamburg, Institut fuer Statistik und Oekonometrie.
An adaptation of Non-dominated Sorting Genetic Algorithm III for multi-objective feature selection tasks. Non-dominated Sorting Genetic Algorithm III is a genetic algorithm that solves multiple optimization problems simultaneously by applying a non-dominated sorting technique. It uses a reference-point-based selection operator to explore the solution space and preserve diversity. See the paper by K. Deb and H. Jain (2014) <DOI:10.1109/TEVC.2013.2281534> for a detailed description of the algorithm.
nsga3fs(df, target, obj_list, obj_names, pareto, pop_size, max_gen, model, resampling = FALSE, num_features = TRUE, mutation_rate = 0.1, threshold = 0.5, feature_cost = FALSE, r_measures = list(mlr::mmce), cpus = 1)
df |
An original dataset. |
target |
Name of a column (a string), which contains classification target variable. |
obj_list |
A list of objective functions to be optimized. Must be a list of objects of type closure. |
obj_names |
A vector of the names of the objective functions. Must match the arguments passed to pareto. |
pareto |
Pareto criteria for non-dominated sorting. Should be passed in the form rPref::high(obj1)*rPref::low(obj2) (see Examples). |
pop_size |
Size of the population. |
max_gen |
Number of generations. |
model |
An mlr learner object created with mlr::makeLearner (see Examples). |
resampling |
An mlr resampling strategy created with mlr::makeResampleDesc, or FALSE for no resampling. Default value FALSE. |
num_features |
TRUE if the algorithm should minimize the number of features as one of the objectives. If set, you must pass a respective term to pareto as well as to obj_names. |
mutation_rate |
Probability of switching the value of a certain gene to its opposite. Default value 0.1. |
threshold |
Threshold applied during majority vote when calculating final output. Default value 0.5. |
feature_cost |
A vector of feature costs. Its length must equal ncol(df)-1. If set, you must pass a respective term to pareto as well as to obj_names. |
r_measures |
A list of performance metrics for mlr resampling. Default value list(mlr::mmce). |
cpus |
Number of sockets (CPU workers) to be used for parallelisation. Default value is 1. |
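The threshold and majority-vote mechanism described above can be illustrated with a small base-R sketch (the data and the helper name majority_vote are hypothetical; this is not the package's internal code): each Pareto-front individual is a binary vector, and a feature enters the final subset when the share of individuals selecting it reaches the threshold.

```r
# Hypothetical Pareto front: 3 individuals over 4 features (binary genes)
front <- rbind(c(1, 1, 0, 1),
               c(1, 0, 0, 1),
               c(1, 1, 1, 0))

# Keep features chosen by at least `threshold` of the front's individuals
majority_vote <- function(front, threshold = 0.5) {
  colMeans(front) >= threshold
}

majority_vote(front)  # TRUE TRUE FALSE TRUE
```

With the default threshold of 0.5, the third feature is dropped because only one of the three individuals selects it.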
A list with the final Pareto front:
A list containing two items: a list of the final Pareto-front individuals and a data.frame with their respective fitness values
The same content, structured per individual
The Pareto-front majority vote for the dataset's features
Runtime, dataset details and the model used
Be cautious when setting the population size and the maximum number of generations. Since NSGA-III is a wrapper feature-selection method, the model has to be retrained roughly N × (number of generations + 1) times, which may involve high computational costs. A 100 x 100 setting should be enough.
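Reading the retraining count above as N × (max_gen + 1), i.e. the initial population plus one offspring population per generation, the cost of a run can be estimated up front (the helper name n_trainings is hypothetical):

```r
# Rough count of model fits: initial population of size N,
# then one offspring population of size N per generation.
n_trainings <- function(pop_size, max_gen) pop_size * (max_gen + 1)

n_trainings(100, 100)  # 10100 model fits for the 100 x 100 setting
```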
This adaptation of the NSGA-III algorithm for multi-objective feature selection is currently available only for classification tasks.
As with any other genetic algorithm (GA), NSGA-III includes the following steps:
An initial population Pt of size N is created
A model is trained on each individual (subset) and fitness values are assigned
An offspring population of size N is created by crossover and mutation operators
The offspring population is combined with its parent population
The combined population of size 2N is split into Pareto fronts using the non-dominated sorting technique
The next generation's population Pt+1 of size N is selected from the top Pareto fronts with the help of an elitism-based selection operator
The loop is repeated until the final generation is reached
Each generation is populated by individuals representing different subsets. Each individual is represented as a binary vector, where each gene represents a feature in the original dataset.
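The binary encoding and the crossover/mutation operators described above can be sketched in base R (illustrative only, not the package's internal implementation; the helper names mutate and crossover are hypothetical):

```r
set.seed(42)

n_features <- 8
individual <- rbinom(n_features, 1, 0.5)  # one subset: gene 1 = feature kept

# Bit-flip mutation: each gene switches to its opposite
# with probability `mutation_rate`
mutate <- function(ind, mutation_rate = 0.1) {
  flip <- runif(length(ind)) < mutation_rate
  ifelse(flip, 1 - ind, ind)
}

# Single-point crossover between two parent individuals
crossover <- function(p1, p2) {
  cut <- sample(seq_along(p1)[-1], 1)
  c(p1[seq_len(cut - 1)], p2[cut:length(p2)])
}

child <- crossover(individual, mutate(individual))
length(child) == n_features  # TRUE
```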
K. Deb, H. Jain (2014) <DOI:10.1109/TEVC.2013.2281534>
xgb_learner <- mlr::makeLearner("classif.xgboost", predict.type = "prob",
                                par.vals = list(objective = "binary:logistic",
                                                eval_metric = "error",
                                                nrounds = 2))

rsmp <- mlr::makeResampleDesc("CV", iters = 2)
measures <- list(mlr::mmce)

f_auc <- function(pred) {
  auc <- mlr::performance(pred, mlr::auc)
  return(as.numeric(auc))
}

objective <- c(f_auc)
o_names <- c("AUC", "nf")
par <- rPref::high(AUC) * rPref::low(nf)

nsga3fs(df = german_credit, target = "BAD", obj_list = objective,
        obj_names = o_names, pareto = par, pop_size = 1, max_gen = 1,
        model = xgb_learner, resampling = rsmp, num_features = TRUE,
        r_measures = measures, cpus = 2)