From: *AI and Neurology*
Term | Definition |
---|---|
Algorithm | A step-by-step procedure or set of rules for solving a problem or performing a task. |
Neural Network | A computational model consisting of interconnected nodes (neurons) that process data. |
Feature | An individual measurable property or characteristic used as input to a model. |
Feature Engineering | The process of selecting, modifying, or creating features to improve the performance of a machine learning model. |
Input | The data provided to an AI system or model to process and analyze. |
Output | The result produced by an AI system or model after processing the input data. |
Training Data | A dataset used to train an AI model, allowing it to learn patterns and relationships. |
Testing Data | A dataset used to evaluate the performance of a trained AI model. |
Supervised Learning | A machine learning approach where models are trained on labeled data, learning to map inputs to outputs. |
Unsupervised Learning | A machine learning approach where models find patterns in unlabeled data without specific guidance on what to predict. |
Reinforcement Learning | A machine learning paradigm where agents learn by interacting with an environment and receiving feedback through rewards or penalties. |
Overfitting | When a model learns the training data too well, capturing noise and irrelevant details and losing the ability to generalize to new data. |
Underfitting | When a model is too simple to capture the underlying patterns in the data, resulting in poor performance on both training and testing data. |
Bias | A systematic error introduced by a model, causing it to consistently favor certain outcomes or predictions. |
Variance | The amount a model's predictions change when it is trained on different samples of data; high variance is typically associated with overfitting. |
Hyperparameters | The settings or parameters of a machine learning algorithm that are set before training and control the learning process. |
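Several of the terms above (training data, testing data, supervised learning, hyperparameters, overfitting, and underfitting) can be seen working together in one short sketch. The example below is a minimal illustration only, assuming scikit-learn and a synthetic dataset; the choice of a decision tree and the specific settings are assumptions made for demonstration, not something prescribed by this glossary.

```python
# A minimal sketch tying several glossary terms together: features/labels,
# a train/test split, a hyperparameter, and the overfitting/underfitting
# trade-off. Dataset and model choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Features (inputs) and labels (outputs) for a toy supervised-learning task.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Split into training data (used to fit the model) and testing data
# (held out to evaluate how well the model generalizes).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# max_depth is a hyperparameter: it is set before training and controls
# how complex the learned model is allowed to become.
for max_depth in (1, 3, None):  # None lets the tree grow without limit
    model = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    model.fit(X_train, y_train)

    train_acc = accuracy_score(y_train, model.predict(X_train))
    test_acc = accuracy_score(y_test, model.predict(X_test))

    print(f"max_depth={max_depth}: train={train_acc:.2f}, test={test_acc:.2f}")
```

In this kind of sketch, a very shallow tree tends to underfit (mediocre accuracy on both the training and testing data), while an unconstrained tree tends to overfit (near-perfect training accuracy with noticeably lower testing accuracy); comparing the two scores is a common way to diagnose which problem a model has.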