Portal:Artificial intelligence
Four approaches summarize past attempts to define the field:
- The study of systems that think like humans.
- The study of systems that think rationally.
- The study of systems that act like humans.
- The study of systems that act rationally.
Of these, the first two are "white-box" approaches: they require our analysis of intelligence to rest on the rationale behind the behaviour rather than on the behaviour itself. The latter two are "black-box" approaches: they operationalize intelligence by measuring performance over a task domain. We prefer the latter two because they allow quantitative comparison between systems rather than requiring a qualitative comparison of rationales. Since the ultimate performance of a system depends heavily on the task domain in which it is situated, we prefer to study activity (behaviour) rather than thought (rationale).
Although the first approach (known as cognitive modelling) is of great importance to cognitive scientists, we concern ourselves with the fourth. Of the four, it allows us to consider a theoretical system whose behaviour is optimally suited to achieving its goals, given the information available to it.
This approach leads to the model we use for intelligent systems: the intelligent agent.
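As a minimal sketch of this agent view, the snippet below implements the simple reflex vacuum agent familiar from the AIMA textbook listed under the readings below. The two-location environment and the percept format are illustrative assumptions for this sketch, not part of any particular learning project.

```python
# A minimal sketch of the intelligent-agent abstraction described above.
# The two-location "vacuum world" and the (location, is_dirty) percept are
# illustrative assumptions; a real agent would use a concrete task domain.

class ReflexVacuumAgent:
    """Chooses the action expected to best achieve its goal (a clean world),
    given only the information available in the current percept."""

    def act(self, percept):
        location, is_dirty = percept
        if is_dirty:
            return "Suck"                      # cleaning always serves the goal
        return "Right" if location == "A" else "Left"   # otherwise keep exploring

# Percept-action loop: the agent maps percepts to actions, and its
# intelligence is judged by its behaviour over the task domain.
agent = ReflexVacuumAgent()
for percept in [("A", True), ("A", False), ("B", True)]:
    print(percept, "->", agent.act(percept))
```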
See: Learning Projects and the Wikiversity:Learning model.
Learning materials and learning projects are located in the main Wikiversity namespace. Simply make a link to the name of the learning project (learning projects are independent pages in the main namespace) and start writing! We suggest using the learning project template: place "subst:Learning project boilerplate" inside double curly brackets {{}} on the new page.
Learning materials and learning projects can be used by multiple departments. Cooperate with other departments that use the same learning resource. Understanding AI as a field of computer science involves a thorough understanding of the topics covered by the learning projects and readings below.
Remember, Wikiversity has adopted the "learning by doing" model for education. Lessons should center on learning activities for Wikiversity participants. We learn by doing.
Select a descriptive name for each learning project.
Applied projects
Research projects
Readings
Wikipedia
See also
External links
- Numenta Platform for Intelligent Computing (NuPIC)
- 'Wikipedia and Artificial Intelligence: An Evolving Synergy' - a workshop
- Common-sense Computing @ MIT Media Lab Project page
- MLOps Wiki - A glossary of machine learning terms
- AI Research - A collection of research-based articles in AI space
- Artificial Intelligence: A Modern Approach, companion to the popular textbook
Documentation, manuals
- https://platform.openai.com/
- https://huggingface.co/docs
- Hub Host Git-based models, datasets and Spaces on the Hugging Face Hub.
- Transformers State-of-the-art ML for PyTorch, TensorFlow, and JAX (a minimal usage sketch follows this list).
- Diffusers State-of-the-art diffusion models for image and audio generation in PyTorch.
- Datasets Access and share datasets for computer vision, audio, and NLP tasks.
- Hub Python Library Client library for the HF Hub: manage repositories from your Python runtime.
- Huggingface.js A collection of JS libraries to interact with Hugging Face, with TS types included.
- Transformers.js Community library to run pretrained models from Transformers in your browser.
- Inference API (serverless) Experiment with over 200k models easily using the serverless tier of Inference Endpoints.
- Inference Endpoints (dedicated) Easily deploy models to production on dedicated, fully managed infrastructure.
- PEFT Parameter efficient finetuning methods for large models.
- Accelerate Easily train and use PyTorch models with multi-GPU, TPU, mixed-precision.
- Optimum Fast training and inference of HF Transformers with easy-to-use hardware optimization tools.
- AWS Trainium & Inferentia Train and Deploy Transformers & Diffusers with AWS Trainium and AWS Inferentia via Optimum.
- Tokenizers Fast tokenizers, optimized for both research and production.
- Evaluate Evaluate and report model performance more easily and in a more standardized way.
- Tasks All things about ML tasks: demos, use cases, models, datasets, and more!
- Dataset viewer API to access the contents, metadata and basic statistics of all Hugging Face Hub datasets.
- TRL Train transformer language models with reinforcement learning.
- Amazon SageMaker Train and Deploy Transformer models with Amazon SageMaker and Hugging Face DLCs.
- timm State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.
- Safetensors A simple, safe way to store and distribute neural network weights quickly.
- Text Generation Inference Toolkit to serve Large Language Models.
- AutoTrain AutoTrain API and UI.
- Text Embeddings Inference Toolkit to serve Text Embedding Models.
- Competitions Create your own competitions on Hugging Face.
- Bitsandbytes Toolkit to optimize and quantize models.
- Google TPUs Deploy models on Google TPUs via Optimum.
- Chat UI Open source chat frontend, powers the HuggingChat app.
- Leaderboards Create your own Leaderboards on Hugging Face.
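To illustrate the Transformers entry above, here is a minimal, hedged usage sketch of the task-level pipeline API. The default sentiment-analysis checkpoint is chosen by the library, is downloaded from the Hub on first use, and may change between library versions.

```python
# Minimal sketch of the Transformers "pipeline" API listed above.
# Requires: pip install transformers, plus a backend such as PyTorch.
from transformers import pipeline

# A task-level pipeline downloads a default pretrained model from the Hub
# on first use; pass model="..." to pin a specific checkpoint instead.
classifier = pipeline("sentiment-analysis")
print(classifier("Wikiversity learning projects are a great way to learn by doing."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```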
Courses
Fastbook
- https://github.com/fastai/fastbook/
- Your Deep Learning Journey
- From Model to Production
- Data Ethics
- Under the Hood: Training a Digit Classifier
- Image Classification
- Other Computer Vision Problems
- Training a State-of-the-Art Model
- Collaborative Filtering Deep Dive
- Tabular Modeling Deep Dive
- NLP Deep Dive: RNNs
- Data Munging with fastai's Mid-Level API
- A Language Model from Scratch
- Convolutional Neural Networks
- ResNets
- Application Architectures Deep Dive
- The Training Process
- A Neural Net from the Foundations
- CNN Interpretation with CAM
- A fastai Learner from Scratch
- Concluding Thoughts
- Appendix: Jupyter Notebook 101
Hugging Face NLP
- Natural Language Processing (NLP) course
- Transformer models
- Using Transformers
- Fine-tuning a pretrained model: preprocessing, map, dataset, dynamic padding, batch, collate function, train, predict, evaluate, accelerate (see the sketch after this list)
- Sharing models and tokenizers: hub, model card
- The Datasets library: batch, DataFrame, validation, splitting, embedding, FAISS
- The Tokenizers library: training a tokenizer, grouping, QnA, normalizers, pre-tokenization, models and trainers (BPE, WordPiece, Unigram), post-processors, decoders
- Main NLP tasks: token classification, metrics, perplexity, translation, summarization, training a causal language model, QnA
- How to ask for help
- Building and sharing demos
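To make the fine-tuning chapter concrete, the sketch below strings together the keywords above (dataset, map, dynamic padding, collate function, train, evaluate) using the Trainer API. The bert-base-uncased checkpoint and the GLUE/MRPC dataset are illustrative choices rather than requirements of the course, and recent library versions may prefer `processing_class` over the `tokenizer` argument of Trainer.

```python
# Sketch of the fine-tuning workflow outlined in the course chapter above:
# load a dataset, tokenize with map, pad dynamically with a collate function,
# then train and evaluate with the Trainer API. Requires: transformers,
# datasets, and a PyTorch backend. The checkpoint and dataset names below
# are illustrative choices.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          DataCollatorWithPadding, TrainingArguments, Trainer)

checkpoint = "bert-base-uncased"
raw = load_dataset("glue", "mrpc")                       # small paraphrase dataset
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize(batch):
    # Tokenize sentence pairs; padding is deferred to the collate function.
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

tokenized = raw.map(tokenize, batched=True)              # batched map over the dataset
collator = DataCollatorWithPadding(tokenizer=tokenizer)  # dynamic padding per batch

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
args = TrainingArguments("mrpc-finetune", num_train_epochs=1)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  data_collator=collator, tokenizer=tokenizer)
trainer.train()
print(trainer.evaluate())                                # eval loss and metrics
```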