
Basic Steps of NLP (Natural Language Processing) with a Real Example!

The steps for NLP (Natural Language Processing) typically include the following:

  1. Text preprocessing: This step involves cleaning and preparing the text data for analysis. This can include tasks such as tokenization, removing stop words, and stemming or lemmatization.
  2. Exploratory data analysis: This step involves analyzing the text data to gain insights and understand the structure of the data. This can include tasks such as creating word frequency plots, identifying common phrases, and creating word clouds.
  3. Feature extraction: This step involves extracting meaningful features from the text data that can be used for further analysis or modeling. This can include tasks such as creating a bag of words representation, creating a TF-IDF matrix, or extracting named entities.
  4. Modeling: This step involves training a model on the text data to perform a specific task such as sentiment analysis, language translation, or text summarization.
  5. Evaluation: This step involves evaluating the performance of the model using metrics such as accuracy, precision, recall, and F1-score.

A Simple Example of an NLP Pipeline Using Python

# Import necessary libraries
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

# Download the required NLTK data (only needed once)
# nltk.download('punkt')
# nltk.download('stopwords')

# Text preprocessing
text = "This is an example of NLP. It includes tokenization, stop word removal, and stemming."
tokens = word_tokenize(text)

# Stop word removal (compare lowercased tokens so "This" and "It" are caught too)
stop_words = set(stopwords.words("english"))
filtered_tokens = [token for token in tokens if token.lower() not in stop_words]

# Stemming
stemmer = nltk.PorterStemmer()
stemmed_tokens = [stemmer.stem(token) for token in filtered_tokens]

print(stemmed_tokens)

This example uses the NLTK library in Python to perform tokenization, stop word removal, and stemming on a sample text. The output is a list of stemmed tokens.

Here is a more detailed step-by-step explanation of an NLP pipeline, including code examples:

  1. Text preprocessing: This step involves cleaning and preparing the text data for analysis. This can include tasks such as tokenization, removing stop words, and stemming or lemmatization.
import nltk
from nltk.tokenize import word_tokenize

# Tokenization (requires the 'punkt' tokenizer data: nltk.download('punkt'))
text = "This is an example of NLP. It includes tokenization."
tokens = word_tokenize(text)
print(tokens)
# Output: ['This', 'is', 'an', 'example', 'of', 'NLP', '.', 'It', 'includes', 'tokenization', '.']

The above code uses NLTK’s word_tokenize function to tokenize the text into a list of words.

from nltk.corpus import stopwords

# Stop word removal (compare lowercased tokens so capitalised words are matched too)
stop_words = set(stopwords.words("english"))
filtered_tokens = [token for token in tokens if token.lower() not in stop_words]
print(filtered_tokens)
# Output: ['example', 'NLP', '.', 'includes', 'tokenization', '.']

The above code uses NLTK’s stopwords corpus to remove common stop words such as “is”, “an”, and “of” from the tokens; lowercasing each token before the comparison ensures that capitalised forms such as “This” and “It” are removed as well.

from nltk.stem import PorterStemmer

# Stemming
stemmer = PorterStemmer()
stemmed_tokens = [stemmer.stem(token) for token in filtered_tokens]
print(stemmed_tokens)
# Output: ['exampl', 'nlp', '.', 'includ', 'token', '.']

The above code uses NLTK’s PorterStemmer to stem the remaining tokens, reducing them to their base form.
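
Lemmatization, mentioned above as an alternative to stemming, maps each word to its dictionary form instead of chopping off suffixes. Here is a minimal sketch using NLTK’s WordNetLemmatizer; it assumes the WordNet data has been downloaded, and the example words are chosen purely for illustration.

from nltk.stem import WordNetLemmatizer

# nltk.download('wordnet')  # required once for the WordNet lemmatizer

lemmatizer = WordNetLemmatizer()

# Lemmatize as nouns (the default part of speech)
print([lemmatizer.lemmatize(w) for w in ["examples", "studies", "corpora"]])
# e.g. ['example', 'study', 'corpus']

# Lemmatize as verbs by passing pos="v"
print([lemmatizer.lemmatize(w, pos="v") for w in ["includes", "running", "was"]])
# e.g. ['include', 'run', 'be']

Unlike the stemmer, the lemmatizer always returns a real word, but it needs to know the part of speech to do its best work.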

2. Exploratory data analysis: This step involves analyzing the text data to gain insights and understand the structure of the data. This can include tasks such as creating word frequency plots, identifying common phrases, and creating word clouds.

from nltk.probability import FreqDist
import matplotlib.pyplot as plt

# Creating word frequency plot
fdist = FreqDist(tokens)
fdist.plot(30,cumulative=False)
plt.show()

The above code uses NLTK’s FreqDist to create a frequency distribution of the tokens, and then plots the 30 most common words.

from nltk.util import ngrams

# Creating n-grams (here bigrams, n=2); avoid reusing the name "ngrams" for the result
n = 2
bigrams = list(ngrams(tokens, n))
print(bigrams)

The above code uses NLTK’s ngrams function to create a list of bigrams; each element of the resulting list is a tuple of n consecutive tokens.
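
Word clouds, also mentioned above as an exploratory step, can be generated with the third-party wordcloud package (installed separately with pip install wordcloud). This is a minimal sketch, not part of NLTK itself.

from wordcloud import WordCloud
import matplotlib.pyplot as plt

text = "This is an example of NLP. It includes tokenization, stop word removal, and stemming."

# Build the word cloud; the package tokenizes and drops common stop words internally
wc = WordCloud(width=800, height=400, background_color="white").generate(text)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()

On a corpus of any real size, the word cloud gives a quick visual impression of which terms dominate the text.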

3. Feature extraction: This step involves extracting meaningful features from the text data that can be used for further analysis or modeling. This can include tasks such as creating a bag of words representation, creating a TF-IDF matrix, or extracting named entities.

from sklearn.feature_extraction.text import CountVectorizer

# Creating bag of words representation
corpus = ['This is the first document.',
          'This is the second second document.',
          'And the third one.',
          'Is this the first document?']
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
print(X.toarray())

The above code creates a bag-of-words representation of a corpus of four sentences. fit_transform returns a sparse matrix whose entries are the count of each word in each document; toarray() converts it to a dense array so it can be printed.
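
To see which column of that matrix corresponds to which word, the learned vocabulary can be inspected. The sketch below assumes scikit-learn 1.0 or newer, where the method is called get_feature_names_out (older versions expose get_feature_names instead).

# Map matrix columns back to words (scikit-learn >= 1.0)
print(vectorizer.get_feature_names_out())
# e.g. ['and' 'document' 'first' 'is' 'one' 'second' 'the' 'third' 'this']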

Another example of feature extraction is creating a TF-IDF matrix:

from sklearn.feature_extraction.text import TfidfVectorizer

# Creating TF-IDF matrix
corpus = ['This is the first document.',
          'This is the second second document.',
          'And the third one.',
          'Is this the first document?']
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
print(X.toarray())

The above code creates a TF-IDF matrix of a corpus of four sentences. The resulting matrix contains the TF-IDF weight of each word in each document, so frequent-but-uninformative words get lower scores than words that are distinctive for a particular document.
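
Named entity extraction, listed above as another feature extraction task, can be done with NLTK’s built-in chunker. This is a rough sketch: it assumes the extra NLTK data packages ('averaged_perceptron_tagger', 'maxent_ne_chunker', and 'words') have been downloaded, the sample sentence is invented, and the entity labels are only approximate.

import nltk
from nltk.tokenize import word_tokenize

# nltk.download('averaged_perceptron_tagger')  # POS tagger model
# nltk.download('maxent_ne_chunker')           # named entity chunker
# nltk.download('words')                       # word list used by the chunker
# (newer NLTK releases may ask for the *_tab variants of these packages)

sentence = "Barack Obama was born in Hawaii and worked in Washington."
tokens = word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)   # part-of-speech tags
tree = nltk.ne_chunk(tagged)    # named entity chunk tree

# Collect (entity text, entity label) pairs from the tree
entities = [(" ".join(word for word, tag in subtree.leaves()), subtree.label())
            for subtree in tree.subtrees()
            if subtree.label() in ("PERSON", "GPE", "ORGANIZATION", "LOCATION")]
print(entities)
# e.g. [('Barack', 'PERSON'), ('Obama', 'PERSON'), ('Hawaii', 'GPE'), ('Washington', 'GPE')]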

4. Modeling: This step involves training a model on the text data to perform a specific task such as sentiment analysis, language translation, or text summarization.

from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split

# X is the feature matrix from the previous step (bag of words or TF-IDF);
# y must be a vector of labels (e.g. 0/1 sentiment) that you supply yourself
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Training a classifier
clf = MultinomialNB()
clf.fit(X_train, y_train)

# Making predictions
y_pred = clf.predict(X_test)

The above code uses a MultinomialNB classifier from the scikit-learn library to train a model on the text features and predict the target variable (sentiment, in this case) on the test set. Note that y is not defined in the snippets above; it has to come from your own labelled data, as in the sketch below.
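
Because the snippet above assumes you already have a label vector y, here is a self-contained sketch on a tiny, made-up sentiment dataset (the sentences and labels are invented purely for illustration). With so few examples the scores mean nothing, but it shows how the pieces fit together.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split

# Tiny, made-up dataset: 1 = positive, 0 = negative
texts = [
    "I love this movie, it was great",
    "What a fantastic experience",
    "Absolutely wonderful acting and story",
    "This film was terrible and boring",
    "I hated every minute of it",
    "Worst movie I have ever seen",
]
labels = [1, 1, 1, 0, 0, 0]

# Turn raw text into TF-IDF features
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# Split, train, and predict
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.33, random_state=42)
clf = MultinomialNB()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(y_pred)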

5. Evaluation: This step involves evaluating the performance of the model using metrics such as accuracy, precision, recall, and F1-score.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Evaluation metrics
acc = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)

print("Accuracy: {:.2f}%".format(acc*100))
print("Precision: {:.2f}%".format(prec*100))
print("Recall: {:.2f}%".format(rec*100))
print("F1-Score: {:.2f}%".format(f1*100))

The above code uses the accuracy_score, precision_score, recall_score, and f1_score functions from the scikit-learn library to evaluate the performance of the model. These metrics are commonly used for text classification and give a quick picture of how well the model is performing. For multi-class problems, pass an average argument (for example average='macro') to precision_score, recall_score, and f1_score.
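
If you want all of these metrics in one place, scikit-learn also provides classification_report, which prints precision, recall, F1, and support for each class; a one-line sketch:

from sklearn.metrics import classification_report

# Per-class precision, recall, F1, and support in a single table
print(classification_report(y_test, y_pred))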

It is worth noting that different NLP problems require different approaches and techniques, so the specific steps and code examples provided above may not be directly applicable to all NLP tasks. Additionally, the code examples provided are just for demonstration purposes and may not be directly usable without additional modifications and fine-tuning.

For some snippets I have not shown the output and only provided the code, so that students can try them on their own corpus of sentences or database.

Summary

Natural Language Processing (NLP) is a field of Artificial Intelligence that deals with the interaction between computers and humans using natural language. The process of NLP involves various steps such as text preprocessing, exploratory data analysis, feature extraction, modeling, and evaluation.

Text preprocessing involves cleaning and preparing the text data for analysis. This can include tasks such as tokenization, stop word removal, and stemming or lemmatization. Exploratory data analysis involves analyzing the text data to gain insights and understand the structure of the data. This can include tasks such as creating word frequency plots, identifying common phrases, and creating word clouds.

Feature extraction is the step of extracting meaningful features from the text data that can be used for further analysis or modeling. This can include tasks such as creating a bag of words representation, creating a TF-IDF matrix, or extracting named entities. Modeling is the step of training a model on the text data to perform a specific task such as sentiment analysis, language translation, or text summarization. Evaluation is the step of evaluating the performance of the model using metrics such as accuracy, precision, recall, and F1-score.

In summary, NLP is an essential field that helps computers to understand, interpret and generate human language. It is widely used in various applications such as speech recognition, machine translation, sentiment analysis, and text summarization. NLP techniques and tools allow computers to understand human language, making it possible for them to interact with humans in a more natural and intuitive way.

Best of Luck!!

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Do clap for this article if you liked it; that will really encourage me. Thanks!


#Naturallanguageprocessing

#Artificial Intelligence

#Towards Data Science

#Machine Learning

#Beginners Guide
