class sklearn.feature_extraction.text.CountVectorizer(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\b\w\w+\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>)
Convert a collection of text documents to a matrix of token counts
This implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.
If you do not provide an a-priori dictionary and you do not use an analyzer that does some kind of feature selection then the number of features will be equal to the vocabulary size found by analyzing the data.
Read more in the User Guide.
Parameters:

input : string {'content', 'filename', 'file'}, default='content'
    If 'filename', the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If 'file', the sequence items must have a 'read' method (file-like object) that is called to fetch the bytes in memory. Otherwise the input is expected to be a sequence of items of type string or bytes.
encoding : string, default='utf-8'
    If bytes or files are given to analyze, this encoding is used to decode.
decode_error : {'strict', 'ignore', 'replace'}, default='strict'
    Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default a UnicodeDecodeError is raised.
strip_accents : {'ascii', 'unicode', None}, default=None
    Remove accents and perform other character normalization during preprocessing. 'ascii' is fast but only handles characters with a direct ASCII mapping; 'unicode' is slower but handles any character. None does nothing.
lowercase : boolean, default=True
    Convert all characters to lowercase before tokenizing.
preprocessor : callable or None, default=None
    Override the preprocessing (string transformation) stage while preserving the tokenizing and n-grams generation steps.
tokenizer : callable or None, default=None
    Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'.
stop_words : string {'english'}, list, or None, default=None
    If 'english', a built-in stop word list for English is used. If a list, all its terms are removed from the resulting tokens. Only applies if analyzer == 'word'. If None, no stop words are used; max_df in [0.7, 1.0) can instead filter terms by document frequency.
token_pattern : string
    Regular expression denoting what constitutes a "token"; only used if analyzer == 'word'. The default selects tokens of 2 or more alphanumeric characters (punctuation is ignored and always treated as a token separator).
ngram_range : tuple (min_n, max_n), default=(1, 1)
    The lower and upper boundary of the range of n-values for the n-grams to be extracted. All values of n such that min_n <= n <= max_n are used.
analyzer : string {'word', 'char', 'char_wb'} or callable, default='word'
    Whether features are made of word or character n-grams. 'char_wb' creates character n-grams only from text inside word boundaries; n-grams at word edges are padded with space. A callable is used to extract features from the raw, unprocessed input.
max_df : float in [0.0, 1.0] or int, default=1.0
    When building the vocabulary, ignore terms with a document frequency strictly higher than the threshold (corpus-specific stop words). A float is a proportion of documents; an integer is an absolute count. Ignored if vocabulary is not None.
min_df : float in [0.0, 1.0] or int, default=1
    When building the vocabulary, ignore terms with a document frequency strictly lower than the threshold (also called cut-off in the literature). A float is a proportion of documents; an integer is an absolute count. Ignored if vocabulary is not None.
max_features : int or None, default=None
    If not None, build a vocabulary that only considers the top max_features terms ordered by term frequency across the corpus. Ignored if vocabulary is not None.
vocabulary : Mapping or iterable, optional
    Either a Mapping (e.g. a dict) from terms to feature indices, or an iterable over terms. If not given, the vocabulary is determined from the input documents. Indices must not repeat and must not leave gaps between 0 and the largest index.
binary : boolean, default=False
    If True, all non-zero counts are set to 1. Useful for discrete probabilistic models that model binary events rather than integer counts.
dtype : type, optional
    Type of the matrix returned by fit_transform() or transform().

Attributes:

vocabulary_ : dict
    A mapping of terms to feature indices.
stop_words_ : set
    Terms that were ignored because they occurred in too many documents (max_df), occurred in too few documents (min_df), or were cut off by feature selection (max_features). Only available if no vocabulary was given.
See also:

HashingVectorizer, TfidfVectorizer

Notes:

The stop_words_ attribute can get large and increase the model size when pickling. This attribute is provided only for introspection and can be safely removed using delattr or set to None before pickling.
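For example, a minimal sketch of shrinking a fitted vectorizer before pickling (the two-document corpus here is purely illustrative):

>>> import pickle
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> docs = ['first sample text', 'second sample text']
>>> vec = CountVectorizer().fit(docs)
>>> vec.stop_words_ = None        # or: delattr(vec, 'stop_words_')
>>> payload = pickle.dumps(vec)   # smaller pickle; transform() still works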
Examples:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> corpus = [
...     'This is the first document.',
...     'This document is the second document.',
...     'And this is the third one.',
...     'Is this the first document?',
... ]
>>> vectorizer = CountVectorizer()
>>> X = vectorizer.fit_transform(corpus)
>>> print(vectorizer.get_feature_names())
['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
>>> print(X.toarray())
[[0 1 1 1 0 0 1 0 1]
 [0 2 0 1 0 1 1 0 1]
 [1 0 0 1 1 0 1 1 1]
 [0 1 1 1 0 0 1 0 1]]
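A further sketch on the same corpus: with ngram_range=(2, 2) the analyzer extracts word bigrams, so the vocabulary (and hence the number of features) grows accordingly:

>>> vectorizer2 = CountVectorizer(analyzer='word', ngram_range=(2, 2))
>>> X2 = vectorizer2.fit_transform(corpus)
>>> print(vectorizer2.get_feature_names())
['and this', 'document is', 'first document', 'is the', 'is this', 'second document', 'the first', 'the second', 'the third', 'third one', 'this document', 'this is', 'this the']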
Methods:

build_analyzer() | Return a callable that handles preprocessing and tokenization
build_preprocessor() | Return a function to preprocess the text before tokenization
build_tokenizer() | Return a function that splits a string into a sequence of tokens
decode(doc) | Decode the input into a string of unicode symbols
fit(raw_documents[, y]) | Learn a vocabulary dictionary of all tokens in the raw documents.
fit_transform(raw_documents[, y]) | Learn the vocabulary dictionary and return term-document matrix.
get_feature_names() | Array mapping from feature integer indices to feature name
get_params([deep]) | Get parameters for this estimator.
get_stop_words() | Build or fetch the effective stop words list
inverse_transform(X) | Return terms per document with nonzero entries in X.
set_params(**params) | Set the parameters of this estimator.
transform(raw_documents) | Transform documents to document-term matrix.
__init__(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\b\w\w+\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>)
build_analyzer()
Return a callable that handles preprocessing and tokenization
build_preprocessor()
Return a function to preprocess the text before tokenization
build_tokenizer()
Return a function that splits a string into a sequence of tokens
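To see how these three pieces compose, a small sketch with default settings (lowercasing preprocessor, the default token_pattern tokenizer, and the word analyzer that chains them):

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> vec = CountVectorizer()
>>> vec.build_preprocessor()('Hello, World!')   # lowercases only, by default
'hello, world!'
>>> vec.build_tokenizer()('hello, world!')      # default pattern keeps 2+ character words
['hello', 'world']
>>> vec.build_analyzer()('Hello, World!')       # preprocess + tokenize (+ n-grams)
['hello', 'world']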
decode(doc)
Decode the input into a string of unicode symbols
The decoding strategy depends on the vectorizer parameters.
Parameters:

doc : string
    The string to decode.
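For instance, with the default input='content' and encoding='utf-8', byte strings are decoded with the configured encoding (a minimal sketch):

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> vec = CountVectorizer(encoding='utf-8')
>>> vec.decode(b'caf\xc3\xa9')   # bytes are decoded; str inputs pass through unchanged
'café'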
fit(raw_documents, y=None)
Learn a vocabulary dictionary of all tokens in the raw documents.
Parameters:

raw_documents : iterable
    An iterable which yields either str, unicode or file objects.

Returns:

self
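A minimal sketch; fit returns the vectorizer itself, with the learned mapping in vocabulary_:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> vec = CountVectorizer().fit(['one two two'])   # fit returns self, so calls can chain
>>> sorted(vec.vocabulary_)
['one', 'two']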
fit_transform(raw_documents, y=None)
Learn the vocabulary dictionary and return term-document matrix.
This is equivalent to fit followed by transform, but more efficiently implemented.
Parameters:

raw_documents : iterable
    An iterable which yields either str, unicode or file objects.

Returns:

X : sparse matrix, [n_samples, n_features]
    Document-term matrix.
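A minimal sketch; the single call below learns the vocabulary and builds the matrix in one pass, matching what fit(...).transform(...) would produce:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> docs = ['apple banana', 'banana banana']
>>> X = CountVectorizer().fit_transform(docs)   # one pass instead of fit + transform
>>> print(X.toarray())                          # columns: 'apple', 'banana'
[[1 1]
 [0 2]]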
get_feature_names()
Array mapping from feature integer indices to feature name
get_params(deep=True)
Get parameters for this estimator.
Parameters:

deep : boolean, optional
    If True, return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any
    Parameter names mapped to their values.
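A minimal sketch; constructor arguments come back under their parameter names:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> params = CountVectorizer(ngram_range=(1, 2)).get_params()
>>> params['ngram_range']
(1, 2)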
get_stop_words()
Build or fetch the effective stop words list
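A minimal sketch; with stop_words='english' the built-in list is returned, while the default None yields no list:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> 'and' in CountVectorizer(stop_words='english').get_stop_words()
True
>>> CountVectorizer().get_stop_words() is None
True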
inverse_transform(X)
Return terms per document with nonzero entries in X.
Parameters:

X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Returns:

X_inv : list of arrays, len = n_samples
    List of arrays of terms.
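A minimal sketch; each row of X maps back to the terms with nonzero counts in that document:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> vec = CountVectorizer()
>>> X = vec.fit_transform(['apple banana', 'banana cherry'])
>>> [list(terms) for terms in vec.inverse_transform(X)]
[['apple', 'banana'], ['banana', 'cherry']]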
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
Returns:

self
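For instance, a sketch with a hypothetical two-step pipeline, where the vectorizer's parameter is addressed as vect__ngram_range:

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> from sklearn.linear_model import LogisticRegression
>>> pipe = Pipeline([('vect', CountVectorizer()), ('clf', LogisticRegression())])
>>> pipe = pipe.set_params(vect__ngram_range=(1, 2))   # <component>__<parameter>
>>> pipe.get_params()['vect__ngram_range']
(1, 2)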
transform(raw_documents)
Transform documents to document-term matrix.
Extract token counts out of raw text documents using the vocabulary fitted with fit or the one provided to the constructor.
Parameters:

raw_documents : iterable
    An iterable which yields either str, unicode or file objects.

Returns:

X : sparse matrix, [n_samples, n_features]
    Document-term matrix.
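A minimal sketch; terms absent from the fitted vocabulary are silently ignored at transform time:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> vec = CountVectorizer().fit(['apple banana'])
>>> X = vec.transform(['banana banana kiwi'])   # 'kiwi' was never seen during fit
>>> print(X.toarray())                          # columns: 'apple', 'banana'
[[0 2]]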