Custom Doc2Vec Embeddings
This repository contains custom word and document embeddings trained with Gensim's Doc2Vec implementation.
Model details
- Trained using Gensim's Doc2Vec implementation
- Includes custom n-grams as tokens
- Vector size: 256
- Context window: 5
- Training algorithm: skip-gram (corresponding to PV-DBOW in Doc2Vec)
 
Doc2Vec Model Components
This folder contains all components of the Doc2Vec model:
- doc2vec_model.model: the complete model, loadable with Doc2Vec.load()
- model_parameters.pkl: dictionary of all model parameters
- word_vocabulary.pkl: dictionary mapping words to indices
- word_vectors.npy: NumPy array of word vectors
- word_list.pkl: list of words corresponding to word_vectors.npy
- doc_vectors.npy: NumPy array of document vectors (if available)
- doc_tags.pkl: list of document tags corresponding to doc_vectors.npy
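
If Gensim is not available, the exported arrays can be used directly. A minimal sketch assuming the file layout above; the helper names and the looked-up word are illustrative, not part of the export:

```python
import pickle
import numpy as np

def load_components(folder="."):
    # word -> index mapping and the matching vector matrix
    with open(f"{folder}/word_vocabulary.pkl", "rb") as f:
        vocab = pickle.load(f)
    vectors = np.load(f"{folder}/word_vectors.npy")
    return vocab, vectors

def word_vector(word, vocab, vectors):
    # Look up one 256-dimensional word vector by its vocabulary index
    return vectors[vocab[word]]
```

doc_vectors.npy and doc_tags.pkl can be paired the same way when present.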
 
To load the model:

```python
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load("doc2vec_model.model")
```