# CSQA GPT2-Large Context-Aware Model
This is a GPT2-large-based model fine-tuned for the CommonsenseQA (CSQA) task with context-aware capabilities.
## Model Architecture
This is a multi-component model (see the dataflow sketch after this list) that includes:
- Encoder Model: GPT2-large based encoder with adapter layers
- Latent Model: GPT2-large based latent representation model with adapter layers
- Decoder Model: GPT2-large based decoder with adapter layers
- Projection Layers: Linear projections between encoder-latent and latent-decoder components
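The card does not document how these components are wired together, but the listed pieces suggest an encode, project, refine, project, decode dataflow. The following is a minimal sketch under that assumption; the function name `context_aware_forward`, the keyword arguments, and the `.last_hidden_state`/`.logits` attributes assume Hugging Face `GPT2Model`/`GPT2LMHeadModel`-style components and are illustrative, not this model's verified API.

```python
import torch
import torch.nn as nn

def context_aware_forward(
    encoder: nn.Module,
    latent_model: nn.Module,
    decoder: nn.Module,
    enc_to_latent: nn.Linear,
    latent_to_dec: nn.Linear,
    input_ids: torch.Tensor,
) -> torch.Tensor:
    """Route input tokens through encoder -> latent model -> decoder."""
    # Encode the question/context into hidden states.
    enc_hidden = encoder(input_ids=input_ids).last_hidden_state
    # Map encoder states into the latent model's embedding space.
    latent_in = enc_to_latent(enc_hidden)
    # Refine the representation with the latent model.
    latent_hidden = latent_model(inputs_embeds=latent_in).last_hidden_state
    # Map latent states into the decoder's embedding space.
    dec_in = latent_to_dec(latent_hidden)
    # Decode to next-token logits over the GPT2 vocabulary.
    return decoder(inputs_embeds=dec_in).logits
```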
## File Structure
- `encoder.pt` / `encoder_model/`: Encoder component weights and configuration
- `latent_model.pt` / `latent_model/`: Latent model component weights and configuration
- `decoder.pt` / `decoder_model/`: Decoder component weights and configuration
- `encoder_to_latent_model_proj.pt`: Projection layer from encoder to latent model
- `latent_model_to_decoder_proj.pt`: Projection layer from latent model to decoder
- `tokenizer/`: GPT2 tokenizer files
- `config.json`: Model configuration
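A hedged loading sketch for these files, assuming each `.pt` file was saved as a whole module with `torch.save`; if they are bare state dicts instead, instantiate the components from the `*_model/` configurations first and call `load_state_dict`.

```python
import torch
from transformers import GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumption: each .pt file holds a pickled nn.Module (not a bare state_dict).
encoder = torch.load("encoder.pt", map_location=device)
latent_model = torch.load("latent_model.pt", map_location=device)
decoder = torch.load("decoder.pt", map_location=device)
enc_to_latent = torch.load("encoder_to_latent_model_proj.pt", map_location=device)
latent_to_dec = torch.load("latent_model_to_decoder_proj.pt", map_location=device)

# The repo ships standard GPT2 tokenizer files.
tokenizer = GPT2Tokenizer.from_pretrained("tokenizer/")

for module in (encoder, latent_model, decoder, enc_to_latent, latent_to_dec):
    module.eval()
```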
## Usage
This model was trained for the CommonsenseQA task and includes specialized components for context-aware reasoning.
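Putting the pieces together, here is a hedged end-to-end sketch that reuses the modules loaded above and the `context_aware_forward` helper from the architecture section. The prompt format is illustrative only: the card does not specify the CSQA prompt template or decoding strategy used in training.

```python
# Illustrative CSQA-style prompt; the actual training template is not documented.
question = (
    "Where would you find a bald eagle? "
    "(a) colorado (b) captivity (c) countryside (d) sky (e) canada\nAnswer:"
)
input_ids = tokenizer(question, return_tensors="pt").input_ids.to(device)

with torch.no_grad():
    logits = context_aware_forward(
        encoder, latent_model, decoder, enc_to_latent, latent_to_dec, input_ids
    )

# Greedy next-token pick as a stand-in for full answer decoding.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_token_id]))
```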
## Training
The model was trained in multiple stages on the CommonsenseQA dataset, incorporating context-aware mechanisms to improve reasoning capabilities.