# CSQA GPT2-Large Context-Aware Model
This model is a GPT2-large based model fine-tuned for the CommonsenseQA (CSQA) task with context-aware capabilities.
## Model Architecture
This is a multi-component model that includes the following (a wiring sketch follows the list):
- **Encoder Model**: GPT2-large based encoder with adapter layers
- **Latent Model**: GPT2-large based latent representation model with adapter layers
- **Decoder Model**: GPT2-large based decoder with adapter layers
- **Projection Layers**: Linear projections between encoder-latent and latent-decoder components
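A minimal sketch of how the pieces could fit together. Everything here is an assumption: the class name `ContextAwareCSQAModel`, the plain `GPT2Model` backbones, and the square linear projections are illustrative (GPT2-large's hidden size is 1280), and the adapter layers are omitted because their placement is not documented here.

```python
import torch.nn as nn
from transformers import GPT2Model

HIDDEN_SIZE = 1280  # hidden size of gpt2-large


class ContextAwareCSQAModel(nn.Module):
    """Hypothetical wiring of the three GPT2-large components.

    Only shows how the projection layers connect
    encoder -> latent model -> decoder; adapters and the actual
    forward logic of the released checkpoint are not reproduced.
    """

    def __init__(self):
        super().__init__()
        self.encoder = GPT2Model.from_pretrained("gpt2-large")
        self.latent_model = GPT2Model.from_pretrained("gpt2-large")
        self.decoder = GPT2Model.from_pretrained("gpt2-large")
        # Linear projections bridging the component boundaries.
        self.encoder_to_latent_proj = nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE)
        self.latent_to_decoder_proj = nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE)

    def forward(self, input_ids, attention_mask=None):
        # Encode the question/context.
        enc_hidden = self.encoder(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Project encoder states into the latent model's space.
        latent_hidden = self.latent_model(
            inputs_embeds=self.encoder_to_latent_proj(enc_hidden),
            attention_mask=attention_mask,
        ).last_hidden_state
        # Project latent states into the decoder's space and decode.
        return self.decoder(
            inputs_embeds=self.latent_to_decoder_proj(latent_hidden),
            attention_mask=attention_mask,
        ).last_hidden_state
```

The separate projection layers suggest that each component operates in its own representation space, with the linear maps translating hidden states across component boundaries.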
## File Structure
- `encoder.pt` / `encoder_model/`: Encoder component weights and configuration
- `latent_model.pt` / `latent_model/`: Latent model component weights and configuration
- `decoder.pt` / `decoder_model/`: Decoder component weights and configuration
- `encoder_to_latent_model_proj.pt`: Projection layer from encoder to latent model
- `latent_model_to_decoder_proj.pt`: Projection layer from latent model to decoder
- `tokenizer/`: GPT2 tokenizer files
- `config.json`: Model configuration
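A minimal loading sketch, assuming the `.pt` files are ordinary `torch.save()` checkpoints and the repository root is the working directory. Whether each file holds a full module or a `state_dict` depends on how the checkpoint was exported, so inspect the loaded objects before use.

```python
import torch
from transformers import GPT2Tokenizer

# On PyTorch >= 2.6 you may need weights_only=False if the files
# contain pickled full modules rather than plain state dicts.
encoder = torch.load("encoder.pt", map_location="cpu")
latent_model = torch.load("latent_model.pt", map_location="cpu")
decoder = torch.load("decoder.pt", map_location="cpu")
enc_to_latent_proj = torch.load("encoder_to_latent_model_proj.pt", map_location="cpu")
latent_to_dec_proj = torch.load("latent_model_to_decoder_proj.pt", map_location="cpu")

tokenizer = GPT2Tokenizer.from_pretrained("tokenizer")
```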
## Usage
This model was trained for the CommonsenseQA task and relies on its specialized components for context-aware reasoning. Because the components are stored as separate files, they have to be loaded and wired together by hand; a hypothetical inference pass is sketched below.
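This sketch reuses the objects from the loading example above. The question text is only a sample CSQA-style prompt; how the decoder's hidden states are turned into an answer choice (e.g., scoring each of the five options) depends on the original training code and is not reproduced here.

```python
import torch

question = "Where would you find a jellyfish that has not been captured?"

inputs = tokenizer(question, return_tensors="pt")
with torch.no_grad():
    # Encoder -> projection -> latent model -> projection -> decoder.
    enc_hidden = encoder(**inputs).last_hidden_state
    latent_hidden = latent_model(
        inputs_embeds=enc_to_latent_proj(enc_hidden)
    ).last_hidden_state
    dec_hidden = decoder(
        inputs_embeds=latent_to_dec_proj(latent_hidden)
    ).last_hidden_state
# dec_hidden: (batch, seq_len, 1280) hidden states; mapping these to an
# answer choice requires the task head / scoring logic from training.
```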
## Training
The model was trained in multiple stages on the CommonsenseQA dataset, incorporating context-aware mechanisms to improve reasoning capabilities.