FDR Merged Model
This is a standalone conversational model created by merging the LoRA adapter into its base model:
- Base Model: microsoft/DialoGPT-small
- LoRA Adapter: PhillyMac/fdr-lora-kaggle-v1
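The merge itself can be reproduced with peft's merge_and_unload, which folds the adapter's weight deltas into the base model. A minimal sketch (the function name and output directory are assumptions, not part of this repo):

```python
def merge_lora(base_id="microsoft/DialoGPT-small",
               adapter_id="PhillyMac/fdr-lora-kaggle-v1",
               out_dir="fdr-merged"):
    # Imports deferred so the sketch can be defined without peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Load the base model, attach the LoRA adapter, and fold the
    # low-rank updates into the base weights.
    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)
    merged = model.merge_and_unload()

    # Save the standalone merged model together with the base tokenizer.
    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)
    return merged
```

The result no longer depends on peft at inference time, which is what makes the merged repo loadable with plain transformers.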
Usage
The model can be used directly with the Hugging Face InferenceClient or with transformers:
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the merged model and tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("PhillyMac/fdr-merged-final")
model = AutoModelForCausalLM.from_pretrained("PhillyMac/fdr-merged-final")

# The model expects the instruction/response prompt format used during training
prompt = "### Instruction:\nWhat makes an effective leader?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True is required for temperature to take effect;
# pad_token_id avoids the missing-pad-token warning on GPT-2-style models
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
Or with InferenceClient:
from huggingface_hub import InferenceClient

# Point the client at the hosted model; replace "your_token"
# with a Hugging Face access token
client = InferenceClient(model="PhillyMac/fdr-merged-final", token="your_token")
response = client.text_generation(
    "What makes an effective leader?",
    max_new_tokens=200,
    temperature=0.7,
)
Model Details
- Combines the conversational abilities of DialoGPT-small with FDR's leadership voice
- Trained on curated FDR leadership content
- Ready for use as the generator in RAG systems
- Compatible with the Hugging Face Inference API
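For RAG use, retrieved passages can be folded into the same instruction format the model was trained on. A minimal sketch (the helper name and template wording are assumptions):

```python
def build_rag_prompt(question, passages):
    # Join retrieved passages into a context block, then wrap the
    # question in the "### Instruction: / ### Response:" format
    # the merged model expects.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "### Instruction:\n"
        "Using the following context, answer the question.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        "### Response:\n"
    )

prompt = build_rag_prompt(
    "What makes an effective leader?",
    ["FDR's fireside chats emphasized candor.",
     "He framed crises as shared challenges."],
)
```

The resulting string can be passed straight to either of the generation snippets above.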