Roleplay LoRA trained on LLaMA-7B in 4-bit mode, for 3 epochs.

Uses the https://github.com/teknium1/GPTeacher/tree/main/Roleplay dataset.

Training in 4-bit is very fast: it took only half an hour on an RTX 3090. Evaluation against the dataset gave a perplexity in the 3.x range.
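For reference, perplexity is the exponential of the mean per-token negative log-likelihood (cross-entropy, in nats), so a perplexity in the 3.x range corresponds to a mean loss of roughly 1.1–1.4. A minimal sketch of the computation, using hypothetical per-token losses rather than numbers from this eval:

```python
import math

def perplexity(nll_per_token):
    """Perplexity = exp(mean negative log-likelihood), losses in nats."""
    return math.exp(sum(nll_per_token) / len(nll_per_token))

# Hypothetical per-token cross-entropy losses from an eval pass:
losses = [1.2, 1.1, 1.3, 1.0]
print(round(perplexity(losses), 3))  # mean NLL of 1.15 -> perplexity ~3.158
```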