Model Card for UIGEN-T1.1
New and improved reasoning traces, better UI generation, smarter design decisions, and better code generation. Trained on a dataset of 700+ examples.

Use budget forcing: append the word "think" to the end of the assistant generation to keep the model reasoning, and append "answer" to make it write the code.

SFT on a single RTX 4090 for 4 hours.
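As a sketch, budget forcing can be applied between generation passes. The helper below is illustrative only (not part of the model's API); it appends the appropriate control tag so the next generation pass either keeps thinking or writes the answer:

```python
# Illustrative budget-forcing helper (hypothetical; not part of the released model).
# If generation stops early, append a control tag to steer the next pass:
# "think" to continue reasoning, "answer" to start writing code.

THINK_TAG = "<|im_start|>think"
ANSWER_TAG = "<|im_start|>answer"

def budget_force(generated: str, want_more_thinking: bool) -> str:
    """Build a continuation prompt that forces more reasoning or the answer."""
    text = generated.rstrip()
    # Remove a trailing end-of-turn marker so the model keeps generating
    if text.endswith("<|im_end|>"):
        text = text[: -len("<|im_end|>")]
    tag = THINK_TAG if want_more_thinking else ANSWER_TAG
    return text + "\n" + tag + "\n"
```

Feed the returned string back to the model as the new prompt for another generation pass.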
Model Summary
UIGEN-T1.1 is a 14-billion-parameter transformer model fine-tuned from Qwen2.5-Coder-14B-Instruct. It is designed for reasoning-based UI generation, using a chain-of-thought approach to produce robust HTML- and CSS-based UI components. It is currently limited to basic applications such as dashboards, landing pages, and sign-up forms.
Model Details
Model Description
UIGEN-T1.1 generates HTML and CSS-based UI layouts by reasoning through design principles. While it has a strong chain-of-thought reasoning process, it is currently limited to text-based UI elements and simpler frontend applications. The model excels at dashboards, landing pages, and sign-up forms, but lacks advanced interactivity (e.g., JavaScript-heavy functionalities).
- Developed by: smirki
- Shared by: smirki
- Model type: Transformer-based
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: Qwen2.5-Coder-14B-Instruct
 
Model Sources
- Repository: (will be uploaded to GitHub soon)
- Hosted on: Hugging Face
- Demo: coming soon
 
Uses
Direct Use
- Generates HTML and CSS code for basic UI elements
- Best suited for dashboards, landing pages, and sign-up forms
- Requires manual post-processing to refine UI outputs
- May require appending the word "answer" to the input prompt for better inference
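Since outputs usually need manual post-processing, a quick sanity check on the generated markup helps triage them. A minimal sketch using only Python's standard library (not part of the model's tooling):

```python
from html.parser import HTMLParser

# Void elements never get a closing tag, so they should not count as "open"
VOID = {"br", "hr", "img", "input", "meta", "link"}

class TagBalanceChecker(HTMLParser):
    """Rough check that every opened tag is eventually closed."""

    def __init__(self):
        super().__init__()
        self.stack = []
        self.balanced = True

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in VOID:
            return
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.balanced = False

def is_roughly_balanced(html: str) -> bool:
    """Return True if the markup's open/close tags roughly balance."""
    checker = TagBalanceChecker()
    checker.feed(html)
    return checker.balanced and not checker.stack
```

This is a heuristic triage step, not a validator; malformed but balanced markup will still pass.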
 
Downstream Use (optional)
- Can be fine-tuned further for specific frontend frameworks (React, Vue, etc.)
- May be integrated into no-code/low-code UI generation tools
 
Out-of-Scope Use
- Not suitable for complex frontend applications involving JavaScript-heavy interactions
- May not generate fully production-ready UI code
- Limited design variety: biased towards basic frontend layouts
 
Bias, Risks, and Limitations
Biases
- Strong bias towards basic frontend design patterns (may not generate creative or advanced UI layouts)
- May produce repetitive designs due to limited training scope
 
Limitations
- Artifacting issues: some outputs may contain formatting artifacts
- Limited generalization: performs best at HTML + CSS UI generation; not robust for complex app logic
- May require prompt engineering (e.g., appending "answer" to the input for better results)
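For the artifacting issue, a small cleanup pass often suffices. A sketch that strips stray markdown code fences sometimes left around generated markup (one common artifact; the exact patterns vary):

```python
import re

def strip_fence_artifacts(text: str) -> str:
    """Remove stray markdown code fences (```html ... ```) around generated markup."""
    # Drop fence lines: opening fences with an optional language tag, and bare closers
    text = re.sub(r"^```[a-zA-Z]*\s*$", "", text, flags=re.MULTILINE)
    return text.strip()
```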
 
How to Get Started with the Model
Example Model Template
<|im_start|>user
{question}<|im_end|>
<|im_start|>assistant
<|im_start|>think
{reasoning}<|im_end|>
<|im_start|>answer
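As a sketch, the template above can be filled programmatically for inference. The `build_prompt` helper is illustrative, not part of the released tokenizer; for inference you stop after the "think" tag and let the model generate the reasoning and answer sections itself:

```python
def build_prompt(question: str) -> str:
    """Fill the chat template above, stopping at the open "think" tag."""
    return (
        "<|im_start|>user\n"
        f"{question}<|im_end|>\n"
        "<|im_start|>assistant\n"
        "<|im_start|>think\n"
    )
```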
Basic Inference Code
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "smirki/UIGEN-T1.1-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")

prompt = """<|im_start|>user
Make a dark-themed dashboard for an oil rig.<|im_end|>
<|im_start|>assistant
<|im_start|>think
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# max_new_tokens must be greater than 12k so the full reasoning trace and answer fit
outputs = model.generate(**inputs, max_new_tokens=12012, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
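Since the decoded text contains both the reasoning trace and the answer, a small post-processing step can split off the generated code. This is a sketch; the marker handling assumes the template shown above, and with skip_special_tokens=True the tag tokens may already be stripped:

```python
def split_reasoning_and_answer(decoded: str):
    """Split a decoded generation into (reasoning, answer) on the answer marker.

    Falls back to the bare word "answer" on its own line in case the
    <|im_start|> markers were stripped during decoding.
    """
    for marker in ("<|im_start|>answer", "\nanswer\n"):
        if marker in decoded:
            reasoning, _, answer = decoded.partition(marker)
            return reasoning.strip(), answer.strip()
    return decoded.strip(), ""  # no answer section found
```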
Training Details
Training Data
- Based on: Qwen2.5-Coder-14B-Instruct
- Fine-tuned on: UI-related datasets with reasoning-based HTML/CSS examples
 
Training Procedure
- Preprocessing: standard text tokenization using Hugging Face transformers
- Training precision: bf16 mixed precision, quantized to q8
 
Evaluation
Testing Data, Factors & Metrics
- Testing data: internal UI design-related datasets
- Evaluation factors: bias towards basic UI components, robustness in reasoning, output quality
- Metrics: subjective evaluation based on UI structure, correctness, and usability
 
Results
- Strengths:
  - Good at reasoning-based UI layouts
  - Generates structured and valid HTML/CSS
- Weaknesses:
  - Limited design diversity
  - Artifacting in outputs
 
 
Technical Specifications
Model Architecture and Objective
- Architecture: Transformer-based LLM fine-tuned for UI reasoning
- Objective: generate robust frontend UI layouts with chain-of-thought reasoning
 
Compute Infrastructure
- Hardware requirements: 12GB VRAM recommended
- Software requirements:
  - Transformers library (Hugging Face)
  - PyTorch
 
 
Citation
If you use this model, please cite:

BibTeX:

@misc{smirki_UIGEN-T1.1,
  title={UIGEN-T1.1: Chain-of-Thought UI Generation Model},
  author={smirki},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/smirki/UIGEN-T1.1-14B}
}
More Information
- GitHub repository: (coming soon)
- Web demo: (coming soon)
 
Model Card Authors
- Author: smirki
 
Model Card Contact
- Contact: smirki on Hugging Face
 