---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Job Skill Extractor - Fine-Tuned Llama Model

## Key Features

- **Fast Training and Inference**: Trained 2x faster using Unsloth's optimizations.
- **Base Model**: `unsloth/Llama-3.2-3B-Instruct`.
- **Language Support**: English.
- **License**: Apache 2.0.

---

## Intended Use

This model is designed to assist in:

- Extracting required job skills from job descriptions and titles.
- Automating job-skill matching for HR applications.
- Enabling intelligent job posting analysis in recruitment systems.

---

## Usage Example

Below is an example of running inference with the Unsloth library:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "batuhanmtl/job-skill-extractor-llama3.2",
    max_seq_length = 8000,
    dtype = None,          # auto-detect dtype (float16 / bfloat16)
    load_in_4bit = False,
)

# Enable faster inference
FastLanguageModel.for_inference(model)

# Example job posting (replace with your own data)
job_title = "Senior Data Scientist"
job_description = "We are looking for a data scientist with experience in Python, SQL, and machine learning..."

# Build the prompt from the job title and description
prompt_template = f"""
##### JOB TITLE #####
{job_title}

##### JOB DESCRIPTION #####
{job_description}
"""

# Tokenize the input using the chat template
messages = [
    {"role": "user", "content": prompt_template}
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,
    return_tensors = "pt",
).to("cuda")

# Generate the skill list
outputs = model.generate(
    input_ids = inputs,
    max_new_tokens = 150,
    use_cache = True,
    temperature = 1.5,
    min_p = 0.1,
)

# Extract the assistant's response from the decoded output
start_token = "<|start_header_id|>assistant<|end_header_id|>"
end_token = "<|eot_id|>"
output = tokenizer.batch_decode(outputs)[0]
skill_list = output.split(start_token)[1].split(end_token)[0].strip()
print(skill_list)
```

---

## Model Overview

This fine-tuned Llama model, `batuhanmtl/job-skill-extractor-llama3.2`, is optimized for extracting relevant job skills from job titles and descriptions. It was trained 2x faster using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL (Transformer Reinforcement Learning) library. The model leverages the efficiency of the `unsloth/Llama-3.2-3B-Instruct` base model to provide fast and accurate text generation.

![Unsloth](https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png)

---
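## Fine-Tuning Sketch (Illustrative)

The overview above notes that the model was trained with Unsloth and TRL. For reference, the sketch below follows the typical Unsloth + TRL LoRA fine-tuning pattern; it is **not** the exact script used for this checkpoint. The dataset file (`job_skill_dataset.jsonl`), LoRA hyperparameters, and training arguments are assumptions for illustration. Depending on your TRL version, `dataset_text_field` and `max_seq_length` may need to be passed through `trl.SFTConfig` instead of directly to `SFTTrainer`.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# NOTE: illustrative sketch only -- the dataset path, LoRA settings, and
# training arguments below are assumptions, not the recipe actually used
# to produce batuhanmtl/job-skill-extractor-llama3.2.

max_seq_length = 8000

# Load the base model in 4-bit for memory-efficient LoRA fine-tuning
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-3B-Instruct",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = True,
)

# Attach LoRA adapters (Unsloth patches these for the faster training path)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
)

# Hypothetical dataset: each row has a "text" field containing a chat-formatted
# job title + description prompt followed by the expected skill list.
dataset = load_dataset("json", data_files = "job_skill_dataset.jsonl", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,
        learning_rate = 2e-4,
        logging_steps = 10,
        output_dir = "outputs",
    ),
)

trainer.train()
```

After training, the LoRA adapters can be merged and pushed to the Hub with the usual `save_pretrained` / `push_to_hub` utilities.

---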