Update whisper-large-v3-turbo-coreml-fp16/README.md
    	
whisper-large-v3-turbo-coreml-fp16/README.md (CHANGED)

@@ -6,7 +6,7 @@ Core ML export of `openai/whisper-large-v3-turbo` tuned for Apple Silicon. This

- `DecoderFull.mlpackage` – full-context fallback
- `DecoderStateful.mlpackage` – experimental MLState variant (requires macOS 15+)

Tokenizers, mel filters, and metadata are included so the bundle can be dropped directly into any Core ML-driven Whisper integration.
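
The `DecoderStateful.mlpackage` variant above targets the stateful-prediction path Core ML added in macOS 15 (`MLState`). A minimal sketch of how such a decoder might be driven is below; it is not taken from this model card, and the feature names (`token_ids`, `encoder_hidden_states`, `logits`), the shapes, and the start token are assumptions to check against the model's `modelDescription`.

```swift
import CoreML
import Foundation

if #available(macOS 15.0, *) {
    // Hypothetical local path to the bundle.
    let bundleURL = URL(fileURLWithPath: "/path/to/whisper-large-v3-turbo-coreml-fp16")
    let statefulURL = bundleURL.appendingPathComponent("DecoderStateful.mlpackage")

    // An .mlpackage loaded at runtime is compiled to .mlmodelc before use.
    let compiledURL = try MLModel.compileModel(at: statefulURL)
    let decoder = try MLModel(contentsOf: compiledURL)

    // MLState keeps the decoder's KV cache on the Core ML side between calls,
    // so each step only feeds the newest token.
    let state = decoder.makeState()

    // Assumed shapes: one token per step, 1500 x 1280 encoder features for large-v3-turbo.
    let tokenIDs = try MLMultiArray(shape: [1, 1], dataType: .int32)
    tokenIDs[0] = 50258                                   // <|startoftranscript|>
    let encoderFeatures = try MLMultiArray(shape: [1, 1500, 1280], dataType: .float16)

    let step = try MLDictionaryFeatureProvider(dictionary: [
        "token_ids": MLFeatureValue(multiArray: tokenIDs),
        "encoder_hidden_states": MLFeatureValue(multiArray: encoderFeatures)
    ])
    let stepOutput = try decoder.prediction(from: step, using: state)
    _ = stepOutput.featureValue(for: "logits")?.multiArrayValue
}
```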

## Contents

@@ -32,22 +32,23 @@ preprocessor_config.json

## Quick Start (Swift)

```swift
let bundleURL = URL(fileURLWithPath: "/path/to/whisper-large-v3-turbo-coreml-fp16")
let encoderURL = bundleURL.appendingPathComponent("Encoder.mlpackage")
let decoderURL = bundleURL.appendingPathComponent("DecoderWithCache.mlpackage")

let encoder = try MLModel(contentsOf: encoderURL)
let decoder = try MLModel(contentsOf: decoderURL)
// Plug these into your transcription pipeline together with the tokenizer assets
// located alongside the mlpackages.
```
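
Continuing the snippet above, a sketch of actually invoking the encoder follows. It is not part of the bundle's documented interface: the feature names (`melspectrogram`, `encoder_output`) and the `[1, 128, 3000]` mel layout are assumptions to verify against `modelDescription`, and an `.mlpackage` loaded at runtime (rather than compiled into the app by Xcode) generally needs an `MLModel.compileModel(at:)` pass first.

```swift
import CoreML

// `encoderURL` comes from the Quick Start snippet above.
let compiledEncoderURL = try MLModel.compileModel(at: encoderURL)   // .mlpackage -> .mlmodelc
let config = MLModelConfiguration()
config.computeUnits = .all                                          // let Core ML schedule ANE/GPU/CPU
let encoderModel = try MLModel(contentsOf: compiledEncoderURL, configuration: config)

// The large-v3 family uses 128 mel bins over 3000 frames (30 s of audio);
// the exact input layout here is an assumption.
let mel = try MLMultiArray(shape: [1, 128, 3000], dataType: .float16)

let encoderInput = try MLDictionaryFeatureProvider(dictionary: [
    "melspectrogram": MLFeatureValue(multiArray: mel)               // assumed input name
])
let encoderOutput = try encoderModel.prediction(from: encoderInput)
let audioFeatures = encoderOutput.featureValue(for: "encoder_output")?.multiArrayValue  // assumed output name
```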

## Quick Start (Python parity check)

```bash
python3 scripts/verify_coreml_whisper.py \
  --audio /path/to/audio.wav \
  --coreml-dir /path/to/whisper-large-v3-turbo-coreml-fp16 \
  --max-new-tokens 64
```

## Performance Snapshot
