rasyosef committed on
Commit e05f7dd · verified · 1 Parent(s): 5c01cdf

Add new SparseEncoder model

Files changed (3)
  1. README.md +108 -149
  2. config_sentence_transformers.json +1 -1
  3. model.safetensors +1 -1
README.md CHANGED
@@ -5,35 +5,26 @@ tags:
  - sparse
  - splade
  - generated_from_trainer
- - dataset_size:1200000
  - loss:SpladeLoss
  - loss:SparseMarginMSELoss
  - loss:FlopsLoss
- base_model:
- - prajjwal1/bert-small
  widget:
- - text: >-
-   Donate to the Breast Cancer Research Foundation Now BCRF is the largest
-   nonprofit funder of breast cancer research worldwide. Over the years, it has
-   raised more than half a billion dollars in support of research that has made
-   a major impact on how we view and treat breast cancer.
- - text: >-
-   Macular degeneration—Loss of central vision, blurred vision (especially
-   while reading), distorted vision (like seeing wavy lines), and colors
-   appearing faded. The most common cause of blindness in people over age 60.
-   Eye infection, inflammation, or injury.
- - text: how do i find the tongue weight of a trailer?
- - text: >-
-   Feathers (1-3) Pidgey are docile Pokémon, and generally prefer to flee from
-   their enemies rather than fight them. Pidgey's small size permits it to hide
-   easily in long grass, where it is typically found foraging for small
-   insects. It is known to flush out potential prey from long grass by flapping
-   its wings rapidly.
- - text: >-
-   10 hilariously insightful foreign words. One of the most obvious differences
-   between cognac and whiskey is that cognac makers use grapes, and whiskey
-   makers use grains. Although both processes use fermentation to create the
-   liquors, cognac makers use a double distillation process.
  pipeline_tag: feature-extraction
  library_name: sentence-transformers
  metrics:
@@ -67,90 +58,94 @@ model-index:
   type: unknown
   metrics:
   - type: dot_accuracy@1
- value: 0.5172
    name: Dot Accuracy@1
   - type: dot_accuracy@3
- value: 0.8368
    name: Dot Accuracy@3
   - type: dot_accuracy@5
- value: 0.9232
    name: Dot Accuracy@5
   - type: dot_accuracy@10
- value: 0.9762
    name: Dot Accuracy@10
   - type: dot_precision@1
- value: 0.5172
    name: Dot Precision@1
   - type: dot_precision@3
- value: 0.2866666666666667
    name: Dot Precision@3
   - type: dot_precision@5
- value: 0.1924
    name: Dot Precision@5
   - type: dot_precision@10
- value: 0.10273999999999998
    name: Dot Precision@10
   - type: dot_recall@1
- value: 0.5006
    name: Dot Recall@1
   - type: dot_recall@3
- value: 0.8237833333333332
    name: Dot Recall@3
   - type: dot_recall@5
- value: 0.91535
    name: Dot Recall@5
   - type: dot_recall@10
- value: 0.9723333333333332
    name: Dot Recall@10
   - type: dot_ndcg@10
- value: 0.7553714776897319
    name: Dot Ndcg@10
   - type: dot_mrr@10
- value: 0.6876940476190507
    name: Dot Mrr@10
   - type: dot_map@100
- value: 0.6829029994536953
    name: Dot Map@100
   - type: query_active_dims
- value: 29.71980094909668
    name: Query Active Dims
   - type: query_sparsity_ratio
- value: 0.9990262826502491
    name: Query Sparsity Ratio
   - type: corpus_active_dims
- value: 168.3538420216879
    name: Corpus Active Dims
   - type: corpus_sparsity_ratio
- value: 0.9944841805248121
    name: Corpus Sparsity Ratio
- license: mit
- datasets:
- - microsoft/ms_marco
- language:
- - en
 ---

- # SPLADE-BERT-Small-Distil

- This is a SPLADE sparse retrieval model based on BERT-Small (29M) that was trained by distilling a Cross-Encoder on the MSMARCO dataset. The cross-encoder used was [ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2).

- This SPLADE model is `2x` smaller than Naver's official `splade-v3-distilbert` while retaining `91%` of its performance on the MSMARCO benchmark. The model is small enough to be used without a GPU on a dataset of a few thousand documents.

- - `Collection:` https://huggingface.co/collections/rasyosef/splade-tiny-msmarco-687c548c0691d95babf65b70
- - `Distillation Dataset:` https://huggingface.co/datasets/yosefw/msmarco-train-distil-v2
- - `Code:` https://github.com/rasyosef/splade-tiny-msmarco

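For context, MarginMSE-style teacher scores like those in the distillation dataset above are typically produced by scoring (query, passage) pairs with the cross-encoder. A minimal sketch, assuming the listed teacher and placeholder passages (not the exact training script):

```python
from sentence_transformers import CrossEncoder

# Teacher cross-encoder used for distillation (per the model card above).
teacher = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")

# Placeholder (query, passage) pairs; real training scores MS MARCO triplets.
pairs = [
    ("is cognac whisky", "Cognac is a brandy made from grapes in the Cognac region of France."),
    ("is cognac whisky", "The word whisky is an anglicisation of the Gaelic word uisce, meaning water."),
]
scores = teacher.predict(pairs)  # one relevance score per pair

# MarginMSE distillation targets are teacher margins, e.g. positive minus negative:
margin = scores[0] - scores[1]
```
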
- ## Performance

- The SPLADE models were evaluated on 55 thousand queries and 8.84 million documents from the [MSMARCO](https://huggingface.co/datasets/microsoft/ms_marco) dataset.

- ||Size (# Params)|MRR@10 (MS MARCO dev)|
- |:---|:----|:-------------------|
- |`BM25`|-|18.0|
- |`rasyosef/splade-tiny`|4.4M|30.9|
- |`rasyosef/splade-mini`|11.2M|33.2|
- |`rasyosef/splade-small`|28.8M|35.2|
- |`naver/splade-v3-distilbert`|67.0M|38.7|

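MRR@10, the metric in the table above, scores each query by the reciprocal rank of the first relevant passage within the top 10 retrieved results, averaged over all dev queries (the table reports the value times 100). A minimal sketch of the metric:

```python
def mrr_at_10(ranked_doc_ids: list[str], relevant_ids: set[str]) -> float:
    """Reciprocal rank of the first relevant document in the top 10, else 0."""
    for rank, doc_id in enumerate(ranked_doc_ids[:10], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

# The reported score is this value averaged over all queries, times 100.
```
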
 ## Usage

@@ -167,15 +162,15 @@ Then you can load this model and run inference.
 from sentence_transformers import SparseEncoder

 # Download from the 🤗 Hub
- model = SparseEncoder("rasyosef/splade-small")
 # Run inference
 queries = [
-     "is cognac whisky",
 ]
 documents = [
-     'Cognac vs Whiskey. Whiskey is the alcoholic drink made from grains whereas Cognac is the alcoholic drink made from grapes. Cognac is a type of brandy. In fact, many label it as the finest of brandies. • Cognac is the brandy originating from a wine producing region of France called Cognac. While a cognac is considered an after dinner beverage that is intended to digest food, there is no such stereotyping of whiskey that can be consumed anytime of the day.',
-     '10 hilariously insightful foreign words. One of the most obvious differences between cognac and whiskey is that cognac makers use grapes, and whiskey makers use grains. Although both processes use fermentation to create the liquors, cognac makers use a double distillation process.',
-     'The word whisky (or whiskey) is an anglicisation of the Classical Gaelic word uisce / uisge meaning water (now written as uisce in Irish Gaelic, and uisge in Scottish Gaelic). Distilled alcohol was known in Latin as aqua vitae (water of life).',
 ]
 query_embeddings = model.encode_query(queries)
 document_embeddings = model.encode_document(documents)
@@ -185,7 +180,7 @@ print(query_embeddings.shape, document_embeddings.shape)
 # Get the similarity scores for the embeddings
 similarities = model.similarity(query_embeddings, document_embeddings)
 print(similarities)
- # tensor([[22.4589, 20.5905, 10.0662]])
 ```

 <!--
@@ -212,37 +207,6 @@ You can finetune this model on your own dataset.
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->

- ## Model Details
-
- ### Model Description
- - **Model Type:** SPLADE Sparse Encoder
- - **Base model:** [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) <!-- at revision 27575d2504e7400b5ed11f94d0e162e3e7c01af6 -->
- - **Maximum Sequence Length:** 512 tokens
- - **Output Dimensionality:** 30522 dimensions
- - **Similarity Function:** Dot Product
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
-
- ### Full Model Architecture
-
- ```
- SparseEncoder(
-   (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
-   (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
- )
- ```
-
- ## More
- <details><summary>Click to expand</summary>
-

 ## Evaluation

 ### Metrics
@@ -253,25 +217,25 @@ SparseEncoder(

 | Metric | Value |
 |:----------------------|:-----------|
- | dot_accuracy@1 | 0.5172 |
- | dot_accuracy@3 | 0.8368 |
- | dot_accuracy@5 | 0.9232 |
- | dot_accuracy@10 | 0.9762 |
- | dot_precision@1 | 0.5172 |
- | dot_precision@3 | 0.2867 |
- | dot_precision@5 | 0.1924 |
- | dot_precision@10 | 0.1027 |
- | dot_recall@1 | 0.5006 |
- | dot_recall@3 | 0.8238 |
- | dot_recall@5 | 0.9153 |
- | dot_recall@10 | 0.9723 |
- | **dot_ndcg@10** | **0.7554** |
- | dot_mrr@10 | 0.6877 |
- | dot_map@100 | 0.6829 |
- | query_active_dims | 29.7198 |
- | query_sparsity_ratio | 0.999 |
- | corpus_active_dims | 168.3538 |
- | corpus_sparsity_ratio | 0.9945 |

  <!--
  ## Bias, Risks and Limitations
@@ -291,19 +255,19 @@ SparseEncoder(

 #### Unnamed Dataset

- * Size: 1,200,000 training samples
- * Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, and <code>label</code>
  * Approximate statistics based on the first 1000 samples:
- | | query | positive | negative_1 | negative_2 | label |
- |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------|
- | type | string | string | string | string | list |
- | details | <ul><li>min: 4 tokens</li><li>mean: 9.04 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 81.11 tokens</li><li>max: 215 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 77.81 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 76.2 tokens</li><li>max: 217 tokens</li></ul> | <ul><li>size: 2 elements</li></ul> |
  * Samples:
- | query | positive | negative_1 | negative_2 | label |
- |:-------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------|
- | <code>The _____________________ is a body system which consists of glands that produce hormones that act throughout the body.</code> | <code>Endocrine System. The endocrine system is made up of a group of glands that produce the body's long-distance messengers, or hormones. Hormones are chemicals that control body functions, such as metabolism, growth, and sexual development.t is made up of a group of organs that transport blood throughout the body. The heart pumps the blood and the arteries and veins transport it. Oxygen-rich blood leaves the left side of the heart and enters the biggest artery, called the aorta.</code> | <code>The endocrine system is a control system of ductless glands that secrete hormones within specific organs. Hormones act as messengers, and are carried by the bloodstream to different cells in the body, which interpret these messages and act on them.he pancreas is unusual among the body's glands in that it also has a very important endocrine function. Small groups of special cells called islet cells throughout the organ make the hormones of insulin and glucagon.</code> | <code>These glands produce different types of hormones that evoke a specific response in other cells, tissues, and/or organs located throughout the body. The hormones reach these faraway targets using the blood stream. Like the nervous system, the endocrine system is one of your body’s main communicators.he Endocrine System Essentials. 1 The endocrine system is made up of a network of glands. 2 These glands secrete hormones to regulate many bodily functions, including growth and metabolism.</code> | <code>[2.3722684383392334, 5.211579322814941]</code> |
- | <code>causes of low body temperature in adults</code> | <code>Hypothermia is defined as a body temperature (core, or internal body temperature) of less than about 95 F (35 C). Usually, hypothermia occurs when the body's temperature regulation is overwhelmed by a cold environment. However, in the medical and lay literature there are essentially two major classifications, accidental hypothermia and intentional hypothermia.</code> | <code>In general, a baby has a fever when their body temperature exceeds 100.4°F, or 38°C. A child has a fever when their temperature exceeds 99.5°F, or 37.5°C. An adult has a fever when their temperature exceeds 99 to 99.5°F, or 37.2 to 37.5°C.</code> | <code>Consequently, an accurate measurement of body temperature (best is rectal core temperature) of 100.4 F (38 C) or above is considered to be a fever.. A newer option includes a temperature-sensitive infrared device that measures the temperature in the skin by simply rubbing the sensor on the body.</code> | <code>[1.3747079372406006, 8.096447944641113]</code> |
- | <code>who is laila gifty akita</code> | <code>Lailah Gifty Akita is a Ghanaian and founder of Smart Youth Volunteers Foundation. She obtained a BSc in Renewable Natural Resources Management at Kwame Nkrumah University of Science and Technology, Kumasi-Ghana. She also had MPhil in Oceanography at the University of Ghana. She obtained a doctorate in Geosciences at International Max Planck Research School for Global Biogeochemical Cycles-Friedrich Schiller University of Jena, Germany ( June 2011 to March 2016). Lailah is an influential lady with the passion of empowering the mind of young people to make a great difference.</code> | <code>She is a PhD-student, studying Geosciences at the University of Jena, Germany. She is an enthusiastic inspirational writer. She wishes to challenge and inspire people from all walks of life to dare a greater life. You can capable of heroic deeds. Think well of yourself and act positively. You can correspond with Lailah via an email:lailah.[email protected]. https://www.goodreads.com/author/show/8297615.Lailah_Gifty_Akita/blog.</code> | <code>Also in the Talmud, the interpretation is found of rabbi Hanina ben Pappa (3rd century AD), that Lailah is an angel in charge of conception who takes a drop of semen and places it before God, saying: For R. Hanina b.</code> | <code>[2.6488447189331055, 15.058775901794434]</code> |
  * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
@@ -317,15 +281,14 @@ SparseEncoder(
 #### Non-Default Hyperparameters

 - `eval_strategy`: epoch
- - `per_device_train_batch_size`: 64
- - `per_device_eval_batch_size`: 64
 - `learning_rate`: 4e-05
 - `num_train_epochs`: 4
 - `lr_scheduler_type`: cosine
 - `warmup_ratio`: 0.025
 - `fp16`: True
 - `load_best_model_at_end`: True
- - `optim`: adamw_torch_fused
 - `push_to_hub`: True

 #### All Hyperparameters
@@ -335,8 +298,8 @@ SparseEncoder(
 - `do_predict`: False
 - `eval_strategy`: epoch
 - `prediction_loss_only`: True
- - `per_device_train_batch_size`: 64
- - `per_device_eval_batch_size`: 64
 - `per_gpu_train_batch_size`: None
 - `per_gpu_eval_batch_size`: None
 - `gradient_accumulation_steps`: 1
@@ -452,21 +415,18 @@ SparseEncoder(
 </details>

 ### Training Logs
- | Epoch | Step | Training Loss | dot_ndcg@10 |
- |:-------:|:---------:|:-------------:|:-----------:|
- | 1.0 | 18750 | 7.806 | 0.7439 |
- | 2.0 | 37500 | 5.7509 | 0.7520 |
- | **3.0** | **56250** | **4.5026** | **0.7554** |
- | 4.0 | 75000 | 3.909 | 0.7534 |
- | -1 | -1 | - | 0.7554 |

- * The bold row denotes the saved checkpoint.

 ### Framework Versions
- - Python: 3.11.13
 - Sentence Transformers: 5.1.0
- - Transformers: 4.55.2
- - PyTorch: 2.6.0+cu124
 - Accelerate: 1.10.0
 - Datasets: 4.0.0
 - Tokenizers: 0.21.4
@@ -539,5 +499,4 @@ SparseEncoder(
 ## Model Card Contact

 *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
- <details>
 
  - sparse
  - splade
  - generated_from_trainer
+ - dataset_size:800000
  - loss:SpladeLoss
  - loss:SparseMarginMSELoss
  - loss:FlopsLoss
+ base_model: yosefw/SPLADE-BERT-Small-BS256
  widget:
+ - text: leagues, define
+ - text: WATCH HOW YOU WANT. STARZ lets you stream hit original series and movies on
+   your favorite devices. Plus you can get the STARZ app on your smartphone or tablet
+   and download full movies and shows to watch off-line, anytime, anywhere. START
+   YOUR FREE TRIAL NOW.
+ - text: Furthermore, priority must be given to national jurisdiction. Pointing out
+   that States applied universal jurisdiction differently, he expressed concern at
+   the abuse of its application by some national courts, which rendered it a source
+   of international conflict.
+ - text: My sil tells me that my mil cooked the eggplant at high heat for a very long
+   time until it was almost burned. Is it possible that cooking it in such a way
+   gets rid of the bitterness? My mil bought her eggplants at the chain grocery store-
+   so this is not a freshness issue. Thanks for any ideas.
+ - text: how many tablespoons of garlic powder are in an ounce
  pipeline_tag: feature-extraction
  library_name: sentence-transformers
  metrics:
   type: unknown
   metrics:
   - type: dot_accuracy@1
+ value: 0.45475
    name: Dot Accuracy@1
   - type: dot_accuracy@3
+ value: 0.7685
    name: Dot Accuracy@3
   - type: dot_accuracy@5
+ value: 0.8785833333333334
    name: Dot Accuracy@5
   - type: dot_accuracy@10
+ value: 0.9484166666666667
    name: Dot Accuracy@10
   - type: dot_precision@1
+ value: 0.45475
    name: Dot Precision@1
   - type: dot_precision@3
+ value: 0.2634444444444444
    name: Dot Precision@3
   - type: dot_precision@5
+ value: 0.18283333333333338
    name: Dot Precision@5
   - type: dot_precision@10
+ value: 0.09981666666666668
    name: Dot Precision@10
   - type: dot_recall@1
+ value: 0.43995833333333334
    name: Dot Recall@1
   - type: dot_recall@3
+ value: 0.754263888888889
    name: Dot Recall@3
   - type: dot_recall@5
+ value: 0.867825
    name: Dot Recall@5
   - type: dot_recall@10
+ value: 0.942448611111111
    name: Dot Recall@10
   - type: dot_ndcg@10
+ value: 0.7030603243631471
    name: Dot Ndcg@10
   - type: dot_mrr@10
+ value: 0.6287952050264416
    name: Dot Mrr@10
   - type: dot_map@100
+ value: 0.625266158775858
    name: Dot Map@100
   - type: query_active_dims
+ value: 24.914167404174805
    name: Query Active Dims
   - type: query_sparsity_ratio
+ value: 0.9991837308366367
    name: Query Sparsity Ratio
   - type: corpus_active_dims
+ value: 171.85920667832903
    name: Corpus Active Dims
   - type: corpus_sparsity_ratio
+ value: 0.9943693333766356
    name: Corpus Sparsity Ratio
 ---

+ # SPLADE Sparse Encoder

+ This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [yosefw/SPLADE-BERT-Small-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Small-BS256) using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

+ ## Model Details

+ ### Model Description
+ - **Model Type:** SPLADE Sparse Encoder
+ - **Base model:** [yosefw/SPLADE-BERT-Small-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Small-BS256) <!-- at revision 43b8c4a930896cdbab236b2a46fe1b762216df1a -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 30522 dimensions
+ - **Similarity Function:** Dot Product
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->

+ ### Model Sources

+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)

+ ### Full Model Architecture

+ ```
+ SparseEncoder(
+   (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
+   (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+ )
+ ```
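
For intuition, the `MLMTransformer` + `SpladePooling` stack above corresponds to the standard SPLADE formulation: each output dimension j is scored as max over token positions i of log(1 + relu(logits[i][j])). A minimal sketch with plain transformers (assuming, as is typical for these repos, that the MLM weights sit at the root of the Hub repo):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo = "yosefw/SPLADE-BERT-Small-BS256-distil"
tokenizer = AutoTokenizer.from_pretrained(repo)
mlm = AutoModelForMaskedLM.from_pretrained(repo)

batch = tokenizer(["how many tablespoons of garlic powder are in an ounce"], return_tensors="pt")
with torch.no_grad():
    logits = mlm(**batch).logits  # (batch, seq_len, 30522)

# SpladePooling: log-saturated ReLU activations, max-pooled over token positions.
mask = batch["attention_mask"].unsqueeze(-1)  # ignore padding positions
sparse_vec = torch.amax(torch.log1p(torch.relu(logits)) * mask, dim=1)
print(sparse_vec.shape, int((sparse_vec > 0).sum()))  # 30522 dims, only a few dozen active
```
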
 ## Usage

 from sentence_transformers import SparseEncoder

 # Download from the 🤗 Hub
+ model = SparseEncoder("yosefw/SPLADE-BERT-Small-BS256-distil")
 # Run inference
 queries = [
+     "how many tablespoons of garlic powder are in an ounce",
 ]
 documents = [
+     '1 Fluid Ounce (fl oz) = 2 tablespoons 16 Tablespoons = 1 cup 16 Fluid Ounce (fl oz) = 2 cup. two ! 9.7 grams of garlic powder will be present in a tablespoon. 1 dry ounce is between 2 and 2.38 tablespoons, 16 tablespoons is incorrect. --------------------- 16 tablespoons per dry ounce. It is approximately 1/2 ounce. Usually 1/4 to 1/2 tsp.',
+     'Spices, garlic powder weigh(s) 164 gram per (metric cup) or 5.47 ounce per (US cup)',
+     'How many teaspoons of garlic powder equal a clove of garlic? Weigh the garlic clove and then weigh the garlic powder to make sure it is the same weight. That is how much powder equals a clove of garlic.',
 ]
 query_embeddings = model.encode_query(queries)
 document_embeddings = model.encode_document(documents)

 # Get the similarity scores for the embeddings
 similarities = model.similarity(query_embeddings, document_embeddings)
 print(similarities)
+ # tensor([[26.3104, 20.4381, 15.5539]])
 ```

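To inspect which vocabulary tokens are active in a sparse embedding, recent sentence-transformers releases expose a `decode` helper on `SparseEncoder`; a quick sketch (the printed pairs are illustrative, not actual model output):

```python
# Map non-zero dimensions back to vocabulary tokens with their weights.
decoded = model.decode(query_embeddings[0], top_k=10)
print(decoded)  # e.g. [("ounce", 2.3), ("garlic", 2.1), ...]
```
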
  <!--
 
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->

 ## Evaluation

 ### Metrics

 | Metric | Value |
 |:----------------------|:-----------|
+ | dot_accuracy@1 | 0.4547 |
+ | dot_accuracy@3 | 0.7685 |
+ | dot_accuracy@5 | 0.8786 |
+ | dot_accuracy@10 | 0.9484 |
+ | dot_precision@1 | 0.4547 |
+ | dot_precision@3 | 0.2634 |
+ | dot_precision@5 | 0.1828 |
+ | dot_precision@10 | 0.0998 |
+ | dot_recall@1 | 0.44 |
+ | dot_recall@3 | 0.7543 |
+ | dot_recall@5 | 0.8678 |
+ | dot_recall@10 | 0.9424 |
+ | **dot_ndcg@10** | **0.7031** |
+ | dot_mrr@10 | 0.6288 |
+ | dot_map@100 | 0.6253 |
+ | query_active_dims | 24.9142 |
+ | query_sparsity_ratio | 0.9992 |
+ | corpus_active_dims | 171.8592 |
+ | corpus_sparsity_ratio | 0.9944 |
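
The sparsity ratios follow directly from the active-dimension counts over the 30522-token vocabulary, which makes for a quick consistency check:

```python
vocab_size = 30522  # output dimensionality of the model
print(1 - 24.9142 / vocab_size)   # ~0.9992, the query_sparsity_ratio
print(1 - 171.8592 / vocab_size)  # ~0.9944, the corpus_sparsity_ratio
```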
 
 <!--
 ## Bias, Risks and Limitations

 #### Unnamed Dataset

+ * Size: 800,000 training samples
+ * Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, and <code>label</code>
  * Approximate statistics based on the first 1000 samples:
+ | | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | label |
+ |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------|
+ | type | string | string | string | string | string | string | list |
+ | details | <ul><li>min: 4 tokens</li><li>mean: 8.96 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 79.51 tokens</li><li>max: 230 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 78.09 tokens</li><li>max: 203 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 77.84 tokens</li><li>max: 215 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 76.65 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 74.67 tokens</li><li>max: 227 tokens</li></ul> | <ul><li>size: 4 elements</li></ul> |
  * Samples:
+ | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | label |
+ |:----------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------|
+ | <code>who was president during detente</code> | <code>Détente ended after the Soviet intervention in Afghanistan, which led to the United States boycott of the 1980 Olympics in Moscow. Ronald Reagan's election as president in 1980, based in large part on an anti-détente campaign, marked the close of détente and a return to Cold War tensions.</code> | <code>Soviet Premier Alexei Kosygin (front) next to U.S. President Lyndon B. Johnson (behind) during the Glassboro Summit Conference The most obvious manifestation of détente was the series of summits held between the leaders of the two superpowers and the treaties that resulted from these meetings.</code> | <code>The activities of President Ronald Reagan returned tensions to a fever pitch. Soviet relations with the People's Republic of China Détente could probably not have taken place, and certainly wouldn't have assumed the form that it did, without the rift that developed between the world's two primary communist regimes, the Soviet Union and the People's Republic of China (PRC).</code> | <code>Détente is the easing of strained relations, especially in a political situation. The term originates in the time of the Triple Entente and Entente cordiale in reference to an easing of tensions between England and France who, subsequent to being commingled polities under Norman rule, were warring rivals for the better part of a millennium but pursuant to a policy of détente became enduring allies. In the context of the Cold War, the lessening of tensions between the East and West, along ...</code> | <code>Détente (French pronunciation: ​[detɑ̃t], meaning relaxation)[1] is the easing of strained relations, especially in political situation.</code> | <code>[1.0, 2.1879749298095703, 8.371654510498047, 10.16702938079834]</code> |
+ | <code>what is an ftp file</code> | <code>File Transfer Protocol (FTP) is a standard Internet protocol for transmitting files between computers on the Internet over TCP/IP connections. FTP is a client-server protocol that relies on two communications channels between client and server: a command channel for controlling the conversation and a data channel for transmitting file content. Clients initiate conversations with servers by requesting to download a file.</code> | <code>The FTP (File Transfer Protocol) utility program is commonly used for copying files to and from other computers. These computers may be at the same site or at different sites thousands of miles apart. FTP is a general protocol that works on UNIX systems as well as a variety of other (non-UNIX) systems.</code> | <code>To transfer files via File Transfer Protocol (FTP), you need to establish an FTP connection. To make an FTP connection you can use a standard Web browser (Internet Explorer, Mozilla Firefox, etc.) or an FTP Client. To transfer a file with FTP you need to have an FTP accounts for the web space you are going to transfer the file to. FTP hosting account where you plan to upload your files.</code> | <code>The Difference Between FTP Servers and File Servers. When two terms are similar and even describe a similar concept, people have a tendency to start using them interchangeably. This is definitely true in the case of FTP servers and file servers, which sound like they accomplish the same goal but in reality are two very different animals altogether.</code> | <code>The command-line secure file transfer program (sftp) and graphical SFTP clients, such as WinSCP and Fetch, use SSH2 encryption to authenticate and establish secure channels between networked hosts.</code> | <code>[1.0, 5.810799598693848, 7.961757183074951, 18.629709243774414]</code> |
+ | <code>what causes a t wave abnormality</code> | <code>T –wave abnormalities may not necessarily indicate the presence of a severe heart condition. There are non-specific wave changes that result from common, non-specific causes of T-wave abnormality which includes the following: 1 No obvious causes, which are usually associated with women. Fever.</code> | <code>5 Causes Of T-Wave Abnormality. T wave is basically the diagrammatically representation of ventricular polarization called electrocardiography. The structure of a T wave is like slight inverted upward stroke that follow a peak generated by R and S waves. One heart beat is represented in form of Q, R, S and T wave. The abnormality in T waves may be indicated by longer, flatter or higher peaks in the diagram. The measurement of heart beat in such a way makes it possible to diagnose heart related problems easily. If you are suffering from any heart related disease, the electrocardiogram will show an abnormality in the T wave. In this write up we will discuss various causes of abnormality in T wave measurement.</code> | <code>Specific states or conditions that cause T-wave abnormality. Complete inversions can signify the presence of cardiovascular diseases and other serious complications, which include the following: Ischemia is a condition in which oxygenated blood becomes constrained in a certain body part.</code> | <code>T Wave Abnormalities. christine m. smith. I just received a copy of an ECG I had done in Sept 1998. The preliminary report was borderline ECG and the final interpretation was Abnormal ECG. There was normal sinus rhythm and Nonspecific Anterior T abnormalities. When compared to an ECG taken in 1986 there was minimal T wave change. I have been told by many that abnormalities like this are usually no problem.</code> | <code>Prolonged Q-T interval. Long QT syndrome is a heart rhythm disorder that can cause serious irregular heart rhythms (arrhythmias). In a normal heart, your heart circulates blood throughout your body during each heartbeat. Your heart's chambers contract and relax to pump blood.</code> | <code>[1.0, 1.0, 5.3564229011535645, 11.585516929626465]</code> |
  * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
 
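A construction sketch for this loss under the sentence-transformers v5 API; the regularizer weights below are hypothetical placeholders, since the parameter block is truncated in this hunk:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder import losses

model = SparseEncoder("yosefw/SPLADE-BERT-Small-BS256")
# SpladeLoss wraps a ranking loss (here the MarginMSE distillation loss) and
# adds FLOPS regularization that pushes query/document vectors toward sparsity.
loss = losses.SpladeLoss(
    model=model,
    loss=losses.SparseMarginMSELoss(model),
    query_regularizer_weight=5e-5,     # hypothetical value
    document_regularizer_weight=3e-5,  # hypothetical value
)
```
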
 #### Non-Default Hyperparameters

 - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 48
+ - `per_device_eval_batch_size`: 48
 - `learning_rate`: 4e-05
 - `num_train_epochs`: 4
 - `lr_scheduler_type`: cosine
 - `warmup_ratio`: 0.025
 - `fp16`: True
 - `load_best_model_at_end`: True
 - `push_to_hub`: True
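
The non-default values above map directly onto the trainer configuration; a sketch with the v5 sparse-encoder training API (`output_dir` is a hypothetical path):

```python
from sentence_transformers import SparseEncoderTrainingArguments

args = SparseEncoderTrainingArguments(
    output_dir="splade-bert-small-distil",  # hypothetical
    eval_strategy="epoch",
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    learning_rate=4e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.025,
    fp16=True,
    load_best_model_at_end=True,
    push_to_hub=True,
)
```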
 
 #### All Hyperparameters

 - `do_predict`: False
 - `eval_strategy`: epoch
 - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 48
+ - `per_device_eval_batch_size`: 48
 - `per_gpu_train_batch_size`: None
 - `per_gpu_eval_batch_size`: None
 - `gradient_accumulation_steps`: 1

 </details>

 ### Training Logs
+ | Epoch | Step | Training Loss | dot_ndcg@10 |
+ |:-----:|:-----:|:-------------:|:-----------:|
+ | 1.0 | 16667 | 8.363 | 0.6961 |
+ | 2.0 | 33334 | 6.5021 | 0.7031 |
+ | 3.0 | 50001 | 5.2209 | 0.7031 |

 ### Framework Versions
+ - Python: 3.12.11
 - Sentence Transformers: 5.1.0
+ - Transformers: 4.55.3
+ - PyTorch: 2.8.0+cu126
 - Accelerate: 1.10.0
 - Datasets: 4.0.0
 - Tokenizers: 0.21.4

 ## Model Card Contact

 *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
 
config_sentence_transformers.json CHANGED
@@ -3,7 +3,7 @@
   "__version__": {
     "sentence_transformers": "5.1.0",
     "transformers": "4.55.2",
-    "pytorch": "2.6.0+cu124"
   },
   "prompts": {
     "query": "",
 
   "__version__": {
     "sentence_transformers": "5.1.0",
     "transformers": "4.55.2",
+    "pytorch": "2.8.0+cu126"
   },
   "prompts": {
     "query": "",
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:6ec00e9e7df6c50634dc89a9e490f1f787ef35fd9beccce17895323138acbedc
 size 115189296
 
 version https://git-lfs.github.com/spec/v1
+ oid sha256:c450e96023e0c4da84c4aeb3a5d84dd0eb350f3994cb68c8b37d49bf5f832f07
 size 115189296