column          dtype           min     max
html_url        stringlengths   48      51
title           stringlengths   5       268
comments        stringlengths   70      51.8k
body            stringlengths   0       29.8k
comment_length  int64           16      1.52k
text            stringlengths   164     54.1k
embeddings      list
https://github.com/huggingface/datasets/issues/2644
Batched `map` not allowed to return 0 items
Sorry to ping you, @lhoestq, did you have a chance to take a look at the proposed PR? Thank you!
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting...
20
Batched `map` not allowed to return 0 items ## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa...
[ -0.23283597826957703, -0.39038851857185364, -0.044456783682107925, 0.1644899994134903, -0.10854469239711761, -0.06876874715089798, 0.1163392886519432, 0.4056323170661926, 0.6400412917137146, 0.11788103729486465, -0.05657173693180084, 0.24608521163463593, -0.39544159173965454, 0.10435476154...
https://github.com/huggingface/datasets/issues/2644
Batched `map` not allowed to return 0 items
Yes and it's all good, thank you :) Feel free to close this issue if it's good for you
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting...
19
Batched `map` not allowed to return 0 items ## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa...
[ -0.23283597826957703, -0.39038851857185364, -0.044456783682107925, 0.1644899994134903, -0.10854469239711761, -0.06876874715089798, 0.1163392886519432, 0.4056323170661926, 0.6400412917137146, 0.11788103729486465, -0.05657173693180084, 0.24608521163463593, -0.39544159173965454, 0.10435476154...
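The pattern behind issue 2644, a batched `map` whose function returns fewer rows than it received, can be sketched as follows. This is an illustrative snippet, not the reporter's code: `os.path.exists` stands in for the expensive `stat` condition mentioned in the report, and the dataset contents are made up.

```python
import os
from datasets import Dataset

ds = Dataset.from_dict({"path": ["a.txt", "b.txt", "missing.txt"], "label": [0, 1, 0]})

def keep_existing(batch):
    # Keep only rows whose referenced file exists; a batch may legitimately
    # shrink, possibly down to zero rows, which is what the issue asks `map` to allow.
    keep = [os.path.exists(p) for p in batch["path"]]
    return {col: [v for v, ok in zip(values, keep) if ok] for col, values in batch.items()}

filtered = ds.map(keep_existing, batched=True)
```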
https://github.com/huggingface/datasets/issues/2643
Enum used in map functions will raise a RecursionError with dill.
I'm running into this as well. (Thank you so much for reporting @jorgeecardona — was staring at this massive stack trace and unsure what exactly was wrong!)
## Describe the bug Enums used in functions pass to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 In my particular case, I use an enum to define an argument with fixed options using the `TraininigArguments` ...
27
Enum used in map functions will raise a RecursionError with dill. ## Describe the bug Enums used in functions pass to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 In my particular case, I use an enum to d...
[ 0.10536061972379684, 0.1992543488740921, 0.04053209349513054, 0.14069916307926178, -0.07437153905630112, -0.07834332436323166, 0.14776058495044708, 0.233779639005661, 0.12631773948669434, 0.1690645068883896, 0.14533288776874542, 0.933216392993927, -0.29086965322494507, -0.3072982728481293,...
https://github.com/huggingface/datasets/issues/2643
Enum used in map functions will raise a RecursionError with dill.
Hi ! Thanks for reporting :) Until this is fixed on `dill`'s side, we could implement a custom saving in our Pickler defined in utils/py_utils.py There is already a suggestion in this message about how to do it: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 Let me know if such a worka...
## Describe the bug Enums used in functions pass to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 In my particular case, I use an enum to define an argument with fixed options using the `TraininigArguments` ...
61
Enum used in map functions will raise a RecursionError with dill. ## Describe the bug Enums used in functions pass to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 In my particular case, I use an enum to d...
[ 0.10536061972379684, 0.1992543488740921, 0.04053209349513054, 0.14069916307926178, -0.07437153905630112, -0.07834332436323166, 0.14776058495044708, 0.233779639005661, 0.12631773948669434, 0.1690645068883896, 0.14533288776874542, 0.933216392993927, -0.29086965322494507, -0.3072982728481293,...
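A minimal reproduction of the failure mode in issue 2643 might look like the snippet below; the enum and dataset are placeholders (the report used an enum option of `TrainingArguments`). Referencing the `Enum` member inside the function handed to `map` is what forces dill to pickle the enum class and hit the linked recursion bug on affected versions.

```python
from enum import Enum
from datasets import Dataset

class PaddingMode(Enum):  # hypothetical enum standing in for the argument with fixed options
    MAX_LENGTH = "max_length"

ds = Dataset.from_dict({"text": ["a", "b"]})

# The closure captures PaddingMode, so the pickling of the mapped function
# recurses into the Enum class and raises RecursionError on affected dill versions.
ds = ds.map(lambda ex: {"padding": PaddingMode.MAX_LENGTH.value, **ex})
```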
https://github.com/huggingface/datasets/issues/2642
Support multi-worker with streaming dataset (IterableDataset).
Hi ! This is a great idea :) I think we could have something similar to what we have in `datasets.Dataset.map`, i.e. a `num_proc` parameter that tells how many processes to spawn to parallelize the data processing. Regarding AUTOTUNE, this could be a nice feature as well, we could see how to add it in a second ste...
**Is your feature request related to a problem? Please describe.** The current `.map` does not support multi-process, CPU can become bottleneck if the pre-processing is complex (e.g. t5 span masking). **Describe the solution you'd like** Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`. **D...
58
Support multi-worker with streaming dataset (IterableDataset). **Is your feature request related to a problem? Please describe.** The current `.map` does not support multi-process, CPU can become bottleneck if the pre-processing is complex (e.g. t5 span masking). **Describe the solution you'd like** Ideally `.ma...
[ -0.6293720602989197, -0.5262188911437988, -0.14244061708450317, -0.0420491099357605, -0.19271785020828247, -0.010430209338665009, 0.5165581703186035, 0.17827998101711273, 0.08271829783916473, 0.14804649353027344, -0.08475559949874878, 0.302012175321579, -0.26877012848854065, 0.266904830932...
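For reference, the non-streaming `Dataset.map` already exposes the `num_proc` parameter that the reply above proposes mirroring for `IterableDataset`. A rough sketch of that existing usage follows; the dataset name and mapping function are illustrative.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# num_proc spawns worker processes for (potentially expensive) preprocessing;
# the feature request asks for an equivalent when streaming=True.
ds = ds.map(lambda ex: {"text": ex["text"].lower()}, num_proc=4)
```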
https://github.com/huggingface/datasets/issues/2641
load_dataset("financial_phrasebank") NonMatchingChecksumError
Hi! It's probably because this dataset is stored on google drive and it has a per day quota limit. It should work if you retry, I was able to initiate the download. Similar issue [here](https://github.com/huggingface/datasets/issues/2646)
## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_allagree') ``` ## Expected results I expect to see the financi...
35
load_dataset("financial_phrasebank") NonMatchingChecksumError ## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_all...
[ -0.12216169387102127, 0.2191510796546936, -0.07840734720230103, 0.2884526550769806, 0.22036103904247284, 0.13471724092960358, 0.06588906049728394, 0.3310612142086029, 0.20957423746585846, 0.18217237293720245, -0.14143840968608856, 0.05338772013783455, 0.16741584241390228, 0.075015231966972...
https://github.com/huggingface/datasets/issues/2641
load_dataset("financial_phrasebank") NonMatchingChecksumError
Hi ! Loading the dataset works on my side as well. Feel free to try again and let us know if it works for you now
## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_allagree') ``` ## Expected results I expect to see the financi...
26
load_dataset("financial_phrasebank") NonMatchingChecksumError ## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_all...
[ -0.11983358860015869, 0.17941008508205414, -0.08838161081075668, 0.3396991491317749, 0.18796873092651367, 0.1173863410949707, 0.006067337468266487, 0.3996657133102417, 0.14194689691066742, 0.19768892228603363, -0.16138429939746857, 0.24372711777687073, 0.23040372133255005, -0.1089785173535...
https://github.com/huggingface/datasets/issues/2641
load_dataset("financial_phrasebank") NonMatchingChecksumError
Thank you! I've been trying periodically for the past month, and no luck yet with this particular dataset. Just tried again and still hitting the checksum error. Code: `dataset = load_dataset("financial_phrasebank", "sentences_allagree") ` Traceback: ``` ----------------------------------------------------...
## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_allagree') ``` ## Expected results I expect to see the financi...
174
load_dataset("financial_phrasebank") NonMatchingChecksumError ## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_all...
[ -0.12190382182598114, 0.25043556094169617, -0.06328260153532028, 0.329198956489563, 0.16332073509693146, 0.13060161471366882, 0.029430221766233444, 0.39818495512008667, 0.10427402704954147, 0.10010836273431778, -0.23702996969223022, 0.2519070506095886, 0.2408212274312973, -0.18088668584823...
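Since this dataset is hosted on Google Drive and subject to a daily quota, a stale or truncated cached download can keep triggering the checksum error. One common retry (not this issue's final resolution, just a general workaround) is to force a fresh download; a sketch below, assuming a `datasets` version that accepts the string form of `download_mode`.

```python
from datasets import load_dataset

# Re-fetch the source files instead of reusing a possibly truncated cached copy.
dataset = load_dataset(
    "financial_phrasebank",
    "sentences_allagree",
    download_mode="force_redownload",
)
```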
https://github.com/huggingface/datasets/issues/2630
Progress bars are not properly rendered in Jupyter notebook
To add my experience when trying to debug this issue: Seems like previously the workaround given [here](https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308) worked around this issue. But with the latest version of jupyter/tqdm I still get terminal warnings that IPython tried to send a message from a forke...
## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress bars. ## Actual results Simple plane progress bars. cc...
44
Progress bars are not properly rendered in Jupyter notebook ## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress...
[ 0.18173721432685852, 0.026652369648218155, -0.00488420482724905, 0.05655066296458244, 0.11842598021030426, -0.34836241602897644, 0.5567898750305176, 0.4173838794231415, -0.259396493434906, 0.002457247581332922, -0.21256695687770844, 0.6025976538658142, 0.2658940255641937, 0.167215734720230...
https://github.com/huggingface/datasets/issues/2630
Progress bars are not properly rendered in Jupyter notebook
Hi @mludv, thanks for the hint!!! :) We will definitely take it into account to try to fix this issue... It seems somehow related to `multiprocessing` and `tqdm`...
## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress bars. ## Actual results Simple plane progress bars. cc...
28
Progress bars are not properly rendered in Jupyter notebook ## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress...
[ 0.08407517522573471, -0.017447425052523613, -0.055343180894851685, 0.1467486023902893, 0.14180544018745422, -0.2826744616031647, 0.43655160069465637, 0.3081710934638977, -0.2813870906829834, 0.133517786860466, -0.1734684258699417, 0.495898574590683, 0.30211323499679565, 0.36064350605010986...
https://github.com/huggingface/datasets/issues/2629
Load datasets from the Hub without requiring a dataset script
This is so cool, let us know if we can help with anything on the hub side (@Pierrci @elishowk) 🎉
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `da...
20
Load datasets from the Hub without requiring a dataset script As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to b...
[ -0.45757561922073364, -0.04470089450478554, -0.04213665425777435, 0.1991642862558365, -0.12621942162513733, 0.16351664066314697, 0.37726446986198425, 0.17870429158210754, 0.413285493850708, 0.13889150321483612, -0.23918093740940094, 0.3387719392776489, -0.0637953132390976, 0.54941439628601...
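The requested workflow in issue 2629 would roughly look like the following, where `user/my_csv_dataset` is a hypothetical Hub repository containing plain CSV files and no dataset script, and the `data_files` mapping decides which file feeds which split.

```python
from datasets import load_dataset

# Load raw files straight from a Hub dataset repo, without a dataset script
# (repo name and file names are illustrative).
ds = load_dataset(
    "user/my_csv_dataset",
    data_files={"train": "train.csv", "test": "test.csv"},
)
```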
https://github.com/huggingface/datasets/issues/2622
Integration with AugLy
Hi, you can define your own custom formatting with `Dataset.set_transform()` and then run the tokenizer with the batches of augmented data as follows: ```python dset = load_dataset("imdb", split="train") # Let's say we are working with the IMDB dataset dset.set_transform(lambda ex: {"text": augly_text_augmentati...
**Is your feature request related to a problem? Please describe.** Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP m...
68
Integration with AugLy **Is your feature request related to a problem? Please describe.** Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries ...
[ -0.1749233603477478, -0.19208082556724548, -0.1434520184993744, -0.23967842757701874, 0.19126343727111816, 0.026355573907494545, 0.16739149391651154, 0.3273123502731323, -0.3234902620315552, 0.016006890684366226, 0.04472559690475464, -0.017034923657774925, -0.26089558005332947, 0.110500201...
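Filling out the truncated `set_transform` suggestion above into a self-contained sketch: `augment_text` is a stand-in for whatever AugLy text augmentation you choose, and the transform is applied lazily on each accessed batch.

```python
from datasets import load_dataset

def augment_text(texts):
    # Placeholder for an AugLy augmentation, e.g. a typo-simulation transform.
    return texts

dset = load_dataset("imdb", split="train")  # working with IMDB as in the reply above

# The transform runs on the fly when examples are accessed, so augmented text
# can be fed to a tokenizer batch by batch.
dset.set_transform(lambda ex: {"text": augment_text(ex["text"])})
print(dset[0])
```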
https://github.com/huggingface/datasets/issues/2618
`filelock.py` Error
Hi @liyucheng09, thanks for reporting. Apparently this issue has to do with your environment setup. One question: is your data in an NFS share? Some people have reported this error when using `fcntl` to write to an NFS share... If this is the case, then it might be that your NFS just may not be set up to provide fil...
## Describe the bug It seems that the `filelock.py` went error. ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) ...
84
`filelock.py` Error ## Describe the bug It seems that the `filelock.py` went error. ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOC...
[ 0.13904431462287903, -0.3473012149333954, 0.015147035010159016, 0.041861023753881454, 0.030276546254754066, 0.09070464223623276, 0.16235928237438202, 0.13566692173480988, 0.059238236397504807, 0.09362246841192245, -0.10108538717031479, 0.3403194844722748, -0.03296109661459923, -0.424065470...
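If the `fcntl` lock failure above really does come from a cache directory on an NFS share, one workaround is to relocate the cache to local disk before loading; the path below is only an example.

```python
import os

# Must be set before datasets is imported so the new cache location is picked up.
os.environ["HF_DATASETS_CACHE"] = "/local/scratch/hf_datasets_cache"

from datasets import load_dataset

ds = load_dataset("xsum")
```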
https://github.com/huggingface/datasets/issues/2615
Jsonlines export error
For some reason this happens (both `datasets` version are on master) only on Python 3.6 and not Python 3.8.
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to reproduce the bug This wha...
19
Jsonlines export error ## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to re...
[ -0.35041284561157227, 0.13335852324962616, -0.006901639513671398, 0.3097095191478729, 0.03890353813767433, 0.08054833114147186, 0.2188645452260971, 0.3883451819419861, 0.0006040323642082512, -0.040230438113212585, 0.32310038805007935, 0.03459472954273224, 0.0673164650797844, 0.044698063284...
https://github.com/huggingface/datasets/issues/2615
Jsonlines export error
@TevenLeScao we are using `pandas` to serialize the dataset to JSON Lines. So it must be due to pandas. Could you please check the pandas version causing the issue?
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to reproduce the bug This wha...
29
Jsonlines export error ## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to re...
[ -0.3529365062713623, 0.13444532454013824, -0.013970937579870224, 0.2699108421802521, 0.09136582911014557, 0.08057711273431778, 0.2758992910385132, 0.42092111706733704, -0.0012248815037310123, -0.05468733608722687, 0.343974232673645, 0.01683129370212555, 0.10500957816839218, 0.1663784980773...
https://github.com/huggingface/datasets/issues/2615
Jsonlines export error
@TevenLeScao I have just checked it: this was a bug in `pandas` and it was fixed in version 1.2: https://github.com/pandas-dev/pandas/pull/36898
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to reproduce the bug This wha...
20
Jsonlines export error ## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to re...
[ -0.3412722945213318, 0.12403075397014618, -0.005501963198184967, 0.25801771879196167, 0.10834053158760071, 0.07570438086986542, 0.30358684062957764, 0.40695032477378845, 0.026413382962346077, -0.04118327796459198, 0.27453967928886414, 0.006749006453901529, 0.13871614634990692, 0.1741789132...
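Because the JSON Lines export goes through `pandas.DataFrame.to_json`, the batch-boundary concatenation reported above can be probed in isolation. A rough sketch follows; the exact behaviour depends on the installed pandas version, and the fix referenced above shipped in pandas 1.2.

```python
import pandas as pd

df = pd.DataFrame({"text": [f"row {i}" for i in range(5)]})

# Serialize one batch as JSON Lines, the way the datasets exporter does.
chunk = df.to_json(orient="records", lines=True)

# If the serialized chunk lacks a trailing newline, concatenating consecutive
# batches glues the last record of one batch to the first record of the next.
print(repr(chunk[-20:]))
```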
https://github.com/huggingface/datasets/issues/2615
Jsonlines export error
Sorry, I was also talking to teven offline so I already had the PR ready before noticing x)
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to reproduce the bug This wha...
18
Jsonlines export error ## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to re...
[ -0.37644094228744507, 0.062102582305669785, -0.024925492703914642, 0.286175400018692, 0.04461922496557236, 0.08094958961009979, 0.15584604442119598, 0.397174209356308, 0.051058780401945114, -0.04763747751712799, 0.3336995542049408, 0.004116065334528685, 0.08341795951128006, 0.1582753062248...
https://github.com/huggingface/datasets/issues/2615
Jsonlines export error
I was also already working in my PR... Nevermind. Next time we should pay attention if there is somebody (self-)assigned to an issue and if he/she is still working on it before overtaking it... 😄
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to reproduce the bug This wha...
35
Jsonlines export error ## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to re...
[ -0.3961874544620514, 0.14125007390975952, -0.04477722570300102, 0.26520514488220215, 0.05529014393687248, 0.06807655096054077, 0.15589264035224915, 0.3812045454978943, -0.01108888816088438, -0.03500496968626976, 0.3722906708717346, 0.011863995343446732, 0.0816400870680809, 0.17778830230236...
https://github.com/huggingface/datasets/issues/2607
Streaming local gzip compressed JSON line files is not working
Hi @thomwolf, thanks for reporting. It seems this might be due to the fact that the JSON Dataset builder uses `pyarrow.json` (`paj.read_json`) to read the data without using the Python standard `open(file,...` (which is the one patched with `xopen` to work in streaming mode). This has to be fixed.
## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset))...
49
Streaming local gzip compressed JSON line files is not working ## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_...
[ -0.19924888014793396, -0.13397808372974396, -0.009144364856183529, 0.23913481831550598, 0.07776414602994919, 0.016879990696907043, 0.3976743817329407, 0.5178916454315186, 0.19702094793319702, 0.07899829745292664, 0.10300148278474808, 0.3306828737258911, -0.08836667239665985, 0.020874120295...
https://github.com/huggingface/datasets/issues/2607
Streaming local gzip compressed JSON line files is not working
Sorry for reopening this, but I'm having the same issue as @thomwolf when streaming a gzipped JSON Lines file from the hub. Or is that just not possible by definition? I installed `datasets` in editable mode from source (so probably includes the fix from #2608 ?): ``` >>> datasets.__version__ '1.9.1.dev0' ``` `...
## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset))...
167
Streaming local gzip compressed JSON line files is not working ## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_...
[ -0.19924888014793396, -0.13397808372974396, -0.009144364856183529, 0.23913481831550598, 0.07776414602994919, 0.016879990696907043, 0.3976743817329407, 0.5178916454315186, 0.19702094793319702, 0.07899829745292664, 0.10300148278474808, 0.3306828737258911, -0.08836667239665985, 0.020874120295...
https://github.com/huggingface/datasets/issues/2607
Streaming local gzip compressed JSON line files is not working
Hi ! To make the streaming work, we extend `open` in the dataset builder to work with urls. Therefore you just need to use `open` before using `gzip.open`: ```diff - with gzip.open(file, "rt", encoding="utf-8") as f: + with gzip.open(open(file, "rb"), "rt", encoding="utf-8") as f: ``` You can see that it is t...
## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset))...
61
Streaming local gzip compressed JSON line files is not working ## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_...
[ -0.19924888014793396, -0.13397808372974396, -0.009144364856183529, 0.23913481831550598, 0.07776414602994919, 0.016879990696907043, 0.3976743817329407, 0.5178916454315186, 0.19702094793319702, 0.07899829745292664, 0.10300148278474808, 0.3306828737258911, -0.08836667239665985, 0.020874120295...
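The suggested diff above, placed in context, looks roughly like the sketch below. Here `file` is whatever path or URL the dataset builder receives; in streaming mode the patched `open` resolves remote URLs before `gzip` sees the file object.

```python
import gzip

def read_jsonl_gz(file):
    # Wrapping with open() first lets the streaming-mode patched open handle
    # remote URLs; gzip then decompresses the returned file object.
    with gzip.open(open(file, "rb"), "rt", encoding="utf-8") as f:
        for line in f:
            yield line
```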
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
Hi ! If we want something more general, we could either 1. delete the extracted files after the arrow data generation automatically, or 2. delete each extracted file during the arrow generation right after it has been closed. Solution 2 is better to save disk space during the arrow generation. Is it what you had...
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
129
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ -0.04222412407398224, -0.052078187465667725, -0.1375647336244583, 0.2491360902786255, -0.08902016282081604, 0.1040986031293869, -0.11982809007167816, 0.1847195029258728, 0.23423108458518982, 0.2407584935426712, 0.15672893822193146, 0.5178811550140381, -0.34048914909362793, -0.0508521869778...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
Also, if I delete the extracted files they need to be re-extracted again instead of loading from the Arrow cache files
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
21
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ -0.028062475845217705, -0.12684546411037445, -0.14626386761665344, 0.15173640847206116, -0.17558768391609192, 0.19438324868679047, -0.2197524607181549, 0.25154927372932434, 0.17995022237300873, 0.16384993493556976, 0.10564211755990982, 0.48138663172721863, -0.24072237312793732, -0.10407443...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
I think we already opened an issue about this topic (suggested by @stas00): duplicate of #2481? This is in our TODO list... 😅
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
23
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ -0.019210129976272583, -0.12697599828243256, -0.1481829136610031, 0.07984667271375656, -0.1321689933538437, 0.1765158623456955, -0.19870246946811676, 0.24316397309303284, 0.21240122616291046, 0.2022784799337387, 0.16154208779335022, 0.4751737117767334, -0.22158890962600708, -0.070333912968...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
I think the deletion of each extracted file could be implemented in our CacheManager and ExtractManager (once merged to master: #2295, #2277). 😉
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
23
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ -0.022601110860705376, -0.1121034249663353, -0.1641409993171692, 0.163986474275589, -0.1408299058675766, 0.15682125091552734, -0.19473403692245483, 0.28730660676956177, 0.2321142554283142, 0.17758804559707642, 0.09998287260532379, 0.4532558023929596, -0.23331499099731445, -0.10648301243782...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
Nevermind @thomwolf, I just mentioned the other issue so that both appear linked in GitHub and we do not forget to close both once we make the corresponding Pull Request... That was the main reason! 😄
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
36
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ -0.08552709221839905, -0.06718851625919342, -0.18447397649288177, 0.10478703677654266, -0.06817679107189178, 0.09409269690513611, -0.20691712200641632, 0.35798078775405884, 0.19299151003360748, 0.18033039569854736, 0.1256365329027176, 0.5178527235984802, -0.19906553626060486, -0.0532917715...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
Ok yes. I think this is an important feature to be able to use large datasets which are pretty much always compressed files. In particular now this requires to keep the extracted file on the drive if you want to avoid reprocessing the dataset so in my case, this require using always ~400GB of drive instead of just 2...
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
116
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ -0.06597820669412613, -0.06750478595495224, -0.12144386768341064, 0.13619430363178253, -0.18816892802715302, 0.2371562123298645, -0.1601298451423645, 0.2473282366991043, 0.16247081756591797, 0.1584106832742691, 0.10431958734989166, 0.44837719202041626, -0.3028530776500702, -0.1058056801557...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
Note that I'm confirming that with the current master branch of dataset, deleting extracted files (without deleting the arrow cache file) lead to **re-extracting** these files when reloading the dataset instead of directly loading the arrow cache file.
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
38
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ -0.16383492946624756, 0.03803050518035889, -0.1447685807943344, 0.1780930459499359, -0.15926943719387054, 0.23571422696113586, -0.17465701699256897, 0.28733715415000916, 0.1265912652015686, 0.15852248668670654, 0.06744439899921417, 0.5160759687423706, -0.22754135727882385, -0.0690878629684...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
Hi ! That's weird, it doesn't do that on my side (tested on master on my laptop by deleting the `extracted` folder in the download cache directory). You tested with one of the files at https://huggingface.co/datasets/thomwolf/github-python that you have locally ?
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
41
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ -0.03325678035616875, -0.0617312528192997, -0.1518050581216812, 0.2321406900882721, -0.07944172620773315, 0.1899539679288864, -0.1519193798303604, 0.3226889669895172, 0.2679438591003418, 0.185911163687706, -0.00866090040653944, 0.4785899817943573, -0.20648188889026642, 0.01982061192393303,...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
@thomwolf I'm sorry but I can't reproduce this problem. I'm also using: ```python ds = load_dataset("json", split="train", data_files=data_files, cache_dir=cache_dir) ``` after having removed the extracted files: ```python assert sorted((cache_dir / "downloads" / "extracted").iterdir()) == [] ``` I get the l...
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
49
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ 0.038739703595638275, 0.0011508585885167122, -0.07317280769348145, 0.28882575035095215, -0.022597581148147583, 0.24423477053642273, 0.0043714321218431, 0.22520151734352112, 0.21111585199832916, 0.11122642457485199, 0.13000966608524323, 0.4447043836116791, -0.3149457275867462, -0.1522340625...
https://github.com/huggingface/datasets/issues/2604
Add option to delete temporary files (e.g. extracted files) when loading dataset
> > > Do you confirm the extracted folder stays empty after reloading? Yes, I have the above mentioned assertion on the emptiness of the extracted folder: ```python assert sorted((cache_dir / "downloads" / "extracted").iterdir()) == [] ```
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
37
Add option to delete temporary files (e.g. extracted files) when loading dataset I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Havi...
[ 0.04130737483501434, -0.04853765666484833, -0.1449722796678543, 0.22482965886592865, -0.04660547152161598, 0.20178936421871185, -0.16908562183380127, 0.3027114272117615, 0.22450824081897736, 0.13429401814937592, 0.010950164869427681, 0.4801865220069885, -0.26869168877601624, -0.12304327636...
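A self-contained version of the check being debated above might look like the following; `data_files` and `cache_dir` are placeholders. The question is whether removing only `downloads/extracted` (while keeping the Arrow cache tables) forces re-extraction on reload.

```python
import shutil
from pathlib import Path
from datasets import load_dataset

cache_dir = Path("./hf_cache")               # illustrative cache location
data_files = ["data/part-00000.json.gz"]     # illustrative compressed JSON files

ds = load_dataset("json", split="train", data_files=data_files, cache_dir=str(cache_dir))

# Drop the decompressed copies but keep the Arrow cache tables.
shutil.rmtree(cache_dir / "downloads" / "extracted", ignore_errors=True)

# On reload, the Arrow cache should be reused; re-extracting here would be
# the behaviour reported above.
ds = load_dataset("json", split="train", data_files=data_files, cache_dir=str(cache_dir))
```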
https://github.com/huggingface/datasets/issues/2598
Unable to download omp dataset
Hi @erikadistefano , thanks for reporting the issue. I have created a Pull Request that should fix it. Once merged into master, feel free to update your installed `datasets` library (either by installing it from our GitHub master branch or waiting until our next release) to be able to load omp dataset.
## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and print the dictionary ## Actual r...
52
Unable to download omp dataset ## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and pr...
[ -0.26337599754333496, -0.11635936796665192, -0.08252890408039093, 0.10050966590642929, 0.23119954764842987, -0.021743202582001686, 0.20523110032081604, 0.2830140292644501, 0.04831267520785332, 0.20946404337882996, -0.20779238641262054, 0.5524357557296753, -0.2091248780488968, 0.23590213060...
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
Hi ! Do you have an example in mind that shows how this could be useful ?
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
17
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). Hi ! Do you have an example in mind that shows how this could be useful ?
[ -0.4188196659088135, -0.11185524612665176, -0.02110976167023182, 0.1649782806634903, 0.44314804673194885, -0.07371680438518524, 0.5200479626655579, 0.02551070787012577, -0.00009148131357505918, -0.06128968670964241, 0.13835787773132324, 0.3554060161113739, -0.36642858386039734, 0.181685060...
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
Example: Merge 2 datasets into one datasets Label extraction from dataset dataset(text, label) —> dataset(text, newlabel) TextCleaning. For image dataset, Transformation are easier (ie linear algebra). > On Jul 6, 2021, at 17:39, Quentin Lhoest ***@***.***> wrote: > >  > Hi ! Do you have an example in...
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
83
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). Example: Merge 2 datasets into one datasets Label extraction from dataset dataset(text, label) —> dataset(text, newlabel...
[ -0.48543158173561096, 0.15293057262897491, -0.08864050358533859, 0.18962767720222473, 0.4006461501121521, 0.1068505272269249, 0.5075284242630005, 0.12447446584701538, -0.011683245189487934, -0.06204480677843094, 0.05338345095515251, 0.33453184366226196, -0.3507428467273712, 0.1875824928283...
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
There are already a few transformations that you can apply on a dataset using methods like `dataset.map()`. You can find examples in the documentation here: https://huggingface.co/docs/datasets/processing.html You can merge two datasets with `concatenate_datasets()` or do label extraction with `dataset.map()` for ...
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
41
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). There are already a few transformations that you can apply on a dataset using methods like `dataset.map()`. You can find exam...
[ -0.4685247838497162, -0.23068057000637054, -0.05315738916397095, 0.14103932678699493, 0.37909868359565735, -0.19871503114700317, 0.3367202579975128, 0.16955861449241638, 0.1045028567314148, 0.1931907683610916, -0.17467616498470306, 0.28939753770828247, -0.2316427379846573, 0.55936282873153...
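Concretely, the existing primitives mentioned in the reply above already cover the deterministic Dataset-to-Dataset transforms being asked about. A small sketch follows; the dataset and the label rule are illustrative.

```python
from datasets import load_dataset, concatenate_datasets

ds_a = load_dataset("imdb", split="train")
ds_b = load_dataset("imdb", split="test")

# Merge two datasets into one.
merged = concatenate_datasets([ds_a, ds_b])

# Deterministic label extraction: dataset(text, label) -> dataset(text, newlabel).
relabeled = merged.map(lambda ex: {"newlabel": "positive" if ex["label"] == 1 else "negative"})
```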
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
Ok, sure. Thanks for pointing on functional part. My question is more “Philosophical”/Design perspective. There are 2 perspetive: Add transformation methods to Dataset Class OR Create a Transformer Class which operates on Dataset Class. T(Dataset) —> Dataset datasetnew = MyTransform.transform(dataset)...
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
142
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). Ok, sure. Thanks for pointing on functional part. My question is more “Philosophical”/Design perspective. There are 2 perspe...
[ -0.10654173791408539, -0.0538383424282074, 0.026053549721837044, 0.14303100109100342, 0.3365960121154785, -0.13995900750160217, 0.5725405216217041, -0.03391559422016144, -0.052186112850904465, 0.16315801441669464, 0.11685654520988464, 0.24485844373703003, -0.462017297744751, 0.432704538106...
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc. I guess if you find any transform that could be useful for text dataset processing, image da...
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
64
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform h...
[ -0.490424782037735, 0.08723032474517822, -0.19068123400211334, -0.08399514853954315, 0.3473586440086365, -0.1753203272819519, 0.3381088078022003, 0.2574363350868225, 0.09037769585847855, 0.01979086734354496, 0.2580317258834839, 0.36186277866363525, -0.3850794732570648, 0.47672149538993835,...
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
Thanks for reply. What would be the constraints to have Dataset —> Dataset consistency ? Main issue would be larger than memory dataset and serialization on disk. Technically, one still process at atomic level and try to wrap the full results into Dataset…. (!) What would you think ? > On Jul 7, 2021, at 16...
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
155
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). Thanks for reply. What would be the constraints to have Dataset —> Dataset consistency ? Main issue would be larger than mem...
[ -0.42738857865333557, 0.20382387936115265, -0.08927103132009506, 0.06771692633628845, 0.5223817229270935, -0.13401207327842712, 0.3225385546684265, 0.19111192226409912, 0.013091424480080605, 0.061537954956293106, 0.2266189157962799, 0.3506036400794983, -0.41034483909606934, 0.2984673976898...
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
We can be pretty flexible and not impose any constraints for transforms. Moreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. Even processing functions like `map` work in a batched fashion to not fill up your RAM....
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
59
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). We can be pretty flexible and not impose any constraints for transforms. Moreover, this library is designed to support data...
[ -0.5401788949966431, -0.018992431461811066, -0.14400826394557953, 0.1232815608382225, 0.43724945187568665, -0.2170049548149109, 0.24531258642673492, 0.1936606913805008, 0.21922753751277924, 0.13049159944057465, 0.06151597201824188, 0.1877676397562027, -0.26032596826553345, 0.07238705456256...
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
Ok thanks. But, Dataset has various flavors. In current design of Dataset, how the serialization on disk is done (?) The main issue is serialization of newdataset= Transform(Dataset) (ie thats why am referring to Out Of memory dataset…): Should be part of Transform or part of dataset ? Maybe, not, sin...
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
162
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). Ok thanks. But, Dataset has various flavors. In current design of Dataset, how the serialization on disk is done (?) The...
[ -0.30282866954803467, -0.07524682581424713, -0.05205000564455986, 0.25686150789260864, 0.49632033705711365, 0.033301062881946564, 0.3464807868003845, 0.13595689833164215, 0.10846932232379913, -0.048908695578575134, 0.14332546293735504, 0.18079474568367004, -0.2497083991765976, 0.0995198264...
https://github.com/huggingface/datasets/issues/2596
Transformer Class on dataset
I'm not sure I understand, could you elaborate a bit more please ? Each dataset is a wrapper of a PyArrow Table that contains all the data. The table is loaded from an arrow file on the disk. We have an ArrowWriter and ArrowReader class to write/read arrow tables on disk or in in-memory buffers.
Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit).
55
Transformer Class on dataset Just wondering if you have intenttion to create TransformerClass : dataset --> dataset and make determnistic transformation (ie not fit). I'm not sure I understand, could you elaborate a bit more please ? Each dataset is a wrapper of a PyArrow Table that contai...
[ -0.36044779419898987, -0.05612464249134064, 0.011653267778456211, 0.27049779891967773, 0.39460867643356323, -0.09719637036323547, 0.3508630394935608, 0.04822322726249695, -0.10002893209457397, -0.16110995411872864, 0.1317933350801468, 0.3815635144710541, -0.3154996633529663, -0.08339153975...
https://github.com/huggingface/datasets/issues/2595
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
Hi @profsatwinder. It looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists.
Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 from datasets import load_dataset, load_metric 2 ----> 3 common_voice_train = load_da...
27
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 ...
[ -0.4680480659008026, -0.2621823251247406, -0.04225946217775345, -0.12496829777956009, 0.2041027545928955, 0.14506219327449799, 0.4402434229850769, 0.25430920720100403, 0.22203561663627625, 0.13481761515140533, -0.21101908385753632, 0.20888751745224, -0.27976709604263306, -0.078061029314994...
https://github.com/huggingface/datasets/issues/2595
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
@albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again.
Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 from datasets import load_dataset, load_metric 2 ----> 3 common_voice_train = load_da...
17
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 ...
[ -0.3636058568954468, -0.3833411633968353, -0.018118280917406082, -0.13243871927261353, 0.15852582454681396, 0.17342989146709442, 0.40831348299980164, 0.27742037177085876, 0.19420109689235687, 0.09595779329538345, -0.1962038278579712, 0.21372216939926147, -0.28575167059898376, -0.0307675804...
https://github.com/huggingface/datasets/issues/2591
Cached dataset overflowing disk space
I'm using the datasets concatenate dataset to combine the datasets and then train. train_dataset = concatenate_datasets([dataset1, dataset2, common_voice_train])
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb). The cache folder is 500gb (and now my disk space is full). Is there a way to toggle caching or set the caching to b...
18
Cached dataset overflowing disk space I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb). The cache folder is 500gb (and now my disk space is full). Is there a way t...
[ 0.037559643387794495, -0.4222128093242645, 0.09859009832143784, 0.4514758288860321, 0.10117408633232117, 0.20505647361278534, 0.074485644698143, 0.14190475642681122, 0.14452829957008362, 0.055637177079916, 0.40107935667037964, -0.2280428111553192, -0.12429968267679214, 0.11271926760673523,...
https://github.com/huggingface/datasets/issues/2591
Cached dataset overflowing disk space
Hi @BirgerMoell. You have several options: - to set caching to be stored on a different path location, other than the default one (`~/.cache/huggingface/datasets`): - either setting the environment variable `HF_DATASETS_CACHE` with the path to the new cache location - or by passing it with the parameter `cach...
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb). The cache folder is 500gb (and now my disk space is full). Is there a way to toggle caching or set the caching to b...
127
Cached dataset overflowing disk space I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb). The cache folder is 500gb (and now my disk space is full). Is there a way t...
[ 0.037559643387794495, -0.4222128093242645, 0.09859009832143784, 0.4514758288860321, 0.10117408633232117, 0.20505647361278534, 0.074485644698143, 0.14190475642681122, 0.14452829957008362, 0.055637177079916, 0.40107935667037964, -0.2280428111553192, -0.12429968267679214, 0.11271926760673523,...
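The two cache-relocation options listed above can be combined as in the sketch below; the paths and the dataset/config names are examples only.

```python
import os

# Option 1: move the whole datasets cache via the environment variable
# (set it before importing datasets).
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdisk/hf_datasets"

from datasets import load_dataset

# Option 2: override the cache location for a single load_dataset call.
common_voice_train = load_dataset(
    "common_voice", "sv-SE", split="train", cache_dir="/mnt/bigdisk/hf_datasets"
)
```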
https://github.com/huggingface/datasets/issues/2591
Cached dataset overflowing disk space
Hi @BirgerMoell, We are planning to add a new feature to datasets, which could be interesting in your case: Add the option to delete temporary files (decompressed files) from the cache directory (see: #2481, #2604). We will ping you once this feature is implemented, so that the size of your cache directory will b...
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb). The cache folder is 500gb (and now my disk space is full). Is there a way to toggle caching or set the caching to b...
56
Cached dataset overflowing disk space I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb). The cache folder is 500gb (and now my disk space is full). Is there a way t...
[ 0.037559643387794495, -0.4222128093242645, 0.09859009832143784, 0.4514758288860321, 0.10117408633232117, 0.20505647361278534, 0.074485644698143, 0.14190475642681122, 0.14452829957008362, 0.055637177079916, 0.40107935667037964, -0.2280428111553192, -0.12429968267679214, 0.11271926760673523,...
https://github.com/huggingface/datasets/issues/2585
sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index
Hi @mmajurski, thanks for reporting this issue. Indeed this misalignment arises because the source dataset context field contains leading blank spaces (and these are counted within the answer_start), while our datasets loading script removes these leading blank spaces. I'm going to fix our script so that all lead...
## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start']. For example: id = '56d1f453e7d4791d009025bd' answers = {'text': ['P...
71
squad_v2 dataset contains misalignment between the answer text and the context value at the answer index ## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location spe...
[ -0.23303000628948212, -0.32953783869743347, -0.044109899550676346, 0.37493598461151123, 0.08782324194908142, -0.07634200155735016, 0.10575704276561737, 0.2279035747051239, -0.2927468419075012, 0.15778344869613647, -0.08416294306516647, 0.04506871476769447, 0.3250357210636139, 0.06854541599...
https://github.com/huggingface/datasets/issues/2585
squad_v2 dataset contains misalignment between the answer text and the context value at the answer index
If you are going to be altering the data cleaning from the source Squad dataset, here is one thing to consider. There are occasional double spaces separating words which it might be nice to get rid of. Either way, thank you.
## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start']. For example: id = '56d1f453e7d4791d009025bd' answers = {'text': ['P...
41
squad_v2 dataset contains misalignment between the answer text and the context value at the answer index ## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location spe...
[ -0.23303000628948212, -0.32953783869743347, -0.044109899550676346, 0.37493598461151123, 0.08782324194908142, -0.07634200155735016, 0.10575704276561737, 0.2279035747051239, -0.2927468419075012, 0.15778344869613647, -0.08416294306516647, 0.04506871476769447, 0.3250357210636139, 0.06854541599...
https://github.com/huggingface/datasets/issues/2583
Error iteration over IterableDataset using Torch DataLoader
Hi ! This is because you first need to format the dataset for pytorch: ```python >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) >>> torch_iterable_dataset = dataset.with_format("torch") >>> assert isinstance...
## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case wh...
93
Error iteration over IterableDataset using Torch DataLoader ## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch Iter...
[ -0.17598839104175568, -0.33258265256881714, -0.012141015380620956, 0.2365848422050476, 0.1552753895521164, 0.020412376150488853, 0.43117448687553406, 0.0018050287617370486, -0.16896097362041473, 0.24600963294506073, 0.09634552150964737, 0.24420931935310364, -0.33548101782798767, -0.4065968...
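A runnable sketch of the pattern in the reply above, carried one step further to the Torch `DataLoader` the original question was about; the batch size is arbitrary.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)
# Formatting for PyTorch makes the streaming dataset a torch IterableDataset,
# so it can be passed straight to DataLoader.
torch_iterable = dataset.with_format("torch")

dataloader = DataLoader(torch_iterable, batch_size=4)
for batch in dataloader:
    # each batch is a dict keyed by column name ("id", "text", ...)
    print(batch.keys())
    break
```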
https://github.com/huggingface/datasets/issues/2583
Error iteration over IterableDataset using Torch DataLoader
Thank you for that and the example! What you said makes total sense; I just somehow missed that and assumed HF IterableDataset was a subclass of Torch IterableDataset.
## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case wh...
28
Error iteration over IterableDataset using Torch DataLoader ## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch Iter...
[ -0.17598839104175568, -0.33258265256881714, -0.012141015380620956, 0.2365848422050476, 0.1552753895521164, 0.020412376150488853, 0.43117448687553406, 0.0018050287617370486, -0.16896097362041473, 0.24600963294506073, 0.09634552150964737, 0.24420931935310364, -0.33548101782798767, -0.4065968...
https://github.com/huggingface/datasets/issues/2573
Finding right block-size with JSON loading difficult for user
This was actually a second error arising from a too small block-size in the json reader. Finding the right block size is difficult for the layman user
As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets > json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
27
Finding right block-size with JSON loading difficult for user As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets > json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383) This was actually a second error arising from a too small block-size in the json reade...
[ 0.0956072136759758, 0.0020218619611114264, -0.22474884986877441, 0.40687301754951477, 0.13413351774215698, -0.08703842014074326, 0.35157862305641174, 0.3959694802761078, 0.505370557308197, 0.4565081000328064, 0.26593467593193054, -0.09846661239862442, 0.022460540756583214, -0.0638387650251...
https://github.com/huggingface/datasets/issues/2569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
Hi @suzyahyah, thanks for reporting. The message you get is indeed not an error message, but a warning coming from Hugging Face `transformers`. The complete warning message is: ``` Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_hea...
When applying bertscore out of the box, ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']``` Following the typical ...
167
Weights of model checkpoint not initialized for RobertaModel for Bertscore When applying bertscore out of the box, ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_...
[ -0.059550341218709946, -0.36452993750572205, 0.09010986238718033, 0.1289697140455246, 0.4273764193058014, 0.07482700794935226, 0.2718203663825989, 0.12423470616340637, 0.12511076033115387, 0.13339757919311523, -0.14913855493068695, 0.21721382439136505, -0.1455291211605072, -0.0643484815955...
https://github.com/huggingface/datasets/issues/2569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
Hi @suzyahyah, I have created a Pull Request to filter out that warning message in this specific case, since the behavior is as expected and the warning message can only cause confusion for users (as in your case).
When applying bertscore out of the box, ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']``` Following the typical ...
38
Weights of model checkpoint not initialized for RobertaModel for Bertscore When applying bertscore out of the box, ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_...
[ -0.15063084661960602, -0.29753467440605164, 0.07905333489179611, 0.05135172978043556, 0.46475544571876526, 0.040576472878456116, 0.19793567061424255, 0.19078676402568817, 0.13858537375926971, 0.20844976603984833, -0.13037726283073425, 0.2869921028614044, -0.12756791710853577, -0.0866212993...
https://github.com/huggingface/datasets/issues/2561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
Hi ! I just tried to reproduce what you said: - create a local builder class - use `load_dataset` - update the builder class code - use `load_dataset` again (with or without `ignore_verifications=True`) And it creates a new cache, as expected. What modifications did you do to your builder's code ?
## Describe the bug If i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`. This slows down debugging and cache generator for very large datasets. ## Steps to reproduce th...
51
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` ## Describe the bug If i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`...
[ -0.3804817199707031, 0.4886573255062103, 0.03263238072395325, 0.193564310669899, 0.144663468003273, 0.1507261097431183, 0.3192700147628784, 0.3748849034309387, 0.12904877960681915, 0.2183319479227066, 0.21496087312698364, 0.36303719878196716, 0.11989084631204605, -0.2817021906375885, 0.0...
https://github.com/huggingface/datasets/issues/2561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
Hi @lhoestq. Thanks for your reply. I just did minor modifications for which it should not regenerate cache (for e.g. Adding a print statement). Overall, regardless of cache miss, there should be an explicit option to allow reuse of existing cache if author knows cache shouldn't be affected.
## Describe the bug If i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`. This slows down debugging and cache generator for very large datasets. ## Steps to reproduce th...
48
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` ## Describe the bug If i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`...
[ -0.2966100573539734, 0.4845605492591858, 0.04970746859908104, 0.1848863661289215, 0.0803922787308693, 0.21865548193454742, 0.2824614644050598, 0.37450700998306274, 0.1949125975370407, 0.17794625461101532, 0.2753896117210388, 0.35446998476982117, 0.08640958368778229, -0.2514213025569916, ...
https://github.com/huggingface/datasets/issues/2561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
The cache is based on the hash of the dataset builder's code, so changing the code makes it recompute the cache. You could still rename the cache directory of your previous computation to the new expected cache directory if you want to avoid having to recompute it and if you're sure that it would generate the exact ...
## Describe the bug If i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`. This slows down debugging and cache generator for very large datasets. ## Steps to reproduce th...
82
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` ## Describe the bug If i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`...
[ -0.3809875249862671, 0.4386592507362366, -0.0031961980275809765, 0.17506973445415497, 0.0950205847620964, 0.15524223446846008, 0.23175182938575745, 0.41996970772743225, 0.15906161069869995, 0.309204638004303, 0.16211599111557007, 0.28301164507865906, 0.11000045388936996, -0.208207711577415...
https://github.com/huggingface/datasets/issues/2561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
Hi @apsdehal, If you decide to follow @lhoestq's suggestion to rename the cache directory of your previous computation to the new expected cache directory, you can do the following to get the name of the new expected cache directory once #2500 is merged: ```python from datasets import load_dataset_builder dataset...
## Describe the bug If i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`. This slows down debugging and cache generator for very large datasets. ## Steps to reproduce th...
73
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` ## Describe the bug If i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`...
[ -0.43786269426345825, 0.4782145619392395, -0.007668428122997284, 0.11805299669504166, 0.15692700445652008, 0.18813155591487885, 0.2960323095321655, 0.4827200472354889, 0.17503754794597626, 0.3437815308570862, 0.22528555989265442, 0.3112470209598541, 0.06439507752656937, -0.1951678097248077...
https://github.com/huggingface/datasets/issues/2559
Memory usage consistently increases when processing a dataset with `.map`
Hi ! Can you share the function you pass to `map` ? I know you mentioned it would be hard to share some code but this would really help to understand what happened
## Describe the bug I have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps on increasing with time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease arrow writer's batch...
33
Memory usage consistently increases when processing a dataset with `.map` ## Describe the bug I have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps on increasing with time. I tried using `D...
[ -0.15703536570072174, -0.1291140913963318, 0.007089510094374418, 0.42113929986953735, 0.18093499541282654, -0.006408083718270063, 0.05896928906440735, 0.11856141686439514, 0.21515002846717834, 0.13143660128116608, 0.40360957384109497, 0.5179601907730103, -0.18737539649009705, -0.0342991352...
https://github.com/huggingface/datasets/issues/2554
Multilabel metrics not supported
Hi @GuillemGSubies, thanks for reporting. I have made a PR to fix this issue and allow metrics to be computed also for multilabel classification problems.
When I try to use a metric like F1 macro I get the following error: ``` TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' ``` There is an explicit casting here: https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L...
25
Multilabel metrics not supported When I try to use a metric like F1 macro I get the following error: ``` TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' ``` There is an explicit casting here: https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17...
[ -0.2212650030851364, -0.17884747684001923, 0.004477391485124826, 0.30089911818504333, 0.6725276708602905, -0.13599036633968353, 0.48068612813949585, -0.10233516991138458, 0.22830823063850403, 0.3166794180870056, -0.1240743100643158, 0.29819267988204956, -0.2808498740196228, 0.4046332240104...
https://github.com/huggingface/datasets/issues/2553
load_dataset("web_nlg") NonMatchingChecksumError
Hi ! Thanks for reporting. This is due to the WebNLG repository that got updated today. I just pushed a fix at #2558 - this shouldn't happen anymore in the future.
Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev") ``` Gives ``` NonMatchingChecksumError: Checksums didn't match for dataset source files: ['h...
31
load_dataset("web_nlg") NonMatchingChecksumError Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev") ``` Gives ``` NonMatchingChecksumError: Ch...
[ -0.19279494881629944, 0.12983907759189606, -0.13637611269950867, 0.040642425417900085, 0.2651570737361908, 0.008678526617586613, 0.13199782371520996, 0.4511447548866272, 0.2963729500770569, 0.08619622886180878, -0.06551362574100494, 0.27239641547203064, 0.022088488563895226, -0.02552179619...
https://github.com/huggingface/datasets/issues/2552
Keys should be unique error on code_search_net
Two questions: - with `datasets-cli env` we don't have any information on the dataset script version used. Should we give access to this somehow? Either as a note in the Error message or as an argument with the name of the dataset to `datasets-cli env`? - I don't really understand why the id is duplicated in the code...
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
66
Keys should be unique error on code_search_net ## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
[ -0.009492403827607632, 0.004908621311187744, -0.10378047823905945, 0.3735044002532959, 0.04693065583705902, -0.05958148464560509, 0.23553311824798584, 0.2376767098903656, 0.05807110294699669, 0.04209858179092407, -0.08941294252872467, 0.4072093665599823, -0.15770073235034943, 0.10433845967...
https://github.com/huggingface/datasets/issues/2552
Keys should be unique error on code_search_net
Thanks for reporting. There was indeed an issue with the keys. The key was the addition of the file id and row id, which resulted in collisions. I just opened a PR to fix this at https://github.com/huggingface/datasets/pull/2555 To help users debug this kind of errors we could try to show a message like this ```pyt...
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
97
Keys should be unique error on code_search_net ## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
[ -0.039389826357364655, -0.014791853725910187, -0.09282350540161133, 0.3366980254650116, 0.08731978386640549, -0.013939643278717995, 0.20039069652557373, 0.2712211608886719, 0.08827383071184158, 0.06673934310674667, -0.08619927614927292, 0.393449991941452, -0.16239213943481445, 0.1329391002...
https://github.com/huggingface/datasets/issues/2552
Keys should be unique error on code_search_net
and are we sure there are not a lot of datasets which are now broken with this change?
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
18
Keys should be unique error on code_search_net ## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
[ -0.04690854996442795, -0.027153147384524345, -0.1106119155883789, 0.349099338054657, 0.08127691596746445, -0.015619395300745964, 0.18118061125278473, 0.2770839333534241, 0.08159126341342926, 0.05441868305206299, -0.059907130897045135, 0.3916928470134735, -0.18466131389141083, 0.11661632359...
https://github.com/huggingface/datasets/issues/2552
Keys should be unique error on code_search_net
Thanks to the dummy data, we know for sure that most of them work as expected. `code_search_net` wasn't caught because the dummy data only have one dummy data file while the dataset script can actually load several of them using `os.listdir`. Let me take a look at all the other datasets that use `os.listdir` to see if...
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
61
Keys should be unique error on code_search_net ## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
[ -0.036426711827516556, -0.020195595920085907, -0.09643834829330444, 0.3276671767234802, 0.0761767104268074, -0.013212069869041443, 0.20809926092624664, 0.29120180010795593, 0.09849930554628372, 0.07118485122919083, -0.06703731417655945, 0.3835591673851013, -0.17116571962833405, 0.136399805...
https://github.com/huggingface/datasets/issues/2552
Keys should be unique error on code_search_net
I found one issue on `fever` (PR here: https://github.com/huggingface/datasets/pull/2557) All the other ones seem fine :)
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
16
Keys should be unique error on code_search_net ## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
[ -0.020208142697811127, 0.0007681514834985137, -0.09075668454170227, 0.3496333658695221, 0.0851370319724083, -0.024872198700904846, 0.19917982816696167, 0.2581067681312561, 0.09322965145111084, 0.07367496937513351, -0.09278659522533417, 0.3846125304698944, -0.17428533732891083, 0.1145088076...
https://github.com/huggingface/datasets/issues/2552
Keys should be unique error on code_search_net
Hi! Got same error when loading other dataset: ```python3 load_dataset('wikicorpus', 'raw_en') ``` tb: ```pytb --------------------------------------------------------------------------- DuplicatedKeysError Traceback (most recent call last) /opt/conda/lib/python3.8/site-packages/datasets...
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
91
Keys should be unique error on code_search_net ## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
[ -0.050656478852033615, -0.026531191542744637, -0.09670862555503845, 0.33766722679138184, 0.09382715821266174, -0.005184992216527462, 0.20963668823242188, 0.27232256531715393, 0.09786691516637802, 0.08467810600996017, -0.07676653563976288, 0.37975865602493286, -0.18557733297348022, 0.131250...
https://github.com/huggingface/datasets/issues/2552
Keys should be unique error on code_search_net
The wikicorpus issue has been fixed by https://github.com/huggingface/datasets/pull/2844 We'll do a new release of `datasets` soon :)
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
17
Keys should be unique error on code_search_net ## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] ...
[ -0.07428094744682312, -0.0351794995367527, -0.08668220043182373, 0.35816046595573425, 0.06232630833983421, -0.03549264371395111, 0.16847532987594604, 0.2639014422893524, 0.09276624768972397, 0.12361404299736023, -0.044513899832963943, 0.37844181060791016, -0.15191598236560822, 0.1347223073...
https://github.com/huggingface/datasets/issues/2549
Handling unlabeled datasets
Hi @nelson-liu, You can pass the parameter `features` to `load_dataset`: https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset If you look at the code of the MNLI script you referred in your question (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py#L62-L...
Hi! Is there a way for datasets to produce unlabeled instances (e.g., the `ClassLabel` can be nullable). For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"...
55
Handling unlabeled datasets Hi! Is there a way for datasets to produce unlabeled instances (e.g., the `ClassLabel` can be nullable). For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_la...
[ 0.05881330743432045, 0.07810116559267044, 0.13162529468536377, 0.3273364305496216, 0.25035133957862854, 0.1380116492509842, 0.7898841500282288, -0.1512291133403778, 0.057231687009334564, 0.14990782737731934, -0.20007795095443726, 0.5356727838516235, -0.2677469849586487, 0.24402230978012085...
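A sketch of the "pass `features`" suggestion above, using the generic JSON loader. The column names and file name are assumptions for illustration; the point is that declaring the label column as `Value("string")` instead of a `ClassLabel` lets rows with a missing or empty label load without a casting error.

```python
from datasets import load_dataset, Features, Value

features = Features(
    {
        "premise": Value("string"),
        "hypothesis": Value("string"),
        "gold_label": Value("string"),  # nullable plain string, not ClassLabel
    }
)

# "unlabeled.jsonl" is a hypothetical file in the MNLI-like format without gold labels.
dataset = load_dataset("json", data_files="unlabeled.jsonl", features=features)
```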
https://github.com/huggingface/datasets/issues/2548
Field order issue in loading json
Hi @luyug, thanks for reporting. The good news is that we fixed this issue only 9 days ago: #2507. The patch is already in the master branch of our repository and it will be included in our next `datasets` release version 1.9.0. Feel free to reopen the issue if the problem persists.
## Describe the bug The `load_dataset` function expects columns in alphabetical order when loading json files. Similar bug was previously reported for csv in #623 and fixed in #684. ## Steps to reproduce the bug For a json file `j.json`, ``` {"c":321, "a": 1, "b": 2} ``` Running the following, ``` f= data...
52
Field order issue in loading json ## Describe the bug The `load_dataset` function expects columns in alphabetical order when loading json files. Similar bug was previously reported for csv in #623 and fixed in #684. ## Steps to reproduce the bug For a json file `j.json`, ``` {"c":321, "a": 1, "b": 2} ``` ...
[ 0.18250906467437744, 0.23169340193271637, -0.015927579253911972, 0.21059878170490265, 0.32902008295059204, -0.06300381571054459, 0.22351810336112976, 0.4288537800312042, 0.009034413844347, 0.022668354213237762, 0.07545371353626251, 0.6574709415435791, 0.3253486156463623, -0.012466421350836...
https://github.com/huggingface/datasets/issues/2547
Dataset load_from_disk is too slow
Hi ! It looks like an issue with the virtual disk you are using. We load datasets using memory mapping. In general it makes it possible to load very big files instantaneously since it doesn't have to read the file (it just assigns virtual memory to the file on disk). However there happens to be issues with virtual ...
@lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in t...
121
Dataset load_from_disk is too slow @lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usa...
[ -0.4744631052017212, -0.3652116656303406, -0.09103222191333771, 0.5078070759773254, 0.2949647307395935, 0.0777808204293251, 0.08729571104049683, 0.2941162586212158, 0.5636070966720581, 0.11371616274118423, 0.1030271127820015, 0.31005117297172546, -0.03649459034204483, -0.13101865351200104,...
https://github.com/huggingface/datasets/issues/2547
Dataset load_from_disk is too slow
Okay, that's exactly my case, with spot instances... Therefore this isn't something we can change in any way to be able to load the dataset faster? I mean, what do you do internally at huggingface for being able to use spot instances with datasets efficiently?
@lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in t...
45
Dataset load_from_disk is too slow @lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usa...
[ -0.31963685154914856, -0.5587561130523682, -0.062026649713516235, 0.5160351395606995, 0.23286059498786926, 0.020378487184643745, 0.12462975829839706, 0.1379353106021881, 0.5566673278808594, 0.11270618438720703, -0.06148020923137665, 0.3583334982395172, -0.015343185514211655, 0.211330503225...
https://github.com/huggingface/datasets/issues/2547
Dataset load_from_disk is too slow
There are no solutions yet unfortunately. We're still trying to figure out a way to make the loading instantaneous on such disks, I'll keep you posted
@lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in t...
26
Dataset load_from_disk is too slow @lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usa...
[ -0.4694783687591553, -0.24230213463306427, -0.11789412796497345, 0.394116073846817, 0.24218131601810455, 0.05294777452945709, 0.19930587708950043, 0.26577091217041016, 0.4733239710330963, 0.10509912669658661, 0.053388431668281555, 0.44948869943618774, -0.02447594329714775, 0.01359662320464...
https://github.com/huggingface/datasets/issues/2543
switching some low-level log.info's to log.debug?
Hi @stas00, thanks for pointing out this issue with logging. I agree that `datasets` can sometimes be too verbose... I can create a PR and we could discuss there the choice of the log levels for different parts of the code.
In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do a consistent logging across all involved components. The trouble is that now we get a ton of these: ``` 06/23/2021 12:15:31 - INFO - da...
41
switching some low-level log.info's to log.debug? In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do a consistent logging across all involved components. The trouble is that now we get a t...
[ 0.1725226193666458, -0.34119707345962524, 0.08715631067752838, 0.2024422436952591, 0.25608471035957336, 0.11248690634965897, 0.5520289540290833, 0.31011226773262024, -0.07186204940080643, -0.3018714189529419, -0.04163791239261627, 0.17572860419750214, -0.21787194907665253, 0.28082922101020...
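Until the log levels themselves are revisited, one way to quiet the INFO messages discussed above is to lower the verbosity of the `datasets` logger; a minimal sketch using its public logging helpers.

```python
from datasets.utils.logging import set_verbosity_warning

# Only warnings and errors from `datasets` are emitted after this call.
set_verbosity_warning()
```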
https://github.com/huggingface/datasets/issues/2542
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
Hi @VictorSanh, thank you for reporting this issue with duplicated keys. - The issue with "adversarial_qa" was fixed 23 days ago: #2433. Current version of `datasets` (1.8.0) includes the patch. - I am investigating the issue with `drop`. I'll ping you to keep you informed.
## Describe the bug Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("drop") load_dataset("adversarial_qa", "adversarialQA") ``` ## Expected results Th...
45
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA` ## Describe the bug Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("...
[ -0.0905088484287262, -0.11600686609745026, 0.009455200284719467, 0.4709075391292572, 0.08408752828836441, 0.028658684343099594, 0.3028094172477722, 0.23976510763168335, 0.12779688835144043, 0.18143528699874878, -0.14392127096652985, 0.5057469606399536, -0.04829593747854233, 0.1198931485414...
https://github.com/huggingface/datasets/issues/2542
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
Hi @VictorSanh, the issue is already fixed and merged into master branch and will be included in our next release version 1.9.0.
## Describe the bug Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("drop") load_dataset("adversarial_qa", "adversarialQA") ``` ## Expected results Th...
22
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA` ## Describe the bug Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("...
[ -0.0905088484287262, -0.11600686609745026, 0.009455200284719467, 0.4709075391292572, 0.08408752828836441, 0.028658684343099594, 0.3028094172477722, 0.23976510763168335, 0.12779688835144043, 0.18143528699874878, -0.14392127096652985, 0.5057469606399536, -0.04829593747854233, 0.1198931485414...
https://github.com/huggingface/datasets/issues/2538
Loading partial dataset when debugging
Hi ! `load_dataset` downloads the full dataset once and caches it, so that subsequent calls to `load_dataset` just reloads the dataset from your disk. Then when you specify a `split` in `load_dataset`, it will just load the requested split from the disk. If your specified split is a sliced split (e.g. `"train[:10]"`),...
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing as per the other issues. Is there a wa...
98
Loading partial dataset when debugging I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing ...
[ -0.36718904972076416, -0.14316895604133606, -0.021608319133520126, 0.36220434308052063, -0.06753049045801163, 0.27566948533058167, 0.574037492275238, 0.41680431365966797, 0.26127564907073975, -0.08351020514965057, -0.08284469693899155, 0.14032061398029327, -0.17392075061798096, 0.182089030...
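A small illustration of the split-slicing behaviour described in the reply above: the first call downloads and prepares the full dataset once, and subsequent calls only read the requested slice back from the cache on disk. The slice size is an arbitrary debugging value.

```python
from datasets import load_dataset

# The full imdb train split is prepared and cached once; only 128 rows are returned.
tiny_train = load_dataset("imdb", split="train[:128]")
print(len(tiny_train))  # 128
```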
https://github.com/huggingface/datasets/issues/2538
Loading partial dataset when debugging
Hi @reachtarunhere. Besides the above insights provided by @lhoestq and @thomwolf, there is also a Dataset feature in progress (I plan to finish it this week): #2249, which will allow you, when calling `load_dataset`, to pass the option to download/preprocess/cache only some specific split(s), which will definitely ...
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing as per the other issues. Is there a wa...
71
Loading partial dataset when debugging I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing ...
[ -0.42554771900177, -0.10989382117986679, -0.034853313118219376, 0.3032790720462799, -0.07041159272193909, 0.28239259123802185, 0.5759149193763733, 0.4311378598213196, 0.30114316940307617, -0.08975882828235626, -0.09464352577924728, 0.17002831399440765, -0.167522132396698, 0.238328427076339...
https://github.com/huggingface/datasets/issues/2538
Loading partial dataset when debugging
Thanks all for responding. Hey @albertvillanova Thanks. Yes, I would be interested. @lhoestq I think even if a small split is specified it loads up the full dataset from the disk (please correct me if this is not the case). Because it does seem to be slow to me even on subsequent calls. There is no repeated d...
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing as per the other issues. Is there a wa...
85
Loading partial dataset when debugging I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing ...
[ -0.41591259837150574, -0.14814642071723938, -0.012523077428340912, 0.35715124011039734, -0.09925161302089691, 0.27749383449554443, 0.5004062056541443, 0.4125296175479889, 0.3049233853816986, -0.16910290718078613, -0.09666934609413147, 0.12072679400444031, -0.11352930217981339, 0.2518862485...
https://github.com/huggingface/datasets/issues/2532
Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task
Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner). The pipeline works fine with most instance i...
18
Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task [This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging exa...
[ -0.1536964476108551, 0.0790131539106369, 0.0737818107008934, -0.038231730461120605, 0.09215747565031052, -0.32468152046203613, -0.08998160809278488, 0.10603179782629013, -0.461760938167572, 0.19867414236068726, -0.20853734016418457, 0.30742090940475464, 0.2766764760017395, 0.03088500909507...
https://github.com/huggingface/datasets/issues/2532
Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task
> Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**? Oh, I am sorry I would reopen the post on huggingface/transformers
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner). The pipeline works fine with most instance i...
30
Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task [This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging exa...
[ -0.1536964476108551, 0.0790131539106369, 0.0737818107008934, -0.038231730461120605, 0.09215747565031052, -0.32468152046203613, -0.08998160809278488, 0.10603179782629013, -0.461760938167572, 0.19867414236068726, -0.20853734016418457, 0.30742090940475464, 0.2766764760017395, 0.03088500909507...
https://github.com/huggingface/datasets/issues/2522
Documentation Mistakes in Dataset: emotion
Hi, this issue has been already reported in the dataset repo (https://github.com/dair-ai/emotion_dataset/issues/2), so this is a bug on their side.
As per documentation, Dataset: emotion Homepage: https://github.com/dair-ai/emotion_dataset Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion Emotion is a dataset of English Twitter messages with eight b...
20
Documentation Mistakes in Dataset: emotion As per documentation, Dataset: emotion Homepage: https://github.com/dair-ai/emotion_dataset Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion Emotion is a dat...
[ 0.22127074003219604, -0.3492063581943512, -0.10875467956066132, 0.576239287853241, 0.267790824174881, 0.17377594113349915, 0.29420924186706543, 0.11103971302509308, -0.12832535803318024, 0.2038235068321228, -0.12744274735450745, 0.04295259714126587, -0.2023334801197052, -0.1131509989500045...
https://github.com/huggingface/datasets/issues/2516
datasets.map pickle issue resulting in invalid mapping function
Hi ! `map` calls `__getstate__` using `dill` to hash your map function. This is used by the caching mechanism to recover previously computed results. That's why you don't see any `__setstate__` call. Why do you change an attribute of your tokenizer when `__getstate__` is called ?
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m...
46
datasets.map pickle issue resulting in invalid mapping function I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it...
[ -0.22488734126091003, 0.19312112033367157, 0.1127530038356781, 0.028775125741958618, 0.03419579938054085, -0.32377052307128906, 0.13853639364242554, 0.19508472084999084, 0.346808522939682, -0.10001835227012634, 0.19218143820762634, 0.7253856658935547, -0.19296035170555115, 0.20866984128952...
https://github.com/huggingface/datasets/issues/2516
datasets.map pickle issue resulting in invalid mapping function
@lhoestq because if I try to pickle my custom tokenizer (it contains a pure python pretokenization step in an otherwise rust backed tokenizer) I get > Exception: Error while attempting to pickle Tokenizer: Custom PreTokenizer cannot be serialized So I remove the Custom PreTokenizer in `__getstate__` and then rest...
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m...
121
datasets.map pickle issue resulting in invalid mapping function I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it...
[ -0.22488734126091003, 0.19312112033367157, 0.1127530038356781, 0.028775125741958618, 0.03419579938054085, -0.32377052307128906, 0.13853639364242554, 0.19508472084999084, 0.346808522939682, -0.10001835227012634, 0.19218143820762634, 0.7253856658935547, -0.19296035170555115, 0.20866984128952...
https://github.com/huggingface/datasets/issues/2516
datasets.map pickle issue resulting in invalid mapping function
Actually, maybe I need to deep copy `self.__dict__`? That way `self` isn't modified. That was my intention and I thought it was working - I'll double-check after the weekend.
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m...
29
datasets.map pickle issue resulting in invalid mapping function I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it...
[ -0.22488734126091003, 0.19312112033367157, 0.1127530038356781, 0.028775125741958618, 0.03419579938054085, -0.32377052307128906, 0.13853639364242554, 0.19508472084999084, 0.346808522939682, -0.10001835227012634, 0.19218143820762634, 0.7253856658935547, -0.19296035170555115, 0.20866984128952...
https://github.com/huggingface/datasets/issues/2516
datasets.map pickle issue resulting in invalid mapping function
Doing a deep copy results in the warning: > 06/20/2021 16:02:15 - WARNING - datasets.fingerprint - Parameter 'function'=<function tokenize_function at 0x7f1e95f05d40> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms a...
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m...
114
datasets.map pickle issue resulting in invalid mapping function I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it...
[ -0.22488734126091003, 0.19312112033367157, 0.1127530038356781, 0.028775125741958618, 0.03419579938054085, -0.32377052307128906, 0.13853639364242554, 0.19508472084999084, 0.346808522939682, -0.10001835227012634, 0.19218143820762634, 0.7253856658935547, -0.19296035170555115, 0.20866984128952...
https://github.com/huggingface/datasets/issues/2516
datasets.map pickle issue resulting in invalid mapping function
Looks like there is still an object that is not pickable in your `tokenize_function` function. You can test if an object can be pickled and hashed by using ```python from datasets.fingerprint import Hasher Hasher.hash(my_object) ``` Under the hood it pickles the object to compute its hash, so it calls `__g...
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m...
52
datasets.map pickle issue resulting in invalid mapping function I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it...
[ -0.22488734126091003, 0.19312112033367157, 0.1127530038356781, 0.028775125741958618, 0.03419579938054085, -0.32377052307128906, 0.13853639364242554, 0.19508472084999084, 0.346808522939682, -0.10001835227012634, 0.19218143820762634, 0.7253856658935547, -0.19296035170555115, 0.20866984128952...
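A usage sketch for the `Hasher` check suggested above; `tokenize_function` here is a trivial placeholder standing in for the real map function (or tokenizer wrapper) being debugged.

```python
from datasets.fingerprint import Hasher

def tokenize_function(example):
    # placeholder for the user's real map function / tokenizer wrapper
    return example

try:
    print(Hasher.hash(tokenize_function))
except Exception as err:
    # an exception here means the object can't be pickled, so the cache
    # falls back to a random fingerprint for the map call
    print("cannot be hashed:", err)
```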
https://github.com/huggingface/datasets/issues/2516
datasets.map pickle issue resulting in invalid mapping function
I figured it out, the problem is deep copy itself uses pickle (unless you implement `__deepcopy__`). So when I changed `__getstate__` it started throwing an error. I'm sure there's a better way of doing this, but in order to return the `__dict__` without the non-pikelable pre-tokeniser and without modifying self I r...
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m...
126
datasets.map pickle issue resulting in invalid mapping function I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it...
[ -0.22488734126091003, 0.19312112033367157, 0.1127530038356781, 0.028775125741958618, 0.03419579938054085, -0.32377052307128906, 0.13853639364242554, 0.19508472084999084, 0.346808522939682, -0.10001835227012634, 0.19218143820762634, 0.7253856658935547, -0.19296035170555115, 0.20866984128952...
https://github.com/huggingface/datasets/issues/2516
datasets.map pickle issue resulting in invalid mapping function
I'm glad you figured something out :) Regarding hashing: we're not using hashing for the same purpose as the python `__hash__` purpose (which is in general for dictionary lookups). For example it is allowed for python hashing to not return the same hash across sessions, while our hashing must return the same hashes ...
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m...
61
datasets.map pickle issue resulting in invalid mapping function I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it...
[ -0.22488734126091003, 0.19312112033367157, 0.1127530038356781, 0.028775125741958618, 0.03419579938054085, -0.32377052307128906, 0.13853639364242554, 0.19508472084999084, 0.346808522939682, -0.10001835227012634, 0.19218143820762634, 0.7253856658935547, -0.19296035170555115, 0.20866984128952...
https://github.com/huggingface/datasets/issues/2514
Can datasets remove duplicated rows?
Hi ! For now this is probably the best option. We might add a feature like this in the feature as well. Do you know any deduplication method that works on arbitrary big datasets without filling up RAM ? Otherwise we can have do the deduplication in memory like pandas but I feel like this is going to be limiting fo...
**Is your feature request related to a problem? Please describe.** i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that.. **Describe the solution you'd like*...
63
Can datasets remove duplicated rows? **Is your feature request related to a problem? Please describe.** i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that.. ...
[ 0.14068499207496643, -0.16548238694667816, -0.16508835554122925, 0.15400472283363342, 0.0489431768655777, 0.23949095606803894, 0.27690914273262024, 0.06343916058540344, -0.33765140175819397, 0.019205207005143166, -0.03411052003502846, 0.23469799757003784, 0.11909009516239166, 0.20621468126...
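Not a `datasets` feature, but the pandas round-trip the question already mentions can be written as a short, self-contained workaround (in-memory only, so it suits datasets that fit in RAM):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a"], "label": [0, 1, 0]})
# Round-trip through pandas to drop exact duplicate rows.
deduped = Dataset.from_pandas(ds.to_pandas().drop_duplicates(), preserve_index=False)
print(len(deduped))  # 2
```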
https://github.com/huggingface/datasets/issues/2514
Can datasets remove duplicated rows?
Yes, I'd like to work on this feature once I'm done with #2500, but first I have to do some research, and see if the implementation wouldn't be too complex. In the meantime, maybe [this lib](https://github.com/TomScheffers/pyarrow_ops) can help. However, note that this lib operates directly on pyarrow tables and rel...
**Is your feature request related to a problem? Please describe.** i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that.. **Describe the solution you'd like*...
80
Can datasets remove duplicated rows? **Is your feature request related to a problem? Please describe.** i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that.. ...
[ 0.10441092401742935, 0.01838838681578636, -0.1353912502527237, 0.0949539914727211, 0.033135101199150085, 0.14071302115917206, 0.3450477421283722, 0.2536356747150421, -0.6958327293395996, 0.0018973195692524314, -0.0936519131064415, 0.37214285135269165, 0.03419315442442894, 0.165042921900749...
https://github.com/huggingface/datasets/issues/2514
Can datasets remove duplicated rows?
> Hi ! For now this is probably the best option. > We might add a feature like this in the feature as well. > > Do you know any deduplication method that works on arbitrary big datasets without filling up RAM ? > Otherwise we can have do the deduplication in memory like pandas but I feel like this is going to be l...
**Is your feature request related to a problem? Please describe.** i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that.. **Describe the solution you'd like*...
119
Can datasets remove duplicated rows? **Is your feature request related to a problem? Please describe.** i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that.. ...
[ 0.10129871219396591, -0.19105306267738342, -0.13927024602890015, 0.28646939992904663, 0.07518839836120605, 0.20366136729717255, 0.2715584337711334, 0.09866946190595627, -0.33350053429603577, -0.016334347426891327, -0.0008099075639620423, 0.28262972831726074, 0.040186598896980286, 0.1977717...
https://github.com/huggingface/datasets/issues/2511
Add C4
Update on this: I'm computing the checksums of the data files. It will be available soon
## Adding a Dataset - **Name:** *C4* - **Description:** *https://github.com/allenai/allennlp/discussions/5056* - **Paper:** *https://arxiv.org/abs/1910.10683* - **Data:** *https://huggingface.co/datasets/allenai/c4* - **Motivation:** *Used a lot for pretraining* Instructions to add a new dataset can be found [h...
16
Add C4 ## Adding a Dataset - **Name:** *C4* - **Description:** *https://github.com/allenai/allennlp/discussions/5056* - **Paper:** *https://arxiv.org/abs/1910.10683* - **Data:** *https://huggingface.co/datasets/allenai/c4* - **Motivation:** *Used a lot for pretraining* Instructions to add a new dataset can be...
[ -0.2586114704608917, -0.23689334094524384, -0.2156256139278412, 0.07441923022270203, 0.20855382084846497, -0.087480328977108, 0.1077217236161232, 0.30312982201576233, 0.06430105119943619, 0.32313570380210876, -0.21327082812786102, 0.06667786091566086, -0.15428952872753143, 0.43328449130058...
https://github.com/huggingface/datasets/issues/2508
Load Image Classification Dataset from Local
Hi ! Is this folder structure a standard, a bit like imagenet ? In this case maybe we can consider having a dataset loader for cifar-like, imagenet-like, squad-like, conll-like etc. datasets ? ```python from datasets import load_dataset my_custom_cifar = load_dataset("cifar_like", data_dir="path/to/data/dir") ``...
**Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of each class in each folder, the ability to load th...
48
Load Image Classification Dataset from Local **Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of e...
[ -0.17115196585655212, -0.1667463779449463, 0.03135296329855919, 0.45320066809654236, 0.26325491070747375, -0.0811183974146843, 0.22850817441940308, 0.05178449675440788, 0.31946441531181335, 0.26813292503356934, -0.19803541898727417, 0.09343159943819046, -0.3347459137439728, 0.3943923413753...
https://github.com/huggingface/datasets/issues/2508
Load Image Classification Dataset from Local
@lhoestq I think we'll want a generic `image-folder` dataset (same as 'imagenet-like'). This is like `torchvision.datasets.ImageFolder`, and is something vision folks are used to seeing.
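For comparison, the `torchvision` convention referred to here looks roughly like this (the `PetImages/` path is only an example; it must contain one sub-folder per class):

```python
from torchvision import datasets, transforms

# Labels are inferred from the sub-folder names: PetImages/<class_name>/<image files>
dataset = datasets.ImageFolder(root="PetImages/", transform=transforms.ToTensor())

image, label = dataset[0]
print(dataset.classes, image.shape, label)
```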
**Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of each class in each folder, the ability to load th...
25
Load Image Classification Dataset from Local **Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of e...
[ -0.2203829437494278, -0.17356444895267487, 0.037992410361766815, 0.46598926186561584, 0.19796615839004517, -0.05095729976892471, 0.18018637597560883, 0.07256569713354111, 0.3113460838794708, 0.24052415788173676, -0.14534765481948853, 0.1418924778699875, -0.2927549481391907, 0.2843792736530...
https://github.com/huggingface/datasets/issues/2508
Load Image Classification Dataset from Local
Opening this back up, since I'm planning on tackling this. Already posted a quick version of it on my account on the hub. ```python from datasets import load_dataset ds = load_dataset('nateraw/image-folder', data_files='PetImages/') ```
**Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of each class in each folder, the ability to load th...
33
Load Image Classification Dataset from Local **Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of e...
[ -0.20274943113327026, -0.18869063258171082, 0.01317841187119484, 0.40116360783576965, 0.2593051493167877, -0.027974193915724754, 0.19759726524353027, 0.1451178640127182, 0.3119896948337555, 0.28576549887657166, -0.19353516399860382, 0.12354744970798492, -0.30474144220352173, 0.327020823955...
https://github.com/huggingface/datasets/issues/2503
SubjQA wrong boolean values in entries
@arnaudstiegler I have just checked that these mismatches are already present in the original dataset: https://github.com/megagonlabs/SubjQA We are going to contact the dataset owners to report this.
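One way to see the reported mismatch for yourself is to tabulate how the boolean flag co-occurs with the raw score. A sketch, assuming the `subjqa` loader with its `books` configuration and the column names quoted in the issue:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("subjqa", "books", split="train")

# Count (score, flag) pairs without assuming which direction the derivation should go.
pairs = Counter(zip(ds["question_subj_level"], ds["is_ques_subjective"]))
for (level, flag), count in sorted(pairs.items()):
    print(f"question_subj_level={level}  is_ques_subjective={flag}  count={count}")
```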
## Describe the bug SubjQA seems to have a boolean that's consistently wrong. It defines: - question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective). - is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are...
27
SubjQA wrong boolean values in entries ## Describe the bug SubjQA seems to have a boolean that's consistently wrong. It defines: - question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective). - is_ques_subjective: A boolean subjectivity label derived from ques...
[ 0.1235659047961235, 0.06930442899465561, 0.027031617239117622, 0.16452011466026306, -0.33659592270851135, -0.006898868829011917, 0.086507149040699, 0.045239534229040146, -0.09279773384332657, 0.18643911182880402, -0.09195936471223831, 0.4202955663204193, 0.21497048437595367, 0.254909873008...
https://github.com/huggingface/datasets/issues/2503
SubjQA wrong boolean values in entries
I have: - opened an issue in their repo: https://github.com/megagonlabs/SubjQA/issues/3 - written an email to all the paper authors
## Describe the bug SubjQA seems to have a boolean that's consistently wrong. It defines: - question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective). - is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are...
19
SubjQA wrong boolean values in entries ## Describe the bug SubjQA seems to have a boolean that's consistently wrong. It defines: - question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective). - is_ques_subjective: A boolean subjectivity label derived from ques...
[ 0.1710737645626068, 0.058735623955726624, 0.044212210923433304, 0.09914498031139374, -0.3185442090034485, -0.10436014086008072, 0.09934596717357635, 0.024408308789134026, -0.14664700627326965, 0.22348684072494507, -0.061402592808008194, 0.3840121924877167, 0.22751615941524506, 0.2279734760...
https://github.com/huggingface/datasets/issues/2499
Python Programming Puzzles
Thanks @VictorSanh! There's also a [notebook](https://aka.ms/python_puzzles) and [demo](https://aka.ms/python_puzzles_study) available now to try out some of the puzzles
## Adding a Dataset - **Name:** Python Programming Puzzles - **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis - **Paper:** https://arxiv.org/pdf/2106.05784.pdf - **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scro...
17
Python Programming Puzzles ## Adding a Dataset - **Name:** Python Programming Puzzles - **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis - **Paper:** https://arxiv.org/pdf/2106.05784.pdf - **Data:** https://github.com/microsoft/P...
[ -0.0690370425581932, -0.1561272144317627, -0.27598685026168823, -0.052750758826732635, -0.00064777274383232, 0.0701649859547615, 0.01956307888031006, 0.2562190890312195, 0.003462387016043067, 0.12764178216457367, 0.03861632198095322, 0.3095811903476715, -0.3876287043094635, 0.3575745522975...
https://github.com/huggingface/datasets/issues/2498
Improve torch formatting performance
That’s interesting thanks, let’s see what we can do. Can you detail your last sentence? I’m not sure I understand it well.
**Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background: I am working on LM pre-training using the HF ecosystem. We use encoded HF Wikipedia an...
22
Improve torch formatting performance **Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background: I am working on LM pre-training using the HF ec...
[ -0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354...
https://github.com/huggingface/datasets/issues/2498
Improve torch formatting performance
Hi ! I just re-ran a quick benchmark and using `to_numpy()` seems to be faster now: ```python import pyarrow as pa # I used pyarrow 3.0.0 import numpy as np n, max_length = 1_000, 512 low, high, size = 0, 2 << 16, (n, max_length) table = pa.Table.from_pydict({ "input_ids": np.random.default_rng(42).in...
**Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background: I am working on LM pre-training using the HF ecosystem. We use encoded HF Wikipedia an...
150
Improve torch formatting performance **Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background: I am working on LM pre-training using the HF ec...
[ -0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354...
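To make the Arrow → NumPy → torch path from the benchmark above concrete, here is a minimal self-contained sketch on a toy int64 column (the sizes and layout are made up, not taken from the original benchmark):

```python
import numpy as np
import pyarrow as pa
import torch

# Toy flat token buffer standing in for one column of a pre-training dataset.
n, max_length = 1_000, 512
ids = np.random.default_rng(42).integers(0, 2 << 16, size=n * max_length, dtype=np.int64)
arr = pa.array(ids)

# Arrow -> NumPy is zero-copy for primitive types without nulls,
# and torch.from_numpy shares the NumPy buffer instead of copying it
# (PyTorch warns that the Arrow-backed buffer is read-only).
as_numpy = arr.to_numpy(zero_copy_only=True)
as_torch = torch.from_numpy(as_numpy).reshape(n, max_length)
print(as_torch.shape, as_torch.dtype)
```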