AI & ML interests

Uncensored AI that's actually Uncensored

Recent Activity

darkc0de updated a Space about 1 hour ago: xortron/chat
Roman190928 updated a dataset about 1 month ago: xortron/SYS2RReasoning-125000
Roman190928 published a dataset about 1 month ago: xortron/SYS2RReasoning-125000

Fishtiks posted an update about 1 month ago
Have extra processing power sitting idle? Have old devices you haven't used in years? I primarily recommend Folding@home for protein folding on your GPUs, but also BOINC, particularly on Android and Apple devices because of their lower power usage. I've been doing this, and I get about 14,000 hours a week in, primarily mapping cancer markers via BOINC on an Aiyara cluster of Androids. I also hold a sign out by the highway encouraging people to join BOINC. Dylan Bucci, a young promoter of BOINC on school computers, wished before he died to get as many people as possible doing this, and the Dylan Bucci challenge was implemented in his honor. No reason to wait for a challenge, though. If you care about such things, there is an associated cryptocurrency for such processing, but it's worth doing simply to save lives.

I look forward to AI-related endeavors like this, and only know of NATIX Drive&, Acurast, and HYRA AI, all of which use Androids I'd rather devote to BOINC. However, they also allow one to be paid, and to totally devote old devices to the processing. On the same topic, DexPOINT monetizes your Android's Internet connection.

BOINC runs on Android, Apple, PCs of all sorts, Pi devices, Chrome devices, Fire Sticks, TV boxes, Android watches with the full OS, and all sorts of things that have Android or the ability to run Linux, although it will also run on Windows. Folding@home works best on PCs with modern NVIDIA GPUs, and in a cool room. You can also run BOINC on modern computers, but they must be throttled, because they often get too hot.
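On the throttling point: the BOINC client reads a global_prefs_override.xml file from its data directory, which can cap CPU usage so a modern machine doesn't overheat. A minimal sketch (the 50% values are illustrative assumptions, not from the post):

```xml
<!-- global_prefs_override.xml in the BOINC data directory.
     Caps BOINC at half the CPU cores, each running at most 50% of the time. -->
<global_preferences>
   <max_ncpus_pct>50</max_ncpus_pct>
   <cpu_usage_limit>50</cpu_usage_limit>
</global_preferences>
```

After editing the file, tell the running client to re-read preferences (e.g. via the BOINC Manager or `boinccmd --read_global_prefs_override`).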
  • 4 replies
Fishtiks posted an update about 1 month ago
When you have good goals, along the lines of what will inevitably be implemented, the initial response is an uproar of "Why?!", followed by lazy attempts, and then someone actually doing it right. Now, say you're the only driving force behind getting it right, and the rest want some other resource, like money. They will get the money but not the influence, and the one with the influence, if pure in their goals, doesn't need the money, and so ends up poor. I'm a ghostwriter. This is the ghostwriter's dilemma. I get things right often, but I can't touch the things. Shouldn't being right more often than the people who touch this stuff, alone, afford me access? Well, we're working on it.
morongosteve updated a Space 4 months ago
darkc0de updated a Space 7 months ago
darkc0de published a Space 7 months ago
John6666 posted an update 10 months ago
If your Space has stopped working after a restart, mainly over the last 5 days (https://discuss.huggingface.co/t/my-space-suddenly-went-offline-the-cpu-cannot-restart/151121/22), try some of the following.
1. Add pydantic==2.10.6 to requirements.txt, or upgrade Gradio to the latest version.
2. Upgrade PyTorch to 2.2.0 or later (torch>=2.2.0 for Zero GPU Spaces).
3. Pin Transformers to 4.49.0 or earlier (transformers<=4.49.0 for Spaces using Transformers or Diffusers).
4. Pin huggingface_hub to an older version (huggingface_hub==0.25.2) if an error like "cached_download is not available" occurs or inference does not work properly.
5. Specifying WORKDIR in a Dockerfile may cause the application to fail to start with error 137 (Docker Spaces; https://discuss.huggingface.co/t/error-code-137-cache-error/152177).

About pydantic==2.10.6:
https://discuss.huggingface.co/t/error-no-api-found/146226
https://discuss.huggingface.co/t/internal-server-error-bool-not-iterable/149494
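As a concrete sketch, the pins from items 1–4 could be combined in requirements.txt like this (versions exactly as listed in the post; apply only the ones relevant to your Space):

```
# Pins suggested in the post above; use only what applies to your Space.
pydantic==2.10.6
torch>=2.2.0             # Zero GPU Spaces
transformers<=4.49.0     # Spaces using Transformers or Diffusers
huggingface_hub==0.25.2  # if "cached_download is not available" errors occur
```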

Edit:
Zero GPU Spaces have been upgraded from A100 to H200. This is likely why older versions of PyTorch are no longer supported; in fact, an error message to that effect was displayed.
zero-gpu-explorers/README#163
  • 2 replies
Fishtiks posted an update 10 months ago
I want to process AI for free. I know about Hyra AI, Acurast, NATIX, and some other stuff you can do on your phone. I mean that I want to process toward your projects for free on my computer. I can do a little now, but I can do much more if I'm able to upgrade (nobody is telling me where they're getting H100s, but I may be able to get custom cards from the source). I was curious if any distributed processing is being done with PC and HPC, like BOINC and Folding@home, but specifically for AI, and I figured this is the place to ask.

What projects can you recommend to put my CPU and GPU to use until I potentially get a dual CPU, dual to triple custom GPU, custom NPU, and mini-OPU setup, like Jean Zay, but smaller? I don't have that many resources to put to use currently, but I have more than the Androids I'm using for my Aiyara cluster for BOINC, so help me use the gaming PC for something more useful than gaming. I had somewhat promised that I'd offer the new setup to process for others, but I'm starting before I may even get it.
John6666 posted an update 11 months ago

John6666 posted an update over 1 year ago
@victor @not-lain There has been a sudden and unusual outbreak of spam postings on the HF Forum that seem to be aimed at relaying online videos and commenting on them. It is also spanning multiple languages for some reason. I've flagged it too, but I'm not sure if the staff will be able to keep up with the manual measures in the future.
John6666 posted an update over 1 year ago
@victor Sorry for the repetitiveness.

I'm not sure if Post is the right place to report such an error, but it seems to be a server error unrelated to the Zero GPU space error the other day, so I don't know where else to report it.

Since this morning, I have been getting a strange error when running inference from space in Gradio 3.x.
Yntec ( @Yntec ) discovered it, but he does not have a Pro subscription, so I am reporting it on his behalf.

The error message is as follows. (Note: "1girl" and other common prompts will return cached output, so experiment with unusual prompts.)

Thank you in advance.

John6666/blitz_diffusion_error
John6666/GPU-stresser-t2i-error
ValueError: Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF']
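The error message itself suggests one workaround for fragmentation: setting max_split_size_mb via the PYTORCH_CUDA_ALLOC_CONF environment variable before PyTorch's CUDA allocator starts. A minimal sketch (the 128 MiB value is an illustrative assumption, not from the post):

```python
import os

# The allocator reads PYTORCH_CUDA_ALLOC_CONF at startup, so this must run
# before `import torch` (or at least before the first CUDA allocation).
# max_split_size_mb caps the size of split blocks to reduce fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

In a Space, the same variable can instead be set in the Space's environment settings so it applies before any Python code runs.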

John6666 posted an update over 1 year ago
@victor

Excuse me. I would like to report the following bug (or new specification) that is probably the cause of the fatal stalls occurring in Zero GPU Spaces throughout HF.
Thanks.

zero-gpu-explorers/README#104
  • 3 replies