Tags: Transformers, GGUF, text-generation-inference, unsloth, mistral, Mistral_Star, Mistral_Quiet, Mistral, Mixtral, Question-Answer, Token-Classification, Sequence-Classification, SpydazWeb-AI, chemistry, biology, legal, code, climate, medical, LCARS_AI_StarTrek_Computer, chain-of-thought, tree-of-knowledge, forest-of-thoughts, visual-spacial-sketchpad, alpha-mind, knowledge-graph, entity-detection, encyclopedia, wikipedia, stack-exchange, Reddit, Cyber-series, MegaMind, Cybertron, SpydazWeb, Spydaz, LCARS, star-trek, mega-transformers, Mulit-Mega-Merge, Multi-Lingual, Afro-Centric, African-Model, Ancient-One, llama-cpp, gguf-my-repo, conversational
LeroyDyer/SpydazWeb_AI_HumanAGI_002-Q4_K_M-GGUF
This model was converted to GGUF format from LeroyDyer/SpydazWeb_AI_HumanAGI_002 using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo LeroyDyer/SpydazWeb_AI_HumanAGI_002-Q4_K_M-GGUF --hf-file spydazweb_ai_humanagi_002-q4_k_m.gguf -p "The meaning to life and the universe is"
Server:
llama-server --hf-repo LeroyDyer/SpydazWeb_AI_HumanAGI_002-Q4_K_M-GGUF --hf-file spydazweb_ai_humanagi_002-q4_k_m.gguf -c 2048
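Once the server is running, you can query it over HTTP. The following is a minimal sketch, assuming the server is listening on its default address (http://localhost:8080) and using its OpenAI-compatible chat completions endpoint; adjust the host, port, and payload to taste.
# assumes llama-server started as above, listening on localhost:8080
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "The meaning to life and the universe is"}], "max_tokens": 128}'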
Note: You can also use this checkpoint directly by following the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference through the main binary.
./llama-cli --hf-repo LeroyDyer/SpydazWeb_AI_HumanAGI_002-Q4_K_M-GGUF --hf-file spydazweb_ai_humanagi_002-q4_k_m.gguf -p "The meaning to life and the universe is"
or
./llama-server --hf-repo LeroyDyer/SpydazWeb_AI_HumanAGI_002-Q4_K_M-GGUF --hf-file spydazweb_ai_humanagi_002-q4_k_m.gguf -c 2048
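If you would rather keep a local copy of the weights instead of fetching them from the Hub at launch, one possible approach (a sketch, assuming the huggingface-cli tool is installed and the GGUF filename shown above) is to download the file first and point the binary at the local path:
# download the quantized file into the current directory (hypothetical local workflow)
huggingface-cli download LeroyDyer/SpydazWeb_AI_HumanAGI_002-Q4_K_M-GGUF spydazweb_ai_humanagi_002-q4_k_m.gguf --local-dir .
# run inference against the local file instead of --hf-repo/--hf-file
./llama-cli -m ./spydazweb_ai_humanagi_002-q4_k_m.gguf -p "The meaning to life and the universe is"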