These are the official weights of ConFiDeNet.

Installation

pip3 install git+https://github.com/Onkarsus13/transformers.git@confidenet

Usage

from PIL import Image
import torch
from transformers import ConFiDeNetForDepthEstimation, ConFiDeNetImageProcessor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the input image (replace <Image Path> with the path to your file)
image = Image.open("<Image Path>").convert("RGB")

# Load the image processor and model from the Hub
image_processor = ConFiDeNetImageProcessor.from_pretrained("onkarsus13/ConFiDeNet-Large-VQ-32")
model = ConFiDeNetForDepthEstimation.from_pretrained("onkarsus13/ConFiDeNet-Large-VQ-32").to(device)

inputs = image_processor(images=image, return_tensors="pt").to(device)

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# Resize the prediction back to the original image resolution
post_processed_output = image_processor.post_process_depth_estimation(
    outputs, target_sizes=[(image.height, image.width)],
)

# Save the predicted depth map as a 16-bit grayscale PNG
depth = post_processed_output[0]["predicted_depth_uint16"].cpu().numpy()
depth = Image.fromarray(depth, mode="I;16")
depth.save("depth.png")
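The saved 16-bit PNG preserves the full depth range but renders almost black in most image viewers. A minimal sketch of how you might rescale such a uint16 depth map to an 8-bit grayscale preview (the `depth_to_preview` helper and the synthetic ramp input are illustrative, not part of the model's API):

```python
import numpy as np
from PIL import Image

def depth_to_preview(depth_u16: np.ndarray) -> Image.Image:
    """Linearly rescale a uint16 depth map to an 8-bit grayscale preview."""
    d = depth_u16.astype(np.float32)
    lo, hi = d.min(), d.max()
    # Guard against a constant depth map to avoid division by zero
    d = (d - lo) / (hi - lo) if hi > lo else np.zeros_like(d)
    return Image.fromarray((d * 255).astype(np.uint8), mode="L")

# Synthetic ramp stands in for a real prediction here
demo = np.linspace(0, 65535, 640 * 480, dtype=np.uint16).reshape(480, 640)
preview = depth_to_preview(demo)
preview.save("depth_preview.png")
```

For the real output, pass the array loaded from `depth.png` (or the `predicted_depth_uint16` tensor converted to NumPy) instead of the synthetic ramp.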