Question about the Screen to World Space node

I’ve been experimenting with this node and I can’t make sense of the output values. I feed it the screen coordinates of my mouse, and the resulting coordinates range from 0 to a huge value, over 100 units, while my camera is only 1 unit wide.


The first column of coordinates is the position of my mouse cursor, which right now is where the blue arrow points. The second set is the output of the Screen to World Space node. Here everything is as expected: the camera is about 38 units up and looking straight down.

However, here, with the cursor roughly halfway across the screen horizontally, you can see that the X value of the output rises a lot. Since my camera is 1 unit wide, I expected it to range from 0 to 1.
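For reference, this is how I understood the usual convention: the pixel position is first remapped to normalized device coordinates in the -1 to 1 range, and only then multiplied by the inverse of the camera’s view-projection matrix. Below is a plain-Haxe sketch of just that first step (no Armory API); the 1920×1080 viewport size and the function names are my own assumptions:

```haxe
class ScreenToNdc {
    // Map pixel coordinates to normalized device coordinates (-1..1).
    // A screen-to-world node would then multiply this NDC point by the
    // inverse of the camera's view-projection matrix (not shown here).
    static function screenToNdc(x:Float, y:Float, width:Float, height:Float):Array<Float> {
        var ndcX = (x / width) * 2.0 - 1.0;
        var ndcY = 1.0 - (y / height) * 2.0; // screen Y grows downward, NDC Y grows upward
        return [ndcX, ndcY];
    }

    static function main() {
        // Cursor at x = 950 on the assumed 1920x1080 viewport, roughly mid-screen.
        trace(screenToNdc(950, 540, 1920, 1080)); // [-0.0104..., 0]
    }
}
```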

Can someone explain to me how this works and how to use it?

Sorry if this is poorly worded; I’m in a hurry and can’t think of better words. Thanks for your patience.

(Edited due to a mistake on my part)

The output of the “Screen to World Space” node seems incorrect.
I think it’s better to report it on GitHub.

I’ll report it then.
By the way, while trying to figure out how to do the same with Haxe, I checked the Pick Surface node and noticed a piece of code (physics.pickClosest). Where could I check what it does?
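For context, this is only my own guess at what a “pick closest” call does conceptually: cast a ray from the camera through the cursor and return the nearest hit. It is not Armory’s actual code, just a minimal sketch that intersects an assumed ray with the ground plane:

```haxe
class PickSketch {
    // Conceptual sketch only: intersect a ray (origin + t * dir) with the plane z = 0
    // and return the hit point, or null if the ray never reaches the plane.
    static function rayHitGround(origin:Array<Float>, dir:Array<Float>):Null<Array<Float>> {
        if (Math.abs(dir[2]) < 1e-6) return null; // ray parallel to the plane
        var t = -origin[2] / dir[2];              // distance along the ray to z = 0
        if (t < 0) return null;                   // plane is behind the ray origin
        return [origin[0] + t * dir[0], origin[1] + t * dir[1], 0.0];
    }

    static function main() {
        // Camera about 38 units up and looking straight down, as in my screenshots.
        trace(rayHitGround([0.5, 0.2, 38.0], [0.0, 0.0, -1.0])); // [0.5, 0.2, 0]
    }
}
```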

Okay, I just realized I’m an idiot. The scale may be off, but it’s consistent: in the first image the mouse was at x = 950, not 650.
Guess I can just go ahead if I figure out a scaling vector.
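Something like this is what I have in mind for the scale fix; the observed maximum here is just a value I eyeballed from my tests, nothing authoritative:

```haxe
class ScaleFix {
    // Rescale the node's output so it spans 0..1 instead of 0..observedMax.
    // observedMax is an eyeballed value; measure it for your own scene.
    static inline var observedMax = 100.0;

    static function rescale(v:Float):Float {
        return v / observedMax;
    }

    static function main() {
        trace(rescale(50.0)); // 0.5
    }
}
```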

Coordinate transformation is very difficult and confusing.

It’s “World to Screen” information rather than “Screen to World”, but the important points are written here.

This is useful because the “Screen to World” conversion is the inverse of the “World to Screen” conversion, although it doesn’t seem to work well just by taking that into account.
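For example, with a simple orthographic camera the two directions are just the same mapping run forwards and backwards. A rough sketch, where the 1-unit camera width and the 1920 px viewport are only assumptions:

```haxe
class OrthoInverse {
    // World-to-screen for an assumed 1-unit-wide orthographic camera on a 1920 px viewport:
    // worldX in -0.5..0.5 maps to pixelX in 0..1920.
    static inline var camWidth = 1.0;
    static inline var screenW = 1920.0;

    static function worldToScreenX(worldX:Float):Float {
        return (worldX / camWidth + 0.5) * screenW;
    }

    // Screen-to-world along X is the same mapping inverted.
    static function screenToWorldX(pixelX:Float):Float {
        return (pixelX / screenW - 0.5) * camWidth;
    }

    static function main() {
        trace(worldToScreenX(0.25));   // 1440
        trace(screenToWorldX(1440.0)); // 0.25
    }
}
```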


Thanks, I’ll see if I can figure it out when I get the time.