  • Not only is the will lacking, but also… you’d have to come up with near-universal standards for suit-hardware-to-software communication, and you’d have to get the … gloves or the mocap suit or a camera setup to the point where it’s actually fairly cheap to manufacture, like a mouse or something.

    I’ve known people who tinker with this stuff, either as hobbyists or for college courses, so it’s certainly not impossible… but we are not really at a mass-production, standardized phase yet.

    And… that’s probably because no one has yet done a working proof of concept showing general, practical uses for this.

    But, after writing all this out…

    I am currently tinkering with game dev stuff myself, and oh boy is it hard to find decent animation files beyond the extremely basic ones that don’t require either significant money or time…

    But OpenCV exists, and decent webcams aren’t too expensive… and there are tools either in OpenCV or built on top of OpenCV that take your silhouette / mocap data and turn it into some kind of game character model / skeletal animation, or at least handle parts of that process.

    I know it’s possible to do at least halfway-decent mocap with just cameras and no suit now, but I don’t know if that works without feeding it to an AI datacenter for processing, or if it can run in real time on a laptop.

    If the latter is the case… well, then I may take my own shot at it, if nothing else just to mocap myself for some more interesting game anims; a rough sketch of the camera-only approach is below.
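
    Just to make the camera-only route concrete: a minimal sketch, assuming MediaPipe Pose as the landmark detector (which isn’t part of OpenCV itself, just one of the free tools commonly paired with it, and is built to run in real time on ordinary hardware, no datacenter needed):

    ```python
    # Rough sketch: single-webcam pose tracking with OpenCV + MediaPipe Pose.
    # MediaPipe is an assumption here -- any similar landmark detector would do.
    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose

    cap = cv2.VideoCapture(0)  # default webcam
    with mp_pose.Pose(model_complexity=1) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV captures BGR.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # 33 normalized landmarks (x, y in [0, 1], z relative to the hips).
                for lm in results.pose_landmarks.landmark:
                    print(round(lm.x, 3), round(lm.y, 3), round(lm.z, 3))
            cv2.imshow("pose", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
                break
    cap.release()
    cv2.destroyAllWindows()
    ```

    Getting landmarks like this is the easy half; retargeting them onto an actual game skeleton and exporting to something like FBX is the part the existing tools only partly cover.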

    Beyond that, there is a gigantic free dataset of mocap data from Carnegie Mellon University, but jesus h christ, it’s a barely documented mess, and it’s all raw mocap point-cloud data. Converting it all into something more broadly useful, like FBX on a standard, root-normalized skeleton… and breaking it down into distinct, specific movements… that’d be a lot of work.

    Like, teams-of-people levels of work, if you want an actually easy-to-use library in under 5 years’ time.
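
    To illustrate why: just reading the raw data out is the trivial first step. A minimal sketch, assuming the C3D flavor of the CMU files and the small `c3d` PyPI package (neither of which is the only option, and the filename is made up):

    ```python
    # Rough sketch: pulling raw marker positions out of one CMU-style C3D file.
    # Assumes the `c3d` PyPI package and a locally downloaded file; the filename
    # is a placeholder, not a real path from the dataset.
    import c3d

    with open("01_01.c3d", "rb") as handle:
        reader = c3d.Reader(handle)
        for frame_no, points, analog in reader.read_frames():
            # `points` is an (n_markers, 5) array: x, y, z, residual, camera mask.
            xyz = points[:, :3]
            if frame_no % 120 == 0:  # CMU captures are typically 120 Hz
                print(frame_no, xyz.mean(axis=0))
    ```

    Everything after that, mapping unlabeled markers onto a named, root-normalized skeleton and cutting the takes into labeled clips, is where the teams-of-people estimate comes from.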

    I did more recently manage to find a paper that had well-formed, cleaner data specific to mocapping karatekas and their techniques… but yeah, generally all that shit is either paywalled or a barely structured mess.


  • I mean… yes, some of what is described here already exists; it’s basically advanced VR controls.

    Scanning a human visually, à la the MSFT Kinect, is a thing you can do for maybe a rough estimate of overall body position, but to be highly accurate you basically need at least two cameras, and/or an AI cluster to process those images at high quality.
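
    The two-camera version is, at its core, just triangulation: once both cameras are calibrated, a joint seen in both images can be projected back into 3D. A minimal sketch with OpenCV, assuming the projection matrices already came out of a calibration step (all the numbers below are placeholders):

    ```python
    # Rough sketch: recovering a 3D joint position from two calibrated cameras.
    # P1 and P2 are 3x4 projection matrices from a prior calibration step
    # (placeholder values); pt1/pt2 are the same joint's normalized image
    # coordinates in each camera, e.g. from a pose detector.
    import numpy as np
    import cv2

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera 1 at the origin
    P2 = np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])  # camera 2, 30 cm to the side

    pt1 = np.array([[0.10], [0.05]])  # joint as seen by camera 1
    pt2 = np.array([[0.08], [0.05]])  # same joint as seen by camera 2

    homog = cv2.triangulatePoints(P1, P2, pt1, pt2)  # 4x1 homogeneous point
    xyz = (homog[:3] / homog[3]).ravel()
    print("estimated joint position:", xyz)
    ```

    None of that needs a data center; what eats compute is doing the 2D joint detection itself at high quality and high frame rates.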

    Hopefully we can agree that no solarpunk society is going to have an earth-destroying data center solely for rendering your HID inputs correctly.

    That, or you could use LiDAR, but that shit’s expensive (also, Tesla cars not having it is why they suck so much).

    Probably a much more practical solution is basically a VR suit, kinda like a slimmed-down mocap suit, or accessories like gloves and such, with accelerometers and gyroscopes for tracking joints and digits independently…

    which is basically what’s already used to remote-control humanoid robots.
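
    The per-joint tracking side of that is well-trodden: each little accelerometer + gyroscope node runs some flavor of sensor fusion to estimate its orientation. A minimal sketch of a complementary filter, one common cheap way to do it (the sensor-reading functions are placeholders, not any particular device’s API):

    ```python
    # Rough sketch: a complementary filter fusing one joint node's gyro and
    # accelerometer readings into a pitch estimate. read_gyro()/read_accel()
    # are placeholders for whatever the actual IMU driver provides.
    import math
    import time

    ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

    def read_gyro():
        return (0.0, 0.0, 0.0)   # placeholder: angular rate in rad/s (x, y, z)

    def read_accel():
        return (0.0, 0.0, 9.81)  # placeholder: acceleration in m/s^2 (x, y, z)

    pitch = 0.0
    last = time.monotonic()
    while True:
        now = time.monotonic()
        dt = now - last
        last = now

        gx, gy, gz = read_gyro()
        ax, ay, az = read_accel()

        # Gyro: integrate angular rate; accurate short-term but drifts.
        pitch_gyro = pitch + gy * dt
        # Accelerometer: gravity direction gives an absolute but noisy pitch.
        pitch_accel = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        # Blend the two estimates.
        pitch = ALPHA * pitch_gyro + (1 - ALPHA) * pitch_accel

        time.sleep(0.01)  # ~100 Hz
    ```

    Heading still drifts without a magnetometer or some skeleton constraint, which is part of why a full-body suit is fiddlier than a single handheld controller.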

    Walking around is always going to be… weird.

    It is possible to just wear anklets and knee pads with accelerometers and such as well, but if the screen is strapped to your face, you’re gonna walk into a wall or trip over something or get fatigued or nauseous eventually.

    Yeah, just have modular kits of wearable digit/joint/major-body-section accelerometers and gyroscopes; they all plug into a small backpack-like thing, or maybe a frontpack or w/e, that has a WiFi transceiver.
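
    The hub side of that is also simple in principle: batch up whatever the nodes report and stream it to the host. A minimal sketch (the packet layout, node IDs, and port are all made up for illustration, not any existing protocol):

    ```python
    # Rough sketch: the backpack/hub side packing per-node IMU readings into a
    # UDP packet for the host machine. The packet layout, node IDs, address,
    # and port are illustrative placeholders, not an existing protocol.
    import socket
    import struct
    import time

    HOST = ("192.168.0.10", 9999)  # placeholder address of the receiving PC
    NODE_IDS = range(16)           # e.g. 16 wearable sensor nodes

    def read_node(node_id):
        # Placeholder for polling one node over its wired link to the hub:
        # returns (accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z).
        return (0.0, 0.0, 9.81, 0.0, 0.0, 0.0)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        payload = struct.pack("<d", time.time())  # timestamp header
        for node_id in NODE_IDS:
            payload += struct.pack("<B6f", node_id, *read_node(node_id))
        sock.sendto(payload, HOST)
        time.sleep(0.01)  # ~100 Hz
    ```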


    So uh, long story short: a lot of this tech already exists in various niche use cases or hobby spaces… but uh, making, say, a general OS that works via basically a bunch of dancing and gestures?

    That’s probably a significantly more difficult thing to achieve.

    It’s not a tech problem so much as it is a conceptual problem: how do you even replace every possible thing you can do with a mouse and keyboard?

    Meta still can’t even figure out inverse kinematic simulation of a cartoon avatar’s legs convincingly.

    Apple’s attempt at their VR controls was basically an early, early alpha: very limited and limiting, and it just didn’t really work.


    … but uh, let’s please not go about lobotomizing ourselves with brain chips; I don’t see any reality where that, as a general-use paradigm, is not utterly horrifying.

    In today’s news, 1/3 of humanity got an actual mind virus due to a combined flaw in Bluetooth and an exploit in Redis, and they’ve mostly all had their brainchips overheat and/or explode.

    Our following story tonight: blink once to vote yes on soylent-greening them all, blink twice to vote no. Now please drink a verification can to continue; failure to do so will result in an immediate TOS violation and revocation of all anti-brain-malware support.