This video has good stuff in it. I promise it is worth the watch.

  • invertedspear@lemm.ee · 4 months ago

    I’m going to say we’re actually heading in this direction, though it will ultimately be different. We haven’t really been using touch screens all that long, and we’re still figuring things out. What’s more valuable than an app icon? One that also tells you the date, or how many emails you have. We’re just starting to delve into widgets, live tiles, and contextually sensitive icons. Maybe we have an agenda widget whose tap behavior changes based on the time: within 5 minutes of when you have to leave to make your appointment, a tap opens maps with the route already up; within 5 minutes of the start time, it opens whatever meeting tool you’re using, or your phone app, and connects you to the meeting; all other times it opens your calendar. That’s what we could do with an LCARS-type dynamic interface. The major difference is in how we use computers today vs. how we used them when LCARS was dreamed up. Back then it was all about the flow of data, so all the context sensitivity in LCARS was about routing and flow. It would be much more PDA-driven if reimagined today.
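
    As a minimal sketch of that time-based dispatch, assuming hypothetical names throughout (WidgetAction, Appointment, and onWidgetTap are illustrative, not any real widget API):

    ```kotlin
    import java.time.Duration
    import java.time.Instant

    // Hypothetical actions a tap on the agenda widget could trigger.
    enum class WidgetAction { OPEN_ROUTE_IN_MAPS, JOIN_MEETING, OPEN_CALENDAR }

    // Assumed appointment shape: when it starts, and when you must leave to make it.
    data class Appointment(val startTime: Instant, val leaveByTime: Instant)

    // Pick the tap action from the current time and the next appointment:
    // near the leave-by time, open the route; near the start time, join the
    // meeting; otherwise fall back to the calendar.
    fun onWidgetTap(now: Instant, next: Appointment?): WidgetAction {
        if (next == null) return WidgetAction.OPEN_CALENDAR
        val window = Duration.ofMinutes(5)
        fun within(t: Instant) = Duration.between(now, t).abs() <= window
        return when {
            within(next.leaveByTime) -> WidgetAction.OPEN_ROUTE_IN_MAPS
            within(next.startTime) -> WidgetAction.JOIN_MEETING
            else -> WidgetAction.OPEN_CALENDAR
        }
    }
    ```

    The specifics don’t matter; the point is that the icon stays put while the action behind it shifts with context.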

    So I see a future where something like LCARS makes intuitive sense, but it would suit our way of using computers rather than being so focused on data routing and flow.

    Also, helm control being LCARS would be terrible. Better to have a pilot with HOTAS controls and a navigator using LCARS, or else limit the ship to very slow, bulky movements, with HOTAS in the shuttles and fighters. Maybe humans could adapt to touch-screen piloting, but I don’t see how with so little feedback.