Archive for the ‘Software’ Category

“Artificial” Intelligence is a myth

As I wrote in the past, my first job out of college was on an artificial intelligence project. The idea back then among engineers in the field was that, given enough smart programming and a tree-like knowledge database, we would eventually manage to create true artificial intelligence (AI). When I was working on the project, I truly believed in it. It was the coolest thing in my life up to that point! As the years went by, I realized that AI via that model is nothing but a fantasy, one that has been circulating in scientific circles since the late 1960s.

These days, a lot of people have become obsessed with the idea of the singularity (e.g. Ray Kurzweil), while others are sh1tting their pants about how “disastrous” it could be for the human race (Elon Musk being one of them).

Artificial intelligence has progressed since the 1990s: it’s no longer modeled around programming knowledge directly into the system; instead, it “learns” via the Internet. Meaning, by analyzing the behaviors and actions of internet users, Google can “guess” the most logical action for its AI to take in each given situation (crowd-sourcing the intelligence). Crowd-sourcing is a far smarter way to have your AI “learn” than the “static” ways of teaching an AI system word-by-word, as in the past.
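The crowd-sourcing model described above boils down to “pick the action most users took in the same situation”. A minimal sketch of that idea (the action names and the log are hypothetical, nothing Google-specific):

```python
from collections import Counter

# Hypothetical log: what different users did in the same situation
observed_actions = ["open_maps", "open_maps", "call_contact", "open_maps"]

def crowd_sourced_choice(actions):
    """The AI "guesses" by majority: whatever most people did wins."""
    return Counter(actions).most_common(1)[0][0]

crowd_sourced_choice(observed_actions)  # → "open_maps"
```

Note that such a system can only ever replay a decision someone in the crowd already made; it never originates one.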

However, even with crowd-sourcing, we will hit a wall in what AI can do for us via that method. It can surely become as “smart” as Star Trek’s Enterprise computer, which is not very smart. What we really want when we talk about AI is Mr Data, not the rather daft Enterprise computer. So we need a new model on which to base the design of our machines. Crowd-sourcing comes close, but it’s an exogenous way of evolution (it takes decisions based on actions already taken by the people it crowd-sources from), and so it can never evolve into true consciousness. True consciousness means free will, so a crowd-sourcing algorithm can never be “free”; it will always be a slave to the actions it was modeled on. In other words, that crowd-sourcing AI would always behave “too robotic”.

It took me 20 years to realize why AI hasn’t worked so far, and why our current models will never work. The simple reason being: just like energy, you can’t create consciousness from nothing, but you can always divide and share the existing one.

The idea for this came during my first lucid dream, almost exactly 2 years ago, when I first “met” my Higher Self (who names himself Heva or Shiva). In my encounter with him, which I described in detail here, I left one bit of information out of that blog post, because I promised him that I wouldn’t tell anyone. I think it’s time to break that promise though, because it’s the right time for that information to be shared. Basically, when I met him, he had a laptop with him, and on its screen you could see a whole load of live statistics about me: anger, vitality, love, fear etc etc etc. Please don’t laugh at the fact that a “spiritual entity” had a… laptop. You must understand that dreams are symbolic and are modeled around the brain of the dreamer (in this case, a human’s dream, and humans are only familiar with human tech). So what looked like a laptop to me, from his point of view, was some 5D ultra-supa tech that would look like psychedelic colors, who knows? Anyway, he definitely tried to hide that screen from me. I wasn’t supposed to look at it. Oops.

For a few months, I was angry about that revelation: “Why am I being monitored?”, “Why am I not truly free?”, “Am I just a sack of nuts & bolts to him?”.

Eventually, during my quest into these mystical topics, I realized what I am: I’m an instance of consciousness, expressed in this life as a human. I’m just a part of a Higher Self (who has lent his consciousness to many life forms at once), and he himself is borrowing his consciousness from another, even higher (and more abstract) entity, ad infinitum, until you go so high up in the hierarchy that you realize that everything is ONE. In other words, in some sense, I’m a biological robot to which Heva has lent some consciousness so he can operate it (and Heva is the same, for the higher entity that operates him, ad infinitum).

So, because “as above, so below”, I truly believe that the only way for us to create true AI in this world is to simply lend our consciousness to our machines. The tech for that is nearly here; we already have brain implants that can operate robotic arms, for example. I don’t think we’re more than 20-30 years away from a true breakthrough on that front, one that could let us operate machines with very little effort.

The big innovation in this idea is not that a human can sit on a couch and wirelessly operate a robot janitor. The big idea is that one human could operate 5 to 20 robots at the same time! This could create smart machines in our factories (or mining in asteroids) by the billions, which could absolutely explode the number of items produced, leading to explosive growth (economic, scientific, social).

5 to 20 robots at the same time, you say? Well, look. Sure, it’s a myth that humans only use 10% of their brains. However, when a human, let’s say, sweeps the floor, he/she doesn’t use that much brain power. In fact, most of the processing power is used to operate the body, not to actually take logical decisions (e.g. sweep here, but not here). That part is taken care of by the robotic body and its firmware (aka instinct), which means that if, for example, we need 30% of our brain power to sweep the floor with our own body, we might need only 5% to do the same thing with a robotic body. In other words, we outsource the most laborious part to the machine, and we only lend the machine just as much smartness as it needs to operate.
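As a back-of-the-envelope check of the “5 to 20 robots” figure, here is the arithmetic using the paragraph’s own illustrative percentages (the 30% and 5% are rough guesses from the text, not measurements):

```python
# Illustrative arithmetic only; the percentages are the blog's rough guesses.
own_body_cost = 30   # % of brain power to sweep the floor with your own body
robot_cost = 5       # % needed per robot once its firmware handles motor control

max_robots = 100 // robot_cost                        # 20, if all attention were free
while_embodied = (100 - own_body_cost) // robot_cost  # 14, while still running your own body
```

Under these assumptions, one person lands comfortably inside the 5-to-20 range quoted above.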

I know that this article will be laughed at by purely scientific people after reading the “spiritual nonsense” above; however, you can’t deny that there’s some logic in these AI implementation suggestions that could work, and that could revolutionize our lives. Interestingly, I haven’t seen any robotic/AI projects using that idea yet. There are a few sci-fi books that have touched on the idea of operating a machine via brainwaves, but they haven’t grasped the significance of it (otherwise they would have described a world much different from the rather traditional one they try to sell us), and they haven’t gotten the implementation right either. “Avatar” came close, but its implementation is too literal and limiting (one human becomes one alien, a 1-to-1 exchange).

An FAQ on why I base my AI theory on “as above so below”:

Q: If all consciousness in all living things is borrowed, why aren’t we all smarter?

A: The amount of consciousness in a species is limited by the brain that species carries. Just like if you could lend your consciousness to a cleaning robot (that has a firmware specifically made for cleaning jobs, plus a fixed processing power), you would only be able to lend it as much as it can carry. Not very much, that is.

Also, these robots could make mistakes, since humans themselves make mistakes, and the robots would now operate using our intelligence. But the mistakes would be limited, because that consciousness would only operate within the brain’s (CPU/firmware) limitations. Additionally, between stupid machines with no errors and much smarter ones with few errors, it’s obvious which one the market would go for.

Q: If Higher Selves exist, why wait for billions of years until humans evolved to be smarter and to have enough capacity for consciousness? Sounds like a waste of time.

A: Anyone who has done a DMT trip or two will tell you that time is an illusion. Time exists only for us, who live in this simulation/holographic universe. Outside of this experience, there is no time. Everything happens at once.

Additionally, having a natural evolution program running is much more efficient, flexible and expanding than creating “fixed” robots. So it’s well worth the wait.

Here, we must also understand that our master reality (the one we’re borrowing our consciousness from) is also creating our universe. The universe is simulated, and the computing power behind it is their minds. After eons, life evolves, and then their consciousness can take a more active role in that universe.

Q: Why can’t we remember who we truly are?

A: It’s a matter of efficiency, and also of how much consciousness our brains can hold (e.g. humans are more self-aware than flies). To survive in this world, we need an “ego” (in both the Buddhist and the Freudian/Jungian sense). The ego believes that it is a human; it’s got a name, a job, a favorite movie, a life here. It’s that ego that keeps us grounded here to “do our job” (whatever we are meant to do). So we create this mask, this fake self, so we can survive here. The ego is also what keeps us clinging to life with such perseverance. When, under meditation or entheogens, we experience “ego death” (the mask falling off), we can truly know that we’re everything that is, ever was and ever will be. In that state, we’re just consciousness, pure being. Unfortunately, that also means that we can’t operate in this world. So the ego is a safeguard, there for efficiency, to keep us grounded in this reality. It’s useful to experience ego death a few times in one’s life, in order to get perspective, but for our everyday life, we need the ego.

Q: Why did our Higher Self hook himself up and “re-incarnate” himself (i.e. lend consciousness) into many different life forms?

A: For the same reasons we will do so, when we have the technological capability: work, learning, research, fun, producing food, more fun… And of course, for no reason at all too. Sometimes, things just are. Natural evolution of things. For the same reason every species wants babies, maybe.

After eons of evolution, our own “robots” should be able to lend their consciousness to their own “robots”. It’s a dynamic system that happens automatically in the evolutionary process, until it reaches the most basic levels of consciousness and the simplest of universes (1D universes). Just as you can’t create energy, you can’t grow consciousness. You can only divide what already exists. As such, every time a given realm creates its own version of existence/simulated universe, by definition these new universes are more rule-based and more rigid than their master’s universe. That’s why “higher dimensions” are seen as ever-changing during DMT trips (entities and things constantly in 5D flux), while further down the consciousness tree, the universes have more laws that keep things solid and seemingly unchanging.

Q: Can we be hacked then?

A: YES. In a number of lucid dreams I’ve had devices installed in me, or was “modified” in some way by entities that clearly weren’t human. Others on DMT will tell you the same thing too. Sometimes we’re upgraded by the “good” guys, sometimes by the “bad” guys. Just like in any digital system. What can ya do?

Equally possible is that other individual instances of consciousness can inject themselves into your body and take it over for a while. That should still be considered a hack.

Q: Lucid dreams? So why do we sleep or dream?

A: For two reasons: because the body needs it (it must recharge in standby mode, while it runs a few daemons to fsck the file system, among others), and because the borrowed consciousness must take a break too.

Let’s assume that you were just driving a car. After a few hours, you would need to take a break — even if the car could still go on for many more hours without filling up with gas. If you are the consciousness that drives, and the car is the robot that is operated by that consciousness, you realize that the consciousness will need a break much earlier than the car itself would need one!

When we dream, our consciousness operates in levels of existence similar to that of our Higher Self, but because our consciousness is still “glued” to the human body, we interpret everything based on our human experiences. Most of these dreams are nothing but programs, designed to teach us things. In a recent dream, I was in a bus, among strangers. Then, out of nowhere, I became lucid. Suddenly, the people in the bus looked at me not as strangers anymore, but as participants in a dream that they had programmed and played roles in for me. After the curtains fell, and I knew I was dreaming and wasn’t taking “no” for an answer, they quickly answered questions for me like “why do we dream?” and “what is the hierarchy of the cosmos?”. Most of the time they speak in symbolism that only I can interpret, but sometimes they’re very direct.

Some of these entities seen in dreams are “spirit guides”, who probably aren’t very different from what “IT personnel” are for our computers at the office. No wonder spirit guides can change throughout our lives, there is more than one of them (although there’s usually a main entity ordering the others around), and Esther (my spirit guide) once told me: “What do you want again, calling me for a second time in this session? I’m on lunch”. LOL. And yes, they can get pissed off too (I made them so a few months ago). But they can also give you some super-important info (e.g. they showed me that my business Instagram account was under attack, and lo and behold, within a week, it happened).

People on DMT can decouple from their bodies and shoot much further than dreams allow (dreams are controlled, stand-by versions of experiencing the other realms, while DMT has no limits on where it can land you). While on DMT/LSD/shrooms/etc you can become one with your higher self, or with everything, visit other realms etc. However, when you come back to your body/brain, you “forget” most of what you saw, or it suddenly becomes incomprehensible, because, as I said above, your brain has limited function and capacity.

Q: If these other realms exist, why can’t we see them?

A: Our brain is a filtering mechanism. We live in a parallel existence, just like the part of our consciousness inside a robot would experience its existence as separate from that of its human master. The mind of the robot doesn’t perceive its external world the same way a human does (even if it has HD cameras that “see” the world as we do, the digital processing that ensues FILTERS out many things, because they’re not relevant to its function). Again, people who take low-dose shrooms, LSD, or DMT will start perceiving their own room as looking different (e.g. full of pixels, or full of colors that don’t exist, etc), and often, on a slightly higher dosage, they will see entities in that room that they normally wouldn’t be able to see with their eyes (and yet most dogs can, according to reports, since their brains have evolved differently). So basically, remove the (brain) filter for a while, and enjoy an upgraded version of reality.

Q: So if our brain can’t see these other realms, why can’t our machines/cameras see them either?

A: Some can. It’s just that seeing further into the light spectrum doesn’t mean that you can also humanly interpret it as a separate realm with its own intelligence. We’re still inside a box (a specific type of simulated universe that is computed by our own collective consciousness), and trying to see outside of it has its limitations. The only way to take a peek outside is to temporarily decouple your consciousness from your body (e.g. via DMT), so that you’re not actively participating in the creation of this universe but are free to explore other universes. The problem with DMT is that it can land you anywhere… including “hellish” worlds. It’s not predictable at all; it’s a crapshoot.

Q: Ok, so what’s the point of our life then?

A: The meaning of life is life itself. Or, no meaning at all. You give it one. Or not.

Q: So, is my life insignificant?

A: Yes and no. Yes, if you think of yourself in the vast cosmos from the human, cynical point of view (everything is a point of view). And no, because you were “built” for a function that you do perform, even if you don’t know it (even if you spend your life watching TV all day, you could still perform a specific function in another level of reality without knowing it — universes aren’t independent of each other).

You would only feel “small” after reading all this if you’re a selfish a$$hole. If you embrace that you’re everything instead, then all existential problems vanish.

Q: So why does everything exist then?

A: At the highest level (after you sum up all the hierarchical entities of consciousness), there’s only the ONE. One single consciousness, living inside the Void. Nothing else exists. It decided that dividing itself in a top-down fashion, creating many parallel worlds and universes and lending its consciousness to their living things, for the sake of experiencing itself, was the best action to take. In reality, everything is just a dream in the mind of that ONE consciousness. And everything is as it should be.

Each individual down the hierarchy of existence is built differently and lives under fluctuating circumstances, and therefore each of these individuals provides a different point of view on the things it can grasp. It’s that different point of view that the system is after: novelty. Novelty == new experiences for the ONE.

Q: Why is there evil in the world?

A: There is no “good” and “evil”. Everything is just a perspective. For the human race, for example, eating babies is a terrible thing. For Komodo dragons (which routinely eat their young), it’s just another day.

Also, in order for LOVE to exist (love == unity of all things), FEAR must also exist (fear == division of all things). One can’t exist without the other. And it has been decided by this ONE consciousness that it’s better to experience SOMETHING than to experience NOTHING for all eternity.

Q: Does this ONE consciousness answer to prayers?

A: No, it doesn’t give a sh1t; grow a pair. From its point of view, everything that can happen must happen, so it can be experienced (including very “bad” things). The ONE is an abstract construct, not a person. It’s you, and me, and everything else, as pure BEING.

Artificial Intelligence, Siri, Voice Actions

Artificial Intelligence. Or just a wanna-be service. It doesn’t really matter; Siri and Voice Actions are running the show today when it comes to “intelligent” assistants. But Siri is bound to be left behind, eventually.

Of course I’d had the suspicion for a while now, but after checking out the WWDC keynote today, it became clear to me that Apple does not aggregate the Internet to present information in a tight manner; rather, it strikes deals with big web sites. These sites then have to provide Apple with an API that exposes specific data, and Apple presents that information in a beautiful UI.

As cool as this looks, it doesn’t scale. Sure, for 2012 it might be good enough (since we’re coming from an era that had nothing like it before), but by 2020 this would be the wrong way to do AI. Apple cannot possibly hard-code support for this decade’s 500 new major web sites. It’s too much work, it’s error-prone, and “500” is still a small fraction of the web sites that people would be interested in. Apple will hit a wall with this type of engineering.

Even the opposite doesn’t work (every web site adding API hooks specifically for Siri, on a special URL with an API key). Do you remember Mac OS’s Sherlock (and its third-party equivalent, “Watson”)? Old Mac OS users should remember these apps! This would be the same thing! And remember how often the APIs behind the various plugins comprising those apps broke, eventually leading to their demise. They didn’t scale because the people running the various sites didn’t care to keep compatibility!

The only way forward for an AI that scales is for the AI itself to aggregate the internet by hooking into a search engine in a more specialized way, finding the right info, and presenting it to you. And because Google already owns and operates a “smart” search engine, it is in a unique position to provide a much more useful service later on. Sure, it looks bad now UI-wise, and it’s way less attractive or interesting (it doesn’t make me wanna use Voice Actions), but it’s got more potential than Siri in the long run.
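To make the scaling contrast concrete, here’s a hedged sketch (all names and data are hypothetical; this is not how Siri or Google are actually implemented): the per-site-deal model needs a hand-written adapter for every partner, while the search-aggregation model handles any query through one generic pipeline.

```python
# Per-site-deal model: one hand-written adapter per partner web site.
SITE_ADAPTERS = {
    "restaurants": lambda q: f"partner restaurant API result for {q!r}",
    "movies": lambda q: f"partner movie API result for {q!r}",
    # Every new domain needs a deal, new code, and ongoing maintenance.
}

def assistant_per_site(domain, query):
    adapter = SITE_ADAPTERS.get(domain)
    if adapter is None:
        # Dead end for the thousands of sites nobody wrote an adapter for.
        return "Sorry, I can't help with that."
    return adapter(query)

# Search-aggregation model: one generic pipeline over an existing index.
def assistant_search(query, search_engine):
    results = search_engine(query)  # the index already covers the whole web
    return results[0] if results else "No results."
```

The first model grows linearly in engineering effort with every supported site; the second grows with the index, which the search engine maintains anyway.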

If either of the two reaches the Singularity, it will be Google’s solution. I called it. Unless Apple acquires a bunch of startups, plus acquiring or licensing Bing. Then it could get interesting.

MAGIX Movie Edit Pro MX Premium 18

I tested the new version of MAGIX Movie Edit Pro MX Premium recently, version 18, and I was positively surprised. The application has caught up nicely with Vegas Platinum and Premiere Elements.

The biggest new features in this version are stereo 3D support, full 24p support, and nVidia/AMD-accelerated h.264 decoding & plugins. Three major third-party video app developers added support for MAGIX too: proDAD VitaScene 2 (special effects), NewBlueFX Light Blends (transitions), and Red Giant Magic Bullet Quick Looks (color grading).

Other features include screen capturing, DVD/BD burning with various templates, multi-track, multi-cam, fast image stabilization, primary & secondary color correction, masks, good advanced modes in its exporting dialogs, and even Twixtor-like slow motion.

The app is not perfect though. Non-accelerated h.264 decoding is not as fast as Sony Vegas’ plain decoding, so you’d better start saving for an nVidia card. The color correction plugins are not very versatile either. Finally, the project properties dialog is missing some important setup options compared to Vegas’.

On the other hand, this app can do things that Vegas can’t, including 64bit support. Stability and overall speed were good while testing the app, although usability could use some touching up.

Overall, I’d say that this version puts MAGIX in the top 3 of the consumer video editor market. It just needs more setup options in terms of flexibility, rather than more brand-new advanced features (e.g. I’d like to tell my editor that my footage is interlaced, etc). But it seems to be getting there!

Rating: 8/10

A Look at Premiere Elements 10

Adobe sent a free copy of their brand new Premiere Elements 10 video editor for review, and I took up the challenge to see what’s new. Premiere Elements and Vegas Platinum are undoubtedly the two best sub-$100 consumer video editing suites, and they can actually deliver Pro features when used to their full extent. They both can do 24p, time stretching, DVD and AVCHD burning, color correction etc.

For the 10th version, Adobe added some additional color correction plugins: Auto Tone and Vibrance, and a three-way color corrector. You can independently adjust color in highlights, shadows, and midtones, while the “Auto Tone” plugin is actually pretty accurate, even though it’s fully automatic.

Another major feature is 64bit support, but this only works on Windows 7 (Vista/XP won’t work with the 64bit version). There’s also “AVCHD exporting” now, which lets you export M2T or MP4 files with customization support. That’s the only exporting option I found that had acceptable parameters to tweak (e.g. VBR support) for personal viewing/YouTube/Vimeo. I put together an exporting tutorial here.

Other new features include AVCHD burning (onto DVD discs) in addition to plain DVD and Blu-Ray burning, automatic Facebook and YouTube exporting (which unfortunately exports at the wrong frame rate and doesn’t let you change it), photo tagging using your Facebook Friends list, and turning photos into movies. And of course, there are even more DVD template kits & multimedia files for use in your videos.

The app seemed more stable than the previous version, but it took a good while to load. The “auto-analyzer” feature Adobe added, supposedly for photos only, is super slow even when no pictures exist in the project bin, and it’s best turned off. Also, the app is not particularly “smart”. It loads effects on clips all by itself by default (e.g. motion, opacity etc), and this has a speed repercussion. On a slower PC, for example, this was the difference between super-smooth playback and dropped frames on 1080/60i HDV footage. The playback speed also dropped to the floor even when I changed no options on an active plugin (e.g. leaving the Three-Way Color Corrector loaded as-is, without changes, but active). There’s definitely room for improvement on that front.

Additionally, Adobe added DV PF24 pulldown removal support, but not an HDV one. I tried to enable the “DV 24p pulldown removal” option on a PF24 Canon HF11 clip, but the checkbox wouldn’t become enabled. Honestly, adding DV pulldown removal but not HDV/AVCHD support is pretty lame in this day and age. Not to mention that in some dialogs, the app would change my typed 23.976 to 23.98, which could force a resample and introduce ghosting.
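That 23.976-to-23.98 substitution sounds harmless, but the two rates diverge measurably over the length of a clip. A quick sanity check (the true “23.976” rate is the NTSC-film rate 24000/1001):

```python
from fractions import Fraction

exact = Fraction(24000, 1001)   # true NTSC film rate, ~23.976 fps
rounded = Fraction(2398, 100)   # the 23.98 the dialog substitutes

seconds = 3600                  # one hour of footage
drift = float((rounded - exact) * seconds)
# roughly 14 extra frames per hour: enough to trigger a resample, hence the ghosting
```

So a rate that looks like a cosmetic rounding in the dialog is actually a different timeline rate, and the editor has to blend frames to reconcile the two.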

Among the missing features is the Mercury Engine, found in CS5.5, which could make h.264 playback even faster. There’s also no way to tweak the project settings as you can in Vegas. For example, there is no AVCHD 1080/24p option with a 5.1 audio setting; the user is forced to use the dSLR 1080/24p preset, and lose the 5.1 sound settings! Adobe is trying to make everything in the UI a pre-selectable option, but some of these options simply need to be editable rather than just pre-selections. See, not all use cases are covered by pre-selections. Let the user decide how to mash up the project properties to suit the myriad formats of camera footage that exist in the world.

And that’s the biggest pitfall of Premiere Elements, and I personally don’t see any way out of it unless its product manager really changes direction. Premiere Elements is more of a “here are 30 choices, select one” kind of app, while Vegas is more of a “here are 10 choices, or customize it yourself” kind of app. It’s that difference that makes Vegas suitable for both home & serious projects on a low budget, while PE remains suitable for home projects but not so much for more complex ones.

Between the two, Vegas is less clunky and confusing after the initial shock, while Premiere remains painful to use even after you’ve learned its tricks. Then again, PE now has a 64bit version and a Mac version. On the other hand, Vegas can do 3D editing and has more flexibility. In the end, for family users it might just be a question of price. Checking prices today, Vegas Platinum 11 costs $63 on Amazon, while Premiere Elements costs $91.

The Revolution of Skype

One of the top 3 features I check when I buy new laptops or smartphones is the availability and quality of the webcam. Every time I comment about it, people keep telling me that they never really use their webcam and that I’m overreacting.

Not so. I actually use the feature extensively. When I’m in the US, I Skype with my mom and my cousins in Greece & Germany all the time. Now that I’m in Greece, we still use the small Ubuntu netbook I had given my mom to video-chat with my cousins and their small kids, who live on the other side of Greece. My uncle and aunt, who live close to us, visit us a few times a week, and then we call my cousins so that my uncle and aunt can see their grandkids and chat.

Each session lasts from 30 to 45 minutes, and while they have free calls to each other via Vodafone, they much prefer the video chat, by far. It’s so nice to see them happy and embracing the wonders of technology (especially since my cousins’ webcam is of higher quality and they’re crystal clear on our screen). My aunt is now considering learning to use a computer and getting an internet connection, just for the video chat (just like my mom did last year at the age of 55). I suggested the iPad 2. When my mom’s Linux netbook goes kaput, I will get her an iPad too. It’s a much more suitable option for her kind of usage (light browsing, email, Facebook, video chat).

I was considering the new MacBook Air for myself, to replace my DELL ultra-portable laptop which has touchpad driver problems, but while the other new Apple products got an HD webcam this year, the new MacBook Air didn’t (the quality difference between Apple’s 720p HD and VGA webcams was demonstrated on YouTube, and it’s significant). Since a webcam is a vital feature for me, I won’t get a MacBook Air. At least not this year’s model.

The need for a 3D lighting plugin

In my color grading adventures in Sony Vegas I have often felt limited by the lack of a re-lighting plugin. Of course, the rule of thumb is that we should always have near-perfect lighting while shooting, but in real life this is almost never possible, especially for those of us DIY enthusiasts who have no major clue about lighting, or who are using cheap $100 equipment.

I would love a plugin that lets you add virtual lights to your scene, in 3D space (like Vegas’ 3D track support). We should be able to adjust the number of lights, their strength, their focus, and their distance, and, with the help of masking, even add lights behind a subject. Similarly to lights, we should be able to add shadows, to darken parts of the screen.

Of course I don’t expect this to be as good as real lighting, but I’m willing to use it if needed. Right now I’m using Vegas’ “soft spotlight” method via the Bump Map plugin, which is not ideal, but some of my footage required it.

Now that rumor has it that Red Giant Software won’t be supporting Magic Bullet for Vegas anymore, another plugin I’d like to see in Vegas is an AAV ColorLab clone. Unfortunately, this freeware plugin is too buggy and no longer developed, so to make it easier to emulate lots of movie looks, Sony themselves must recreate it. That plugin allows you to change saturation, hue and lightness on specific colors only (without the pain-in-the-ass UI that’s called “Secondary Color Corrector”). One added feature I’d like to see in it is the shrinking of the range of those colors. For example, anything around the blue/cyan values should be able to crush towards one ultimate blue (kind of like reducing the number of blueish colors and replacing them with the same true blue). This can help to easily recreate the look of old cameras. It’s a stylistic thing, as color grading really is.
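The hue-crushing effect I’m describing is easy to express in code. A minimal per-pixel sketch (the function and its parameters are my own illustration, not anything from ColorLab or Vegas): every hue inside a chosen band gets pulled toward one target hue, collapsing a range of blueish tones into a single “true blue”.

```python
import colorsys

def crush_hue_band(rgb, band=(0.5, 0.7), target=0.6, strength=1.0):
    """Pull any hue inside `band` toward `target` (all values in 0..1).

    strength=1.0 snaps matching hues fully onto the target; lower values
    only shrink the range instead of collapsing it completely.
    """
    h, l, s = colorsys.rgb_to_hls(*rgb)
    lo, hi = band
    if lo <= h <= hi:
        h += (target - h) * strength
    return colorsys.hls_to_rgb(h, l, s)
```

Mapped over a frame with `band` set around the blue/cyan hues, this gives the “fewer blueish colors, one true blue” effect; a real plugin would add a soft falloff at the band edges to avoid visible banding.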

News tickers suck as widgets

The latest thing I strongly feel I have to pick on is Android widgets. News-related widgets, to be more precise. Like the Facebook, Twitter, and NYTimes widgets. They all suck. They’re completely and utterly useless.

Instead of showing the actual text of updates/headlines/news, all that should be there are pretty icons & numbers. For example:
– Facebook: A 1×1 widget showing the number of unread notifications & messages.
– Twitter: A 1×1 widget showing the number of unread replies and messages. I wrote about this last year; nothing came of it.
– NYTimes: A 4×1 widget showing 10 “news category” icons (5×2 small icons in the same style as Android’s “power control” widget), each with a number on it if there are unread articles under that category. NYTimes has over 10 categories, so these can be customized by the user to fit the matrix.

See, in their current incarnations, these widgets fit just ONE news item at a time. Who in their right mind turns ON their device just to read a single item? And then the user must hit a very thin “next” arrow icon to get to the next item. It’s difficult to hit properly, and honestly, why bother? If all you want is to read all the news items, just open the freaking app! It’s MUCH FASTER to gaze through a vertical screen full of info than to read a 2-3 line overview of a single item at a time.

Having icons and numbers instead (the way NewsRob does it) is more visual, so it’s way faster for the brain to find the right information. For example, by knowing that there are 6 unread “sports” news items at NYTimes, the user can immediately make a calculated decision as to whether to read them now, gaze through them now or later, or let more of these articles accumulate before sitting down to go through them. In this situation, the widget helps him decide how and when to use the main app, and approximately how much work that would be. The way things are now, we just click “next”, “next”, “next”, often spending time reading headlines we don’t care about. There is so much information coming from Facebook or NYTimes that trying to fit it into a 4×1 or 4×2 widget is utterly ridiculous. Heck, there’s a reason why these kinds of apps are even more successful on a tablet than on a phone: too much information, requiring more resolution. So their current widgets use a broken design. Instead, in this case, the widget must simply help us decide whether it’s time to open the main app or not.
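The count-based widget model I’m proposing is trivial to implement. A hypothetical sketch (the feed data and function names are made up): reduce the unread feed to badge counts instead of rendering headlines one at a time.

```python
from collections import Counter

# Hypothetical unread feed, as the widget would receive it from the main app
unread_items = [
    {"category": "sports", "title": "..."},
    {"category": "sports", "title": "..."},
    {"category": "tech", "title": "..."},
]

def category_badges(items):
    """NYTimes-style grid: one number per category icon."""
    return Counter(item["category"] for item in items)

def total_badge(items):
    """Facebook/Twitter-style 1x1 widget: a single unread count."""
    return len(items)
```

The widget then only renders small numbers, which is exactly the glanceable, decision-supporting information a home-screen widget should carry.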

Is all this really so difficult to comprehend? Where are the usability designers in this day and age? Are they hiding somewhere? Or do the managers not listen to anyone anymore?

iOS Apps: CamLock & AutoPainter


My friend Alastair (of Glidetrack fame) yesterday introduced me to CamLock, an interesting iOS camera app that lets you control various aspects of the camera when shooting video.

Of course, the most important feature for me is exposure lock. It's the only way to shoot a video, under somewhat controlled lighting, where brightness doesn't jump all over the place, making the footage look very unprofessional. But apart from this basic feature (which all phones should support by default), the app comes with other features too, like locking white balance, locking focus, setting exposure by tapping somewhere in the scene, and using a grid and a bubble indicator to keep things straight while shooting.

Unfortunately, there is a bug in iOS 4.3.x: exposure doesn't stay locked if you lock it before you start recording. Thankfully, it can be locked after you start recording, so the problem is not really that huge.

On the iPod Touch 4th Gen where I tested the app, focus lock does not work. I'm thinking the camera module on the iPod and iPad might not support it. Here's a video I shot showing the exposure lock:


The other app I came across today was AutoPainter. You can make any of your pictures look like an Aquarell, Benson, Cezanne, or Van Gogh painting. It's fully automatic, and it works surprisingly well. There's also a way to direct the program to pay extra attention to some objects in the picture, but this is optional, depending on the look you're after. I used it on my iPod Touch, but I bet its “AutoPainter HD” version looks great on the iPad. There's a more full-featured version for the PC & Mac too (samples).

Amazon’s Cloud Player

Tonight Amazon announced its Cloud Player for Web & Android. They give 5 GB for free, upgraded to 20 GB for one year if you buy any MP3 album. Any new album you buy and add to the Cloud service won't eat into your storage allowance. All this is not a bad idea; I'm a supporter of streaming myself, when it's done right. The problem is that Amazon doesn't do it right. RDIO, MOG, ThumbPlay, Real, Napster, and Spotify do. The problems I have with the Amazon deal are:

1. When you purchase a new album you can only add it EITHER to your cloud drive OR download it to your PC. If you must download to your PC (as I do), and later want it on your cloud drive as well, you have to upload it manually! This is stupid. Sure, it makes sense from a legal point of view (so Amazon doesn't go on record selling you two copies), but it sucks from the user's point of view. There is no reason, apart from the legalities, why Amazon couldn't automatically link your purchase to your cloud account. It already doesn't use your storage to store new purchases, so the technical part of “linking” music to accounts is obviously in place.

2. I have to manually upload my previously-bought Amazon music. I don't see why Amazon can't automatically link my account with these past purchases too, apart from the legal shite again.

3. There is no “offline” mode. Anyone using WiFi or 3G to listen to music from the cloud will find their smartphone's battery drained within 2-3 hours. RDIO/MOG/etc. offer the ability to sync up to about ~4 GB of your collection and access it “offline”. Their servers create an encrypted blob of data that only their player can play back. This way you can listen to everything from the cloud when you're plugged into a wall socket, and to the checked items directly from the flash drive when you're mobile. Perfect for travelers.

4. Ultimately, this service is not good enough. RDIO/MOG is a better deal at $10 per month with an ~unlimited music selection (not just your own library). Given that I spend about $80 a month on music (mostly from Amazon these days, since they’re considerably cheaper than iTunes), if I wanted to go subscription I’d just go with MOG or RDIO.

Sitting down and manually uploading gigabytes of files to Amazon’s servers is one thing I won’t do though. No way.

From the HD digicam to HDTV, losslessly

For most people, their snapped HD video gets watched in one of these ways:
– Copy files to the computer, edit as a single video, export to a DVD or MOV/MP4. This is the most popular method, and the one I’d recommend to most people.
– Buy a mini-HDMI adapter, connect the camera to the TV directly.
– Wirelessly transmit the video via DLNA/UPnP/Airplay to a playback device.

Each of these ways has its advantages and disadvantages. In the first case, exporting to a delivery codec for general viewing is “lossy”, for example. Keeping everything on the camera and using an HDMI adapter doesn't scale financially, since these files are huge and new SD cards would need to be purchased regularly as they fill up. As for the third option, it's technically complex and few can figure it out, have a solid-enough WiFi network to send up to 50 Mbps of data over the air (few people have their living room wired), or can get the formats to work on the targeted device.

So here's a fourth, alternative way for those who want lossless HD video from their camera to the TV, using a playback device like the Sony PS3, XBOX360, Roku, or GoogleTV. Cameras that already record in MP4 (h.264/AAC), or playback devices that support many formats, do not require any additional processing of the video files, but often that combo is not possible (e.g. files from a Canon HD digicam don't work with the Roku XD|S' “USB Channel” playback app). This article tries to tackle such cases. The video remains untouched; only the audio is re-encoded, at a high bitrate, and only if that's needed.

So, for this tutorial, you will need an h.264 (not MJPEG) HD digital camera or digirecorder, and some rudimentary knowledge of how to use the command line. If you have a Canon camera, you should use its built-in editing function to cut out the parts of videos that are not worth keeping (e.g. shaky scenes, etc.). Don't keep footage that is not presentable. Other cameras might have the same in-camera ability; check your camera's manual.

Install this ffmpeg build from here, and the faac encoder archive from here. Create a folder called “ffmpeg” in your user's folder; it would look something like C:\Users\YOUR-USERNAME\ffmpeg\ (the exact path depends on your version of Windows). From the two compressed archives you downloaded, drag the ffmpeg.exe file and the libfaac-0.dll file into the ffmpeg folder (note: you will need the 7-Zip utility to uncompress these archives). Then rename libfaac-0.dll to libfaac.dll (if you don't see file suffixes, e.g. .dll, .7z, etc., you can enable that feature in Windows Explorer's “Options” dialog). Finally, inside that ffmpeg folder create two new folders, named “original” and “rewrapped”. Using an SD reader (it's faster than connecting your camera via USB), copy your camera files into the “original” folder.

If your files' suffix is .mov or .mp4, open QuickTime, load any one of your camera files, click “Window”, and then “Show Movie Inspector”. Check the format; depending on what kind of audio it has, you'll need to follow a different ffmpeg script, as shown below. If your files are .mts, use case #4 (change .mts to .m2ts or .avi if your h.264 camera shoots as such, e.g. older AVCHD cams, older Flip cams). Then open the “Command Prompt” Windows application, navigate to the ffmpeg folder (that's where the rudimentary command-line knowledge I mentioned above is required), copy/paste the script that matches your case, and run it.

1. MOV (h.264/PCM) to MP4 (h.264/AAC) // Canon, new Nikon cams

ffmpeg -i original\movie.mov -f mp4 -vcodec copy -acodec libfaac -ab 256000 -ac 2 rewrapped\movie.mp4

2. MOV (h.264/AAC) to MP4 (h.264/AAC) // Flip, Kodak, iPhone/iPod/iPad

ffmpeg -i original\movie.mov -f mp4 -vcodec copy -acodec copy rewrapped\movie.mp4

3. MP4 (h.264/PCM/AC3/mono AAC) to MP4 (h.264/AAC) // Optional for Samsung cams with AAC mono audio

ffmpeg -i original\movie.mp4 -f mp4 -vcodec copy -acodec libfaac -ab 256000 -ac 2 rewrapped\movie.mp4

4. MTS (h.264/AC3) to MP4 (h.264/AAC) // AVCHD-Lite Sony & Panasonic digicams

ffmpeg -i original\movie.mts -f mp4 -vcodec copy -acodec libfaac -ab 256000 -ac 2 rewrapped\movie.mp4

Obviously, manually change the two filenames (the input and the output) each time you move to a different file (e.g. an original file would become 2011-03-14-343.mp4). If you have lots of camera files and changing the filenames manually is way too much work, you can create a “batch” script. Open Notepad or another text editor, and depending on which of the cases above you need, copy/paste the appropriate script (not the “Case #N:” line).

Case #1:
for %%a in ("original\*.mov") do ffmpeg -i "%%a" -f mp4 -vcodec copy -acodec libfaac -ab 256000 -ac 2 "rewrapped\%%~na.mp4"

Case #2:
for %%a in ("original\*.mov") do ffmpeg -i "%%a" -f mp4 -vcodec copy -acodec copy "rewrapped\%%~na.mp4"

Case #3:
for %%a in ("original\*.mp4") do ffmpeg -i "%%a" -f mp4 -vcodec copy -acodec libfaac -ab 256000 -ac 2 "rewrapped\%%~na.mp4"

Case #4 (change .mts to .m2ts or .avi if your h.264 camera shoots as such, e.g. older AVCHD cams, older Flip cams):
for %%a in ("original\*.mts") do ffmpeg -i "%%a" -f mp4 -vcodec copy -acodec libfaac -ab 256000 -ac 2 "rewrapped\%%~na.mp4"

Save the script for your case as rewrap.bat in the same folder as your ffmpeg.exe file, and run it. It will take only a few moments to losslessly rewrap your videos into the more standard MP4 format that most TV devices can play. The only device that needs its video re-encoded is the original AppleTV, which has a limitation of 720/24p at 5 mbps (Simple Profile). Every other modern TV device should be able to play back these rewrapped files without a problem. Once you have confirmed that your device can play them, you can delete the original camera files to make space on your PC (the new MP4 files are just as good for future editing, since the video was not re-encoded, and the audio was not always re-encoded; and when it was, it was at a high-enough bitrate).

If you would like to join all these created MP4 files into a single MP4 file for easier viewing, you can use MP4Box. Unzip mp4box.exe and any DLL files it comes with into your \ffmpeg\rewrapped\ folder. Navigate there with a Command Prompt window and run:
MP4Box -cat 1.mp4 -cat 2.mp4 -cat 3.mp4 output.mp4
Add as many MP4 files as you want (preferably in the order they were shot), and of course use the right MP4 filenames. If it complains about a missing MSVCR100.dll, hunt for that file at Microsoft's site and copy it into the same folder as mp4box.exe. At the end, to fix some MP4Box container bugs, run:
..\ffmpeg -i output.mp4 -vcodec copy -acodec copy -y final.mp4
This final.mp4 file is the one you can use to watch all your clips as one. If you're copying it onto a FAT32 filesystem for viewing, make sure the file is not bigger than 4 GB, or the copy will fail (a FAT32 limitation).
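Since FAT32 caps single files at 4 GB, it's worth checking the joined file's size before copying it over. For those doing this step on a Mac or Linux box, here's a minimal POSIX-shell sketch (the `fits_fat32` helper name is my own; the limit works out to anything strictly smaller than 4294967296 bytes):

```shell
# Succeeds (exit 0) if the file can live on a FAT32 volume.
# FAT32's single-file ceiling is 2^32 - 1 bytes, i.e. anything
# strictly smaller than 4 GiB fits.
fits_fat32() {
  [ "$(wc -c < "$1")" -lt 4294967296 ]
}

# Example usage on a tiny dummy file:
printf 'x' > sample.bin
fits_fat32 sample.bin && echo "sample.bin fits on FAT32"
```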

If you're porting these scripts to a Linux/Unix OS, you must either compile your own ffmpeg with libfaac support, download a binary package compiled as such, or find a pre-built libfaac library for an existing ffmpeg version (one that was itself built with dynamic loading). Finally, make sure the scripts' backslashes are changed to forward slashes.
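As an example of such a port, here's what the Case #1 batch loop might look like as a POSIX shell script. This is a sketch: it assumes an ffmpeg build with libfaac on the PATH and the original/ and rewrapped/ folders in the current directory, and the `rewrap_all` function name and its `echo` dry-run trick are my own additions:

```shell
#!/bin/sh
# Case #1 ported to Linux/Unix: rewrap every .mov in original/ into
# rewrapped/, copying the video stream and re-encoding audio to 256 kbps AAC.
# Run `rewrap_all echo` for a dry run that only prints the commands,
# or `rewrap_all` to actually invoke ffmpeg.
rewrap_all() {
  for f in original/*.mov; do
    [ -e "$f" ] || continue            # glob matched nothing: skip
    name=$(basename "$f" .mov)         # original/clip1.mov -> clip1
    ${1:-} ffmpeg -i "$f" -f mp4 -vcodec copy \
      -acodec libfaac -ab 256000 -ac 2 "rewrapped/$name.mp4"
  done
}
```

For the other cases, change the *.mov glob and the -acodec arguments accordingly.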