Archive for the ‘Hardware’ Category

The future of CubeSats is co-operation

As you may already know, I have the most interesting dreams, hehe…

Apart from seeing weird alien entities during my nap time, I was also shown how the CubeSat idea can be properly commercialized. Beat that, MIT (or DMT).

So basically, I was shown a bunch of 3U CubeSats (around 10 or 12 of them), held together by some sort of string, forming a web. At the edges of the web, there were semi-large solar panels and antennas, while in the middle of the web, there was a propulsion engine, not larger than a 3U CubeSat itself.

Right now, all CubeSats are released in the wild on their own, with no propulsion (sometimes they end up facing the wrong way), terrible power capabilities, and even more terrible communication (FM among others!!!). These satellites usually die within 3-5 months, quickly burning up in the atmosphere. On top of that, they usually get released as secondary payload in LEO, while CubeSats would benefit from a higher SSO orbit.

Here’s the business idea behind what I saw:

– You let customers buy one of the CubeSats and customize it from an array of the most popular components (third party components that pass evaluation can be accepted — that costs extra).

– The CubeSats run Android, so writing drivers for them, updating them over the air, or even completely resetting them to their default state is possible. Each of the 12 CubeSats runs a slightly different version of the OS, and has different hardware — depending on the customer’s needs.

– The customer can access their CubeSat via a secure backend on the manufacturer’s web site. Even firmware updates can be performed, not just plain app updates or data downlinks.

– Because of the shared propulsion, the constellation web can be in SSO for up to 5 years.

– 1 year of backend support is included in the overall price, but after that time, owners can continue using it for an additional fee, or lease or sell the rights to their CubeSat to another commercial entity, getting back some of that invested value.

– Even if 1 CubeSat goes bad, the others continue to work, since they’re independent of each other. There’s a triple redundancy system in case of shorting. To avoid over-usage of power due to faulty hardware or software (which could run down the whole system), a pre-agreed specific amount is allocated to each CubeSat daily.

– Eventually, a more complex system could be developed, under agreement with all the responsible parties, to have CubeSats share information with their neighbor CubeSats (either an internal wired network, or Bluetooth — whatever proves more secure and fast). For example, if there’s a hardware ability one CubeSat in the web has, but the others don’t, and one of the other CubeSats needs it, they could ask for its service — for the right price.

– Instead of dispensing the CubeSats one by one, the web is a single machine, about two-thirds the size of a dishwasher. The CubeSat specification allows a very specific weight per unit, so overall, while the volume is medium-sized, the total weight doesn’t have to be more than 100 kg. That easily fits in the payload of small, inexpensive rockets, like the upcoming RocketLab Electron, which costs just $4.9 million per launch. Falcon 9 only becomes cheaper if it could launch 13 of these webs at once. While it can very easily lift their weight, it might not have the volume required (the Falcon 9 fairing is rather small at 3.2m).

– This comes to about $600,000 per CubeSat overall (with a fairly standard configuration).
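The daily power allocation mentioned above could be sketched roughly like this. Every number here is a made-up assumption for illustration (the real panel output and per-sat share would come from the actual hardware); the point is just the capping behavior that stops a faulty CubeSat from draining the web:

```python
# Hypothetical daily power-budget allocator for the CubeSat web.
# All figures are illustrative assumptions, not real hardware specs.

DAILY_BUDGET_WH = 1200.0   # assumed total daily energy from the shared panels
NUM_CUBESATS = 12

def daily_allocation(requests_wh, budget_wh=DAILY_BUDGET_WH):
    """Give each CubeSat its pre-agreed equal share of the daily budget.
    A faulty sat requesting more than its share is simply capped, so
    runaway hardware or software can't run down the whole system."""
    share = budget_wh / len(requests_wh)
    return [min(req, share) for req in requests_wh]

# A misbehaving CubeSat asking for 500 Wh gets capped at its 100 Wh share.
print(daily_allocation([80, 500, 100] + [90] * 9))
```

A real system would probably also redistribute unused headroom, but the hard cap is the safety property that matters.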

The current 3U CubeSats cost anywhere between $20k and $50k to make, plus another $200k or so to launch. Overall, sure, $600k is more than the current going price, but with the web idea you get enough power, communication that doesn’t suck, propulsion, and an extended life — plus the prospect of actually making money out of them by leasing them or selling them. A lot of the revenue will come after the launch, as a service/marketplace business.
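The cost comparison above can be sanity-checked with back-of-the-envelope arithmetic. Only the $4.9M Electron price and the 12-sat web come from the text; the per-sat build share (covering the bus, plus a slice of the shared propulsion, power, and backend) is my own rough assumption:

```python
# Back-of-the-envelope check on the per-CubeSat economics.

electron_launch = 4_900_000    # RocketLab Electron, $ per launch (from the text)
sats_per_web = 12
assumed_build_share = 150_000  # ASSUMPTION: per-sat build + shared-hardware share

launch_share = electron_launch / sats_per_web
total_per_sat = launch_share + assumed_build_share

print(f"launch share per CubeSat: ${launch_share:,.0f}")
print(f"rough total per CubeSat:  ${total_per_sat:,.0f}")  # ballpark of the $600k quoted
```

The launch share alone comes to roughly $408k per sat, so the $600k figure is plausible once build costs and margin are folded in.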

In a sense, this business idea is the equivalent of a shared hosting server service, which has revolutionized the way servers work, and has democratized people’s ability to run code or servers online. PlanetLabs is doing something similar by leasing “time” on their CubeSats, but by releasing them one by one, they run into the shortcomings stated above.

For all of this to become true, the CubeSats themselves would need an overhaul in how modular and customizable they are, plus easy access to the latest mobile hardware. Overall, we’re probably 2-3 years away from such an idea even starting to materialize, and possibly 5 years away from it becoming reality. I haven’t seen anyone else suggest it, so, here I am. Thank my weird dreams.

“Artificial” Intelligence is a myth

As I wrote in the past, my first job out of college was working on an artificial intelligence project. The idea back then among engineers in the field was that, given enough smart programming and a tree-like knowledge database, we would eventually manage to create true artificial intelligence (AI). When I was working on the project, I truly believed in it. It was the coolest thing in my life up to that point! As the years went by, I realized that AI via that model is nothing but a fantasy. A fantasy circulating in scientific circles since the late 1960s.

These days, there are a lot of people who have become obsessed with the idea of the singularity (e.g. Ray Kurzweil), while others are sh1tting their pants about how “disastrous” it could be for the human race (Elon Musk being one of them).

Artificial intelligence has progressed since the 1990s: it’s no longer modeled around programming the knowledge directly into it, but it “learns” via the Internet. Meaning, by analyzing behaviors and actions of internet users, Google can “guess” the most logical action for their AI to take in each given situation (crowd-sourcing the intelligence). Crowd-sourcing is a far smarter way to have your AI “learn” than the “static” ways of teaching an AI system word-by-word as in the past.

However, even with crowd-sourcing, we will hit a wall in what AI can do for us via that method. It can surely become as “smart” as Star Trek’s Enterprise computer, which is not very smart. What we really want when we talk about AI is Mr Data, not the rather daft Enterprise computer. So we need a new model on which to base the design of our machines. Crowd-sourcing comes close, but it’s an exogenous way of evolution (because it takes decisions based on actions already taken by the people it crowd-sources from), and so it can never evolve into true consciousness. True consciousness means free will, so a crowd-sourcing algorithm can never be “free”; it will always be a slave to the actions it was modeled around. In other words, that crowd-sourcing AI would always behave “too robotic”.

It took me 20 years to realize why AI hasn’t worked so far, and why our current models will never work. The simple reason being: just like energy, you can’t create consciousness from nothing, but you can always divide and share the existing one.

The idea for this came during my first lucid dream almost exactly 2 years ago, when I first “met” my Higher Self (who names himself Heva or Shiva). In my encounter with him, which I described in detail here, I left one bit of information out of that blog post, because I promised him that I wouldn’t tell anyone. I think it’s time to break that promise though, because it’s the right time for that information to be shared. Basically, when I met him, he had a laptop with him, and on its monitor you could see a whole load of live statistics about me: anger, vitality, love, fear, etc. Please don’t laugh at the fact that a “spiritual entity” had a… laptop. You must understand that dreams are symbolic and are modeled around the brain of the dreamer (in this case, a human’s dream, and humans are only familiar with human tech). So what looked like a laptop to me, from his point of view, could have been some 5D ultra-supa tech that would look like psychedelic colors, who knows? So anyway, he definitely tried to hide that screen from me. I wasn’t supposed to look at it. Oops.

For a few months, I was angry with that revelation: “Why am I being monitored?”, “why am I not truly free?”, “am I just a sack of nuts & bolts to him?”.

Eventually, during my quest into these mystical topics, I realized what I am: I’m an instance of consciousness, expressed in this life as a human. I’m just a part of a Higher Self (who has lent his consciousness to many life forms at once), and he himself is borrowing his consciousness from another, even higher (and more abstract) entity, ad infinitum, until you go so high up in the hierarchy that you realize that everything is ONE. In other words, in some sense, I’m a biological robot to whom Heva has lent some consciousness so he can operate it (and Heva is the same too, for the higher entity that operates him, ad infinitum).

So, because “as above, so below”, I truly believe that the only way for us to create true AI in this world is to simply lend our consciousness to our machines. The tech for that is nearly here; we already have brain implants that can operate robotic arms, for example. I don’t think we’re more than 20-30 years away from a true breakthrough on that front, one that could let us operate machines with very little effort.

The big innovation in this idea is not that a human can sit on a couch and wirelessly operate a robot janitor. The big idea is that one human could operate 5 to 20 robots at the same time! This could create smart machines in our factories (or mining in asteroids) by the billions, which could absolutely explode the amount of items produced, leading to exploding growth (economic, scientific, social).

5 to 20 robots at the same time, you say? Well, look. Sure, it’s a myth that humans only use 10% of their brains. However, when a human, let’s say, sweeps the floor, he/she doesn’t use that much brain power. In fact, most of the processing power is used to operate the body, not to actually take logical decisions (e.g. sweep here, but not there). That part is taken care of by the robotic body and its firmware (aka instinct), which means that if, for example, we need 30% of our brain power to sweep the floor with our own body, we might need only 5% to do the same thing with a robotic body. In other words, we outsource the most laborious part to the machine, and we only lend the machine just as much smartness as it needs to operate.
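The arithmetic behind the “5 to 20 robots” claim is straightforward. The percentages below are the illustrative figures from the paragraph above, nothing more:

```python
# Rough arithmetic behind the "5 to 20 robots at once" claim.
# The percentages are illustrative figures from the text, not measurements.

brain_share_own_body = 0.30  # sweeping the floor with your own body
brain_share_robot = 0.05     # same task when the robot body handles motor control

# If the entire attention budget could be lent out at 5% per robot:
max_robots = round(1.0 / brain_share_robot)
print(max_robots)  # 20
```

The lower bound of 5 would correspond to a harder task eating closer to 20% per robot, or to keeping most of your attention for yourself.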

I know that this article will be laughed at by people who are purely scientific, after reading the “spiritual nonsense” above. However, you can’t deny that there’s some logic to these AI implementation suggestions that could work, and that could revolutionize our lives. Interestingly, I haven’t seen any robotic/AI projects using that idea yet. There are a few sci-fi books that have touched on the idea of operating a machine via brainwaves, but they haven’t grasped the significance of it (otherwise they would have described a world much different from the rather traditional one they try to sell us), and they haven’t gotten the implementation right either. “Avatar” came close, but its implementation is too literal and limiting (one human becomes one alien, a 1-to-1 exchange).

An FAQ on why I base my AI theory on “as above so below”:

Q: If all consciousness in all living things is borrowed, why aren’t we all smarter?

A: The amount of consciousness in a species is limited by the brain that species carries. Just as if you could lend your consciousness to a cleaning robot (one with firmware specifically made for cleaning jobs, plus fixed processing power), you would only be able to lend it as much as it can carry. Not very much, that is.

Also, these robots could make mistakes, since humans themselves make mistakes, as they would now operate using our intelligence. But the mistakes would be limited, because that consciousness would only operate within the brain’s (CPU/firmware) limitations. Additionally, between stupid machines with no errors and much smarter ones with few errors, it’s obvious which the market would go for.

Q: If Higher Selves exist, why wait for billions of years until humans evolved to be smarter and to have enough capacity for consciousness? Sounds like a waste of time.

A: Anyone who has done a DMT trip or two will tell you that time is an illusion. Time exists only for us, who live in this simulation/holographic universe. Outside of this experience, there is no time. Everything happens at once.

Additionally, having a natural evolution program running is much more efficient, flexible and expanding than creating “fixed” robots. So it’s well worth the wait.

Here, we must also understand that our master reality (the one that we’re borrowing our consciousness from) is also creating our universe. The universe is simulated, and the computing power behind it is their minds. After eons, life evolves, and then their consciousness can take a more active role in that universe.

Q: Why can’t we remember who we truly are?

A: It’s a matter of efficiency, and also of how much consciousness our brains can hold (e.g. humans are more self-aware than flies). To survive in this world, we need an “ego” (in both the Buddhist and the Freudian/Jungian sense). The ego believes that it is a human: it’s got a name, a job, a favorite movie, a life here. It’s that ego that keeps us grounded here to “do our job” (whatever we’re meant to do). So we create this mask, this fake self, so we can survive here. The ego is also what keeps us clinging to life with such perseverance. When, under meditation or entheogens, we experience “ego death” (the mask falling off), then we can truly know that we’re everything that is, ever was and ever will be. In that state, we’re just consciousness, pure being. Unfortunately, that also means that we can’t operate in this world. So the ego is a safeguard, for efficiency, to stay grounded in this reality. It’s useful to experience ego death a few times in one’s life, in order to get perspective, but for our everyday life, we need the ego.

Q: Why did our Higher Self hook himself up and “re-incarnate” himself (i.e. lend consciousness) into many different life forms?

A: For the same reasons we will do so, when we have the technological capability: work, learning, research, fun, producing food, more fun… And of course, for no reason at all too. Sometimes, things just are. Natural evolution of things. For the same reason every species wants babies, maybe.

After eons of evolution, our own “robots” should be able to lend their consciousness to their own “robots”. It’s a dynamic system that happens automatically in the evolutionary process, until it reaches the most basic levels of consciousness and the simplest of universes (1D universes). Just as you can’t create energy, you can’t create or grow consciousness. You can only divide what already exists. As such, every time a given realm creates its own version of existence/simulated universe, by definition these new universes are more rule-based and more rigid than their master’s universe. That’s why “higher dimensions” are seen as ever-changing during DMT trips (entities and things constantly in 5D flux), while further down the consciousness tree, the universes have more laws that keep things solid and seemingly unchanging.

Q: Can we be hacked then?

A: YES. In a number of lucid dreams, devices were installed in me, or I was “modified” in some way by entities that clearly weren’t human. Others on DMT will tell you the same thing too. Sometimes we’re upgraded by the “good” guys, sometimes by the “bad” guys. Just like in any digital system. What can ya do?

Equally possible is that other individual instances of consciousness can inject themselves into your body and take it over for a while. That should still be considered a hack.

Q: Lucid dreams? So why do we sleep or dream?

A: For two reasons: both because the body needs it (it must recharge in standby mode, while it runs a few daemons to fsck the file system, among others), and because the borrowed consciousness must take a break too.

Let’s assume that you were just driving a car. After a few hours, you would need to take a break — even if the car could still go on for many more hours without filling up with gas. If you are the consciousness that drives, and the car is the robot that is operated by that consciousness, you realize that the consciousness will need a break much earlier than the car itself would need one!

When we dream, our consciousness operates in levels of existence similar to that of our Higher Self, but because our consciousness is still “glued” to the human body, we interpret everything based on our human experiences. Most of these dreams are nothing but programs, designed to teach us things. In a recent dream, I was on a bus, among strangers. Then, out of nowhere, I became lucid. Suddenly, the people on the bus looked at me not as strangers anymore, but as participants in a dream that they had programmed and played roles in for me. After the curtains fell, and I knew I was dreaming and so wasn’t taking “no” for an answer, they were quickly answering questions for me like “why do we dream”, and what is “the hierarchy of the cosmos”. Most of the time, they speak in symbolism that only I can interpret, but sometimes they’re very direct.

Some of these entities seen in dreams are “spirit guides”, who probably aren’t very different from what “IT personnel” are for our computers at the office. No wonder spirit guides can change throughout our lives, there’s more than one of them (although there’s usually a main entity ordering the others around), and Esther (my spirit guide) once told me: “what do you want again, calling me for a second time in this session? I’m on lunch”. LOL. And yes, they can get pissed off too (I made them so a few months ago). But they can also give you some super-important info (e.g. they showed me that my business Instagram account was under attack, and lo-and-behold, within a week, it happened).

People on DMT can de-couple from their bodies and shoot much further than dreams allow (dreams are controlled, stand-by versions of experiencing the other realms, while DMT has no limits on where it can land you). While on DMT/LSD/shrooms/etc you can become one with your higher self, or with everything, visit other realms, etc. However, when you come back to your body/brain, you “forget” most of what you saw, or it suddenly becomes incomprehensible, because, as I said above, your brain has a limited amount of function and capacity.

Q: If these other realms exist, why can’t we see them?

A: Our brain is a filtering mechanism. We live in a parallel existence, just like part of our consciousness inside a robot would experience its existence as separate from that of its human master. The robot’s mind doesn’t perceive the external world the same way a human does (even if it has HD cameras that “see” the world as we do, the digital processing that ensues FILTERS out many things, because they’re not relevant to its function). Again, people who do low-dose shrooms, LSD, or DMT will start perceiving their own room as looking different (e.g. full of pixels, or full of colors that don’t exist, etc), and often, at slightly higher dosages, they will see entities in that room that they normally wouldn’t be able to see with their eyes (and yet, most dogs can, according to reports, since their brains have evolved differently). So basically, remove the (brain) filter for a while, and enjoy an upgraded version of reality.

Q: So if our brain can’t see these other realms, why can’t our machines/cameras see them either?

A: Some can. It’s just that seeing further into the light spectrum doesn’t mean that you can also humanly interpret it as a separate realm with its own intelligence. We’re still inside a box (a specific type of simulated universe that is computed by our own collective consciousness), and trying to see outside of it has its limitations. The only way to take a peek outside of it is to temporarily decouple your consciousness from your body (e.g. via DMT), so you’re not actively participating in the creation of this universe, but are free to explore other universes. The problem with DMT is that it can land you anywhere… including “hellish” worlds. It’s not predictable at all; it’s a crapshoot.

Q: Ok, so what’s the point of our life then?

A: The meaning of life is life itself. Or, no meaning at all. You give it one. Or not.

Q: So, is my life insignificant?

A: Yes and no. Yes, if you consider yourself against the vast cosmos from the human, cynical point of view (everything is a point of view). And no, because you were “built” for a function, which you do perform, even if you don’t know it (even if you spend your life watching TV all day, you could still perform a specific function on another level of reality without knowing it — universes aren’t independent of each other).

You would only feel “small” after reading all this if you’re a selfish a$$hole. If you embrace that you’re everything instead, then all existential problems vanish.

Q: So why does everything exist then?

A: At the highest level of things (after you sum up all the hierarchical entities of consciousness), there’s only the ONE. One single consciousness, living inside the Void. Nothing else exists. It has decided that dividing itself in a top-down fashion, creating many parallel worlds and universes and lending its consciousness to their living things, for the sake of experiencing itself, was the best action to take. In reality, everything is just a dream in the mind of that ONE consciousness. And everything is as it should be.

Each individual down the hierarchy of existence is built differently and lives under fluctuating circumstances, and therefore each of these individuals provides a different point of view on the things it can grasp. It’s that different point of view that the system is after: novelty. Novelty == new experiences for the ONE.

Q: Why is there evil in the world?

A: There is no “good” and “evil”. Everything is just a perspective. For the human race, for example, eating babies is a terrible thing. For Komodo dragons (which routinely eat their babies), it’s just another day.

Also, in order for LOVE to exist (love == unity of all things), FEAR must also exist (fear == division of all things). One can’t exist without the other. And this ONE consciousness has decided that it’s better to experience SOMETHING than to experience NOTHING for all eternity.

Q: Does this ONE consciousness answer to prayers?

A: No, it doesn’t give a sh1t, grow a pair. From its point of view, everything that can happen must happen, so it can be experienced (including very “bad” things). The ONE is an abstract construct, not a person. It’s you, and me, and everything else as pure BEING.

Canon: a piece of shit company

If you’ve been reading this blog for a few years now, you KNOW how I had been a Canon fangirl for their consumer digicams in terms of video. Their non-DSLR digicams were steadily getting better and better video controls, and that was something to cheer for. They were outperforming all other manufacturers by getting the *basics* of video right: exposure compensation, exposure lock, custom flat colors, good frame rates at good bitrates, and some models even had manual focus and focus lock.

I was even, unfairly, called biased by certain people, for pushing these Canon digicams. But I’m not biased about hardware, I’m a hard realist. There were definite, true, and important reasons why I’d suggest Canon in the past (if your goal was artistic videography).

Well, starting last year, the newer Canon cameras have been stripped of such abilities! FEATURES WERE REMOVED one by one, model by model. We are now at the point where the expensive, high-end P&S digicam S110 does not even have exposure compensation/lock. This is obviously done so their more expensive dSLRs sell better, and their camcorder department doesn’t die. It’s an ARTIFICIAL way of keeping business afloat. That’s not what the market wants; it’s what Canon wants. Consider that the video section of the S110 manual WAS REMOVED too. Yup, removed. Where there used to be a whole chapter on video usage in the manual (in EVERY ONE of their P&S models), now there’s *none*.

For what it’s worth, I can not suggest Canon to anyone anymore when it comes to video mode in P&S digicams. What makes it even worse is that the other digicam manufacturers haven’t stepped up to the challenge to take over what Canon left behind. Most of the cameras from the other manufacturers also miss exposure compensation & lock, or they use fucked up frame rates. When it comes to semi-serious videography with these pocket cameras, they ALL SUCK, even though that was NOT the case 2 years ago!

I mean, they got to the point where they offered 1080/24p and 720/30p at good bitrates last year. What they should have done this year is keep the old features and push their frame rates to 1080/30p/25p/24p and 720/50p/60p (just like in their dSLR range). I’m not asking for other crazy features here, nor am I asking for full manual control. But when they go out of their way to remove the most basic of controls, exposure compensation and exposure lock, something that has been there since the early 2000s, there’s something sinister at work.

So, what to do? Get a dSLR or Micro Four Thirds camera that happens to have the whole nine yards when it comes to video. Or if you prefer a camcorder, get one of the ones that cost over $1000 that also come with the whole nine yards. Since you can’t get a good-enough $200 P&S digicam for video, shell out the cash and get something appropriate for over $1000. At least you won’t be ripped off by buying a P&S digicam for $500 and not even getting the video features that were present in a $100 Canon digicam just 2 years ago! So my suggestion is: either go all in, or try to find older models, second hand.

I personally still use my older SX230, which is the BEST small camera for live shows, with amazing mic quality at loud shows, and it still has all the other needed video features too. But it’s not the best in other respects (e.g. it has a slow lens). The S100 from last year is also good video-feature-wise (if you ignore its hardware faults), the last of its range to support all the basic video stuff that is needed to make a video look professional, and not like a piece of shit cellphone video.

And let’s not forget that Canon only announced full HDMI-out for the 5D MkIII recently, after a third party firmware group said that they had hacked in that feature. So basically, someone has to squeeze Canon’s balls before they actually offer what their hardware CAN do but they refuse to put the software behind. Even if HDMI support might carry engineering costs, the video features mentioned above are not a case of “software costs”, because the software for those specific features WAS ALREADY THERE. Instead, they’ve been CONSCIOUSLY removing them, PROGRESSIVELY. As in, a strategy.

So, fuck you Canon, you are corporate shills and you suck donkey balls.

Update: The S110 manual I had access to at the time of writing did not mention video, but the updated manuals do, and they do mention exposure compensation and locking for the S110. The issues do remain for most of their newest P&S models though.

MobiSlyder test

MobiSlyder is the small brother of the popular Glidetrack slider. It’s meant for cellphones, P&S digicams, digirecorders, and small dSLRs. The MobiSlyder comes with an articulated mount for full flexibility with your phone, a mobile device mount of variable size that can fit both small and big phones, a standard 1/4″ mount, and an adhesive & 1/4″ ball mount.

I tested the slider with my Canon SX200 HS digicam, which worked great. The slider was really smooth, it was like pushing a feather! Very nice sliding, especially for that price!

I also tested it with my Galaxy Nexus phone, which is a rather big phone with a 4.7″ diagonal screen. The phone also had a plastic case, which added to the bulk and weight. The mobile device mount was able to fit the Galaxy Nexus fine, but the articulated mount had problems supporting the weight (even after tightening it). The mount would just collapse under the weight occasionally.

Another problem is that the slider is a bit noisy, as you can hear below. I usually don’t care about capturing sound for my projects, so for me this is not a problem. Also, I’m more likely to be using the standard mount with my digicam rather than the phone anyway.

10 new useless cameras from Canon

Well, either Canon has lost its mind, or they now segment their products too much. They announced 10 new P&S cameras recently, and they all have very disturbing video-related features. Removed features, that is.

– Except for the SX260 HS, none of the other new models support exposure compensation and exposure lock in video mode. That was a standard feature in all older models. Without these features, videos look like amateur hour.
– All their low-end cameras now do 25p instead of 24p. This is very dubious, because this is not a case of Canon throwing a bone to the Europeans and their PAL system. This is a case of Canon cutting off the “cheap 24p camera” pathway.

Basically, we had 2-3 years of Canon P&S superiority when it comes to video, and now Canon is very consciously removing video features so they can sell more high-end products (e.g. dSLRs), or to try to save their failing camcorder division.

As the market stands today, I can’t recommend ANY new P&S for video anymore (from any manufacturer). A dSLR is the way to go for anyone serious about video (with lenses, you’d need a good $1500). I’m personally eager to see the new T4i.

For those who are interested in old P&S stock, Amazon currently sells the Canon A1200 (720/24p @ 22 mbps with exposure comp/lock and color controls) for $79 (last year’s model). Here’s a documentary shot with this little camera, unfortunately not uploaded in HD though.

New Canon cameras at CES

Canon announced today a few new P&S digicams and camcorders. What do these new models mean for video? Apparently, absolutely nothing.

The flagship of the new announcements is the G1 X, a large-sensor G-series camera. The only new video-related feature it’s got is an upgraded bitrate: it now uses the same bitrate as the Canon dSLRs, at around 45 mbps. But there’s still no manual control, or 1080 @ 25/30p and 720 @ 50/60p (in addition to its 1080/24p and 720/30p). Video-wise, there’s absolutely no reason to buy this camera over the Canon S100, I’m afraid. Sure, it’s got a bit more bitrate, but that extra 20% isn’t worth an additional $400 IMHO. Yup, there’s a big sensor in there now, but if you can’t manually control the aperture, and instead the camera automatically goes to high shutter speeds outdoors (closing down the aperture), what’s the point?

Update: According to this article, the G1 X does not even have exposure compensation for video. It’s one big, fat, expensive JOKE. Update 2: DPreview updated their article saying that exposure compensation does work, but only when the camera is in movie mode, and not when you simply click the record button in any of the other modes. This is how it’s supposed to work, but the way they wrote the original article shows that the dpreview guys are in need of a video-specific reviewer…

Regarding the cheaper 520 HS, 310 HS and 110 HS, there is highly disturbing news I must report. Not only is exposure compensation + lock STILL MISSING from these models (remember, up to 2010, Canon P&S digicams did have this feature for video, but then it was removed from most of their new models), but the bitrates were also botched! Where in the past all Canon P&S HD digicams would feature 21 to 24 mbps for 720p, and 35 to 38 mbps for 1080p, now we have TWO of the NEW models (520 HS & 110 HS) offering just 18 mbps for 1080p, and about 12 mbps for 720p. In other words, Canon made their consumer digicams WORSE than they were last year (again, video-wise).

I made quite a few frenemies by evangelizing the Canon P&S digicams over the last 2-3 years, but starting with last year’s models and continuing with this year’s, I can’t suggest these cameras anymore with a straight face. Canon is trying to save its camcorder department by butchering what were the best P&S video digicams on the market. They had the basics right, but now they’re no better than the other manufacturers. We were doing so well in terms of adding video features to digicams over the last few years, and then, not only did progress stop, there’s regression too. Sad…

As for their new camcorders, none of the new models offer anything really new; it’s just recycling we see there. Last year’s HF G10, at $1500, still remains their best semi-consumer camcorder ever released, but they didn’t update it this year (it would have benefited from a bigger sensor and a full-size hot shoe).

Conclusion: Buy older Canon digicams if you must have a digicam for video, the ones that still have the basics in place. Here are those basics again:
– Exposure compensation + lock
– Focus lock
– Custom colors for “flat” look (at least for saturation, contrast, sharpness)
– 720p at over 20 mbps, 1080p at over 35 mbps
– 24p and 30p options

I mean, really, is that too much to ask? I never even mentioned manual control for A/V, or built-in ND filters, or mic-inputs, or any other “crazy” feature. Just the damn basics needed to make a video that doesn’t look like total amateur hour! Even the iPhone can do most of that now via third party apps!

So, which P&S digicam should you buy? If you only shoot random family videos, anything will do, but if you want to do art, go for last year’s SX220/SX230 HS, which sells for $200 now (1080/24p, 720/30p), or last year’s A1200, which sells for $90 (720/24p). If you have the extra money, you can consider the S100 too, at $430 (same video features as the SX230, plus an ND filter). For camcorders, go for the one I mentioned above, the HF G10; the rest are laughable for anything serious (at least relative to what you’d expect from a camcorder compared to a digicam).

But the best advice would be to wait and see what the new T4i dSLR will be able to do in February. From leaks we know it’s going to have the new Digic imaging processor, but if an updated sensor/body comes along to complement it (which would translate to less rolling shutter, no line-skipping, and continuous autofocus), then there’s no reason to get a P&S digicam. Save your money, work extra hours if you have to, and go for the T4i in that case.

I wish I were able to suggest P&S digicams instead, for young people who are just starting out with video (I will be teaching a videography class to kid-artists soon), but these new models don’t allow me to do so. They’ve taken a step back.

Cellphone camera apps

Android and iOS are taking over the world as the devices of choice for most people, and especially on the iOS front, still photography with an iPhone is taking off. The quality is “good enough” to get something respectable out of it that the Flickr audience would appreciate.

For video though, things are not as peachy as they are for still photography on smartphones. Video requires a few extra features to make it usable for artistic purposes. Here’s what’s still missing:
– iOS: 24p/25p/30p fps selection (the FilmicPro app hacks this, but it’s not the real output of the camera) and a low contrast/saturation/sharpness “flat” mode. iOS does already support exposure compensation + lock, tap-to-focus and focus lock, and high bitrate.
– Android (most models): 24p/25p/30p fps selection, higher bitrate (24 mbps), a low contrast/saturation/sharpness “flat” mode, exposure compensation + lock, and focus lock. Some models have tap-to-focus, but not much else.

So between the two, iOS easily wins on video so far by offering more control, but it’s still not perfect. I mean, we’re not even asking for full manual control here, just the basics needed to get good video out of these phones. iOS only needs a few things to get it right, but the Samsung/Google/HTC/etc camera engineers really need to get a freaking clue about video and select hardware for their devices that supports these features.

I know some people will argue that cellphones are not great tools for art, but I disagree. It’s the artist that matters, not the hardware. However, the artist can NOT fully spread his/her wings and use the hardware to the fullest if the BASIC FEATURES are missing. For still photography things are easier, since fewer features are needed to get a good output. But video needs some extra features that are easily doable with modern hardware, yet somehow haven’t been taken into account by the engineers or project managers so far.

Canon’s New “Revolutionary” Cinema Camera Underwhelms

The media were invited to Hollywood today to witness Canon’s game-changer cinema camera. Canon hyped the announcement last month by claiming that they would write Hollywood history. The C300 is a beautiful, small-form-factor camera, but if we are to judge from the Twitter responses of many industry professionals, it seriously underwhelmed them.

The C300 has a 4k CMOS sensor, but it only records at 1080p (whether 4k capture is available via HD-SDI is not known at this moment). The camera comes in a PL-mount version and an EF one, to accompany Canon’s new 4k-resolving lenses. The $20k price is considered high by many. An as-of-yet unnamed dSLR from the same C-line was also mentioned in the press release I received, which said that this new dSLR will be able to record 4k, but using the (archaic) MJPEG codec.

This really feels like a big joke today. Either Canon has lost the plot, or they don’t know how to put specs together in a way people can comprehend. They never made clear whether the C300 can capture 4k or not, they mentioned nothing about the codec used, and they never mentioned the 4k dSLR at the event!

What really bothers me is that if the C300 doesn’t do 4k for one reason or another, it should at least have had the ability to shoot 2k. I mean, come on. 2k is the resolution of most cinema projectors, and it’s so close to 1080p resolution-wise that not a lot more RAM or processing would be required on board. Not pushing this camera to 2k, while they’re trying to get Hollywood on their side, shows how out of the loop Canon is.

Finally, there’s not even overcranking support in 1080p mode. Not to mention that I’m not too hot on the mpeg2 codec. In this day and age all video editors have OpenCL/CUDA support for h.264, and a good h.264 codec can deliver 2-3 times better video than mpeg2 at the same bitrate. Just use the right h.264 10-bit 4:2:2 encoder at 50 mbps, which should be leaps and bounds better than mpeg2. But no. We had to go back 10 years.
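For a sense of the storage budget a 50 mbps codec implies, here’s a quick sketch in Python. The 64 GB card size is just an illustrative assumption on my part, not a C300 spec:

```python
def gb_per_hour(mbps: float) -> float:
    """Gigabytes of footage per hour at a given bitrate (megabits/s)."""
    return mbps * 3600 / 8 / 1000  # seconds/hour -> bytes -> decimal GB

rate_mbps = 50   # the 50 mbps figure discussed above
card_gb = 64     # a hypothetical CF card size, for illustration only

per_hour = gb_per_hour(rate_mbps)  # 22.5 GB/hour
print(card_gb / per_hour)          # roughly 2.8 hours per card
```

In other words, 50 mbps is perfectly manageable storage-wise; the complaint is purely about how much quality mpeg2 squeezes out of those bits compared to a modern h.264 encoder.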

In my opinion, Canon has two options with the C300, and two options alone:
1. With a firmware upgrade, allow RAW 4k capture via the HD-SDI port.
2. If this is not technically possible in the current design, drop the price by 50%, to $10,000.

Failure to do either of the two will result in a big FAIL for Canon. This is my honest opinion on the matter, no matter how good a visual result this camera can deliver. It’s still 1080p, and Hollywood has moved to 4k. That’s the reality.

There are some who say that this Canon camera goes against the SONY F3, and not the RED. But this is bullshit. First of all, it’s much more expensive than the F3. Secondly, it doesn’t matter what the F3 can do. Canon was all about a HISTORIC moment; this was the camera some Canon execs were saying last year would kill RED. But in reality, this is just a camera created by a company that got on its high horse without realizing it. All this shows that the video success of the 5D MkII, their first video dSLR, was a happy ACCIDENT, and not a planned, visionary feature. Canon has no idea what it’s doing with its video cameras. Either they didn’t ask anyone for input, or they got input from the wrong people (wedding videographers?).

As for the unnamed dSLR, please don’t get me started on the MJPEG joke. Really, Canon? MJPEG? In 2012 (when it’s expected to ship)?

The Canon T3i remains the best camera-for-the-buck ever released (at $800). And let me be clear: the C300 is indeed better than the T3i, but not 25 times better. Not by a long shot.

In other news, RED changed a few specs around on the Scarlet (4k video, with great resolution/frame-rate combos). Some say it will arrive by December. The price starts under $10k, but it’s realistically expected to reach $13k after adding an LCD, lens mounts, etc. If that’s true, RED won the battle today. As much as I don’t like RED’s vaporware, at least they’re genuine dreamers. Canon, instead, seems to be composed of corporate shills who don’t understand the new market that has emerged in the last few years, inside Hollywood and outside of it.

Update: Haha, this is getting better and better. So, there is no 4k recording via the SDI port on the C300, and the codec is actually just crappy 8-bit all the way (SDI & CF). Full specs here. In the meantime, RED is pissing off its EPIC users, since the Scarlet can do most of what the EPIC can for a fraction of the price. And we should not forget AVID, which also announced an (to me) uninteresting editing solution today. This has been a very interesting day indeed.

Canon S100: best P&S for video right now

The successor to the S95 is here. Canon just released the CMOS-based S100, the first camera with the brand-new Digic V chip in it (which hopefully alleviates most of the issues that plagued older Canon cams). The camera has a 1/1.7″ sensor, an f2.0 lens, and a 5x zoom. Personally, I would have preferred to sacrifice the zoom down to 3x and get an f1.8 lens instead, but hey.

The biggest new feature I was waiting for this year was full manual control in video mode, since the main competitor to Canon’s S-series, the Panasonic LX-series, has supported this since last year. Canon didn’t give us manual control though. So, according to the manual, here’s what you get with the S100 in video mode:

– 1080/24p @ 38 mbps and 720/30p @ 24 mbps (new).
– Force aperture to open-up with the built-in 1/8 (3 stops) ND filter (new).
– Use external RCA monitor as a recording display (new, HDMI port is playback-only).
– Wind filter for the stereo microphone (new).
– Zoom while recording (new).
– 120 fps slow-mo at 640×480 (new).
– Shoot using preset focal lengths (no step-zoom) (new).
– Exposure compensation (P mode).
– Exposure lock.
– Autofocus lock.
– Manual focus.
– Miniature Mode.
– Auto & Custom white balance.
– Custom colors (set sharpness, contrast & saturation to minimum values for “flat”).
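A side note on the “1/8 (3 stops)” ND figure above: each stop halves the light, so a filter passing 1/8 of the light is exactly log2(8) = 3 stops. A trivial check in Python:

```python
import math

def nd_fraction_to_stops(fraction: float) -> float:
    """Convert an ND filter's light-transmission fraction to stops.
    Each stop halves the light, so stops = -log2(fraction)."""
    return -math.log2(fraction)

print(nd_fraction_to_stops(1 / 8))  # 3.0
```

Three stops is enough to keep the aperture open (for shallow depth of field) in bright daylight, which is why this matters on a camera with no manual aperture control in video mode.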

Personally, I will buy one (especially since I gave my SX200 IS away to my brother, so I’m without a good P&S atm). It’s not 100% what I wanted (faster lens, manual video control, an additional 1080/30p option), but it’s the closest thing out there to what I want. Unfortunately, the LX-series doesn’t offer enough bitrate or sensible frame rates for me, so I can’t consider it. I also expect Canon’s new Digic V to produce a clearer picture in video mode than any older Canon camera (and this includes dSLRs).

MOG/RDIO/Pandora/Online-Radio in your living room, the cheap way

I’ve been trying to convince my husband, JBQ, to get us a Sonos player for a while now (over a year). I pushed the issue again last night, since we are now RDIO subscribers, and we would like to listen to our RDIO collection without having our TV on (we currently use our Roku XD|S for RDIO, connected to our main HDTV, with an analog cable from the Roku to the Yamaha receiver that powers the big speakers in our living room). But the $350 price tag of the cheapest Sonos model is still prohibitive for us.

It was during dinner that my husband had the idea: “why don’t you buy a second Roku, just for music, and connect a PC monitor to it that would sit next to our amplifier? I bet it’ll be cheaper than a Sonos”. And of course he was right again:

Sonos solution:
$350 – ZonePlayer 90

Cheaper solution:
$60 – Roku HD
$60 – 17″ 1280px LCD monitor
$5 – HDMI to VGA (or DVI)
$3 – Audio cable to connect to your amplifier/receiver
$2 – Android or iOS Roku remote app (optional)
= $130
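The totals above check out; here’s a trivial sanity check in Python, using exactly the prices listed in the post:

```python
sonos_price = 350  # ZonePlayer 90

# The Roku-based alternative, itemized with the prices listed above
roku_parts = {
    "Roku HD": 60,
    "17-inch 1280px LCD monitor": 60,
    "HDMI-to-VGA adapter": 5,
    "audio cable": 3,
    "remote app (optional)": 2,
}

total = sum(roku_parts.values())
print(total)                # 130
print(sonos_price - total)  # 220
```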

There. My husband just saved us $220. Enough for two Kindles, so you can read a book next to your loved one while listening to music.

To me, the important thing here is that we won’t have to have our TV ON in order to listen to music. I hate that, turning ON a huge 50″ TV just to put on an album. The unobtrusive, smaller PC monitor can always stay ON, since the Roku has screensaver support. And if it dies after a while, it only costs $60 anyway. As for the Roku, it never turns off, so you can use its remote app on your mobile phone/iPod at any time.

In some ways, this setup is similar to the prototype Be, Inc. announced in 2001: HARP (“Home Audio Reference Platform”), based on their BeIA/BeOS operating systems. Here we are, 10 years later with a similar idea, but in a much smaller size than HARP:

Be’s HARP platform

Of course, the Sonos solution offers other advantages, like multi-room support, no need for an external PC monitor (via its free Android/iOS remote app; otherwise the dedicated controller costs an extra $350), and iTunes streaming support, among others. However, if you just want music in a single room, and you never buy any music (since you either use Pandora, or subscribe to unlimited services like we do), this solution is far cheaper and works well enough. The RDIO app on Roku is crashy, but RDIO knows about it, and I believe an update is pending. MOG, Pandora, Tune-In Radio, ShoutCast, Soundcloud, and MP3Tunes all work great, and more applications are added to Roku as time goes by. Definitely more than for the Sonos platform.

Personally, I think Sonos needs to either rethink their prices a bit, or move to a cheaper platform (maybe a next generation, cheaper Sonos, based on Android?).

Update: Some folks over at the Roku forum suggested the new AppleTV ($99) or an Airport Express ($95) instead — that is, for existing owners of iOS devices running the latest software version. This way, they can run/stream any of their music apps on their iOS device, and then redirect the audio output via AirPlay to the Airport Express or AppleTV, which is connected to an audio receiver. The signal is sent encrypted, in the Apple Lossless format, so there’s no loss of quality on the way to your living room’s big speakers.

Since I already have a 4th-gen iPod Touch that supports AirPlay, I might wait for a new AppleTV model, or a major software update for the current one (with third-party apps and all), and then go for that solution (although my receiver has no optical-in, so that would be another $35 for a converter). However, for users who don’t own any of the devices needed, that would be $300 to $330 ($200 for an iPod Touch, plus $99 for an AppleTV or Airport Express), so my original suggestion still stands.