Algorithms require time to complete. They also require a certain amount of memory to complete. Generally, there's some trade-off between the two: if you have unlimited memory available, you can probably solve a problem faster.
The exact nature of the trade-off depends on the problem, and each problem has some absolute minimum time and space to complete. But I wonder if algorithms in general obey some fundamental "uncertainty" principle, governing the relationship between the amount of space and time a problem must take.
I don't know much at all about it, but I believe the study of "Kolmogorov Complexity" formalizes some things I will find useful.
Let's say you're working with an array of data that can change over time (let's say the current state takes N bits to store), but you need to be able to remember what the state was at any given past time, too (let's say the state has changed S times). Naturally, you could just save the whole state whenever it changes, so the total space you need is N*S, plus a little extra to note when each state-change occurs.
But what if, instead of copying the whole state whenever it changes, you just made note of how the state changes whenever it changes? If we're working in binary, saving the index of each bit that flipped is sufficient. Of course, to reconstruct the current state, you'd have to replay the entire change log. There's a storage-computation complexity trade-off.
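As a sketch of that second structure (a toy of my own invention, not the actual code I'll write):

```python
# A minimal sketch of the "change log" idea: store only which bit
# flipped at each step, and replay the log to reconstruct past states.

class BitHistory:
    def __init__(self, n_bits):
        self.n_bits = n_bits
        self.flips = []          # index of the bit flipped at each step

    def flip(self, index):
        """Record a state change: bit `index` flipped."""
        self.flips.append(index)

    def state_at(self, step):
        """Replay the log to reconstruct the state after `step` changes."""
        state = [0] * self.n_bits
        for index in self.flips[:step]:
            state[index] ^= 1
        return state

h = BitHistory(4)
h.flip(0)
h.flip(2)
h.flip(0)
print(h.state_at(2))   # [1, 0, 1, 0]
print(h.state_at(3))   # [0, 0, 1, 0]
```

Each log entry costs about log2(N) bits, versus N bits per snapshot, but reconstructing a state means replaying the whole log up to that point: exactly the trade-off above.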
I've already done a bit of math describing when the latter structure is advantageous for storage complexity. I've also already taken data on how long various operations take. I just need to analyse it, and then write it up into a legible report. :)
Let's just say for the sake of argument that black magic is a thing. In particular, say that when a witch doctor stabs a voodoo doll, bad things happen to the original. Now, I'm not too familiar with black magic, but I presume there's some ritual to voodoo-ify a doll, so that, after the ritual, the effigy is bound to a particular person. What if a non-witch stabs the effigy? Do bad things happen? Alternatively, what if a witch stabs a non-bound doll? Do bad things happen? In short, does the magic lie in the doll, or solely in the witch?
There are a couple neat things about this question that I would like to address. First, we don't actually need to believe in magic in order to get an answer: we don't need to show that a witch doctor stabbing a voodoo doll makes bad things happen; we just need to show that bad things happen more often when a witch doctor stabs a voodoo doll than when a non-witch-doctor does. Second, I don't necessarily need a witch doctor and a voodoo doll to study this: it can be done with a thorough literature search.
Actually, maybe not: people probably don't write down magical encounters that don't actually work.... Obviously this isn't great science in the first place. But don't dismiss the idea just 'cause you don't believe in magic! That's terrible science!
A family friend once told me of an experiment where water samples exposed to different sound patterns crystallize in different ways when frozen. As in, Mozart created a well-structured, pleasant-looking crystal, and punk rock created something downright glassy. The family friend then went on a tangent about how our bodies are made of water, and it remembers the things we do to it, so we really have to take care of ourselves. And avoid punk rock, I guess.
At any rate, it's all very metaphysical, but the family friend said the experiment was on the Internet, so it must be true. It's intriguing enough to be on my to-do list.
A long time ago, I had designed a loose idea of how I thought the process of learning could be modeled in a computer. I'm pretty sure that fuzzy logic, linguistics, and machine learning have far exceeded this, but meh. It's fun.
Consider an "idea", which could be true or false. Now consider a "brain", which believes (or doesn't) any number of ideas. "Belief" in an idea is represented by a probability in [0, 1]: if belief is 0, the brain is certain the idea is false, and if it's 1, the brain is certain the idea is true.
Now, a brain can learn from "sources". Each source has its own beliefs in any number of ideas, and when the brain interacts with the source, the brain's beliefs can change. How they change, however, is determined by the brain's faith in the source. "Faith" is belief in the particular idea that a given source is trustworthy. The amount the brain's belief changes by depends on the brain's current belief, the source's belief, and the brain's faith in the source.
Interactions can actually be way more complicated. In addition to faith, the belief in the idea that a particular source is trustworthy, you might also have "interest", the belief in the idea that a particular idea is worth talking about. If a brain has no interest in an idea (say, one it's always taken for granted or assumed to be false), it's unlikely to bother paying attention to the source's belief. Similarly, if the source is another brain with no interest in the idea, the two are unlikely to discuss it, and their beliefs in the idea are unlikely to change. You might even split "interest" up into two different parameters, one for the "curiosity" of the brain and one for the "evangelism" of the source. Simplicity would be preferred, though...
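One possible update rule, sketched as code. The linear blend here is just my assumption, not a settled formula:

```python
# A toy update rule: the brain moves its belief toward the source's
# belief, scaled by faith and interest. The linear form is arbitrary.

def updated_belief(belief, source_belief, faith, interest=1.0):
    """All parameters are probabilities in [0, 1].
    With faith = 0 or interest = 0, nothing changes;
    with faith = interest = 1, the brain adopts the source's belief."""
    weight = faith * interest
    return belief + weight * (source_belief - belief)

print(updated_belief(0.5, 1.0, faith=0.8))                 # 0.9
print(updated_belief(0.5, 1.0, faith=0.8, interest=0.0))   # 0.5
```

Nice properties: belief stays in [0, 1] as long as the inputs do, and a zero-faith or zero-interest interaction is a no-op, which matches the intuition above.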
Now that I'm a "Master" of computer science, I know this model is actually similar to "Kohonen networks" (self-organizing maps), which are an actual tool used in machine learning. The idea there is that you've got a bunch of "learning" nodes, and each node's "belief" influences its neighbors. You let the nodes learn together, and then you wait for them to reach a consensus. This suggests a way to apply (shudder) my model: form a "community" of brains, each acting as a source for the others. Over time the whole community learns together.
I should point out, however, that group consensus is not the most trustworthy thing. This model would be very sensitive to psychological phenomena like mob mentality and group polarization, so I wouldn't trust it as a decision-making tool. But it might be a cool sociological model: we can craft a social-network community, seed beliefs stochastically based on some "true" value, and see what happens.
There's no denying that languages evolve over time, and split off into daughter languages, and today's languages are related to one another by this great big language tree. We can draw a comparison to the tree of life, where everything with DNA is presumably descended from a common ancestor. Granted, "horizontal gene transfer" happens a lot with languages, especially with the advent of globalism, but that's a big issue I believe biologists will soon discover they have to sort out with their family trees, too.
Naturally, a detailed understanding of history and the interactions between cultures is required to draw up an accurate language tree, just as a detailed understanding of how species evolve is required to draw up trees of life.
Oh. Wait. No one has a detailed understanding of how species evolve. Our trees of life are based on bioinformatic phylogenetics: first you compute the "distance" between each pair of genomes; then it's a mathematical exercise to cluster the genomes into a hierarchical family. It's an assumption that this reflects actual evolutionary history.
But never mind my rant on evolutionary biology. Maybe we can do this for languages, too. If we can define a characteristic distance between any two languages, the rest is the exact same clustering process.
Consider the relative frequency p of each phoneme within a language. I hypothesize that the distance D between two languages "1" and "2" can be taken as the sum of the absolute differences in each phoneme's frequency:
D12 = ∑i |pi1 - pi2|
Of course, certain phonemes are closely related. A better treatment would use a similarity matrix C (just like in bioinformatics) to weight the relationship between each pair of phonemes:
D12 = ∑i∑j [ Cij |pi1 - pj2| ]
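A quick sketch of computing that weighted distance. The phoneme labels and the matrix C below are made-up toy values, not real linguistic data:

```python
# Weighted language distance: sum C[i][j] * |p_i,1 - p_j,2| over all
# phoneme pairs. With C as the identity, this reduces to the first formula.

def language_distance(freq1, freq2, C):
    """freq1, freq2: dicts mapping phoneme -> relative frequency.
    C: dict of dicts, C[i][j] weighting the relation of phonemes i, j."""
    total = 0.0
    for i in C:
        for j in C[i]:
            total += C[i][j] * abs(freq1.get(i, 0.0) - freq2.get(j, 0.0))
    return total

# Toy example with two "phonemes" and an identity-like C.
C = {"p": {"p": 1.0, "b": 0.0}, "b": {"p": 0.0, "b": 1.0}}
lang1 = {"p": 0.6, "b": 0.4}
lang2 = {"p": 0.5, "b": 0.5}
print(round(language_distance(lang1, lang2, C), 10))  # 0.2
```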
We can ask speech pathologists (I know plenty) for guidance in building C: it's a biomechanical, not cultural, entity. The hard part is getting the relative frequency of phonemes in a language. Maybe linguists already have phoneme frequency charts. If not, we'd need a word frequency chart and break each word down into its phonemes. If we don't have a word frequency chart, we can make one from a bunch of "representative" samples of the language, if such a thing exists. Of course it'll be hard to truly capture the whole language, since there's generally such a difference between the written and spoken forms.
If you have to work with words, you need to know how to break each word into its phonemes. For some well-behaved languages, like Spanish or Japanese, you can rely on mostly-consistent rules for converting a word into its pronunciation. For others, like English, you'd need a full dictionary mapping each word to its pronunciation. And then, if you're dealing with homographs in a written sample of text, you'd need to guess which pronunciation is appropriate. It's a mess. :)
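The word-frequency route can be sketched like so. Both tables here are tiny made-up stand-ins for a real word-frequency chart and pronunciation dictionary:

```python
# Phoneme frequencies from a word-frequency chart plus a pronunciation
# dictionary: count each word's phonemes, weighted by the word's count.

from collections import Counter

def phoneme_frequencies(word_freq, pron_dict):
    """word_freq: word -> count; pron_dict: word -> list of phonemes.
    Returns phoneme -> relative frequency, weighted by word frequency."""
    counts = Counter()
    for word, freq in word_freq.items():
        for phoneme in pron_dict[word]:
            counts[phoneme] += freq
    total = sum(counts.values())
    return {ph: c / total for ph, c in counts.items()}

word_freq = {"cat": 3, "act": 1}
pron_dict = {"cat": ["k", "ae", "t"], "act": ["ae", "k", "t"]}
print(phoneme_frequencies(word_freq, pron_dict))
# "cat" and "act" use the same phonemes, so each ends up at 1/3
```

The homograph problem would hit right at the `pron_dict[word]` lookup, where one spelling would map to several pronunciations.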
I want to write a program that takes a statement of propositional calculus and displays a truth table. Yes, I'm certain the smallest amount of searching would reveal an easy tool that does this already, but the goal is to practice implementing operator precedence, which I think is the coolest aspect of high-level programming.
This project is also a great excuse to use Perl, my favourite language, which I don't use nearly enough.
Implementation-wise, it should be pretty simple: just a basic conversion from infix to postfix notation, which is easy to read into an expression tree, and from there I can do whatever I want with it.
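The infix-to-postfix step plus evaluation can be sketched like so (in Python rather than Perl, and with my own arbitrary choice of operator symbols and precedences; single letters are variables):

```python
# Shunting-yard for logical expressions: ~ is NOT, & is AND, | is OR,
# > is IMPLIES. ~ and > are treated as right-associative.

from itertools import product

PREC = {"~": 3, "&": 2, "|": 1, ">": 0}

def to_postfix(expr):
    """Convert an infix logical expression to a postfix token list."""
    output, stack = [], []
    for tok in expr.replace(" ", ""):
        if tok.isalpha():
            output.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()
        else:  # operator
            while (stack and stack[-1] != "(" and
                   (PREC[stack[-1]] > PREC[tok] or
                    (PREC[stack[-1]] == PREC[tok] and tok not in "~>"))):
                output.append(stack.pop())
            stack.append(tok)
    while stack:
        output.append(stack.pop())
    return output

def evaluate(postfix, env):
    """Evaluate a postfix token list under a variable assignment."""
    stack = []
    for tok in postfix:
        if tok.isalpha():
            stack.append(env[tok])
        elif tok == "~":
            stack.append(not stack.pop())
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({"&": a and b, "|": a or b, ">": (not a) or b}[tok])
    return stack.pop()

def truth_table(expr):
    """One row per variable assignment, in (assignment, value) pairs."""
    postfix = to_postfix(expr)
    names = sorted(set(t for t in postfix if t.isalpha()))
    return [(env, evaluate(postfix, dict(zip(names, env))))
            for env in product([False, True], repeat=len(names))]

for row in truth_table("p > q"):
    print(row)
```

Reading the postfix list into an expression tree (instead of evaluating it directly off the stack) would be the same loop, just pushing nodes instead of values.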
I might even be able to extend this program to do fancy things like construct logical circuits out of the truth table, constrained by the available logic gates (like, say you're only allowed to use NANDs). But I might get bored by then.
In my "project management" course, I more or less single-handedly designed a complete software framework for managing RFID tags in a warehouse. The system included a database for tracking what's in the warehouse, a mobile app letting workers update said database, and a desktop app where managers can manage any problems that arise, as well as get useful stats on the database.
The nice thing about the course was that we didn't have to write a single line of code. This was just a get-the-contract exercise. But the thing is, even though we were asking for half a million dollars, I'm pretty sure I can do this whole thing fairly easily.
The only real hard part is getting my phone to act like an RFID scanner. Supposedly there's a device one can plug into the phone to provide that function; I'll just skip that part and model it with an RNG...
The only part I'm not sure of is whether I can use simple database software without paying for it. Obviously I'm not planning to publish this; I just want to prove to myself it's doable. I'm certain someone has written something for easy SQL practice, and that's all I really need.
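For what it's worth, SQLite is public domain and ships with Python's standard library, which is one answer to the free-database question. A sketch combining it with the RNG-modeled scanner (the schema and tag format are made up):

```python
# Mock RFID scanner feeding an in-memory SQLite database.

import random
import sqlite3

def mock_scan(rng):
    """Pretend to scan a nearby RFID tag: just pick a random tag ID."""
    return f"TAG-{rng.randrange(1000):04d}"

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("CREATE TABLE inventory (tag TEXT, seen_at INTEGER)")

rng = random.Random(42)                     # seeded so the demo is repeatable
for t in range(5):
    conn.execute("INSERT INTO inventory VALUES (?, ?)", (mock_scan(rng), t))

count = conn.execute("SELECT COUNT(*) FROM inventory").fetchone()[0]
print(count)  # 5
```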
I have been fortunate enough to play clarinet with the Towson University Symphonic Band many times throughout my college career. The very first semester I did so, I discovered that I was far outclassed from the very start of rehearsal. The concertmaster plays a note, everyone plays along with it, and then everyone starts adjusting their instruments to fix their intonation.
Now, I know how to tune my instrument. I can even tell, when two people are playing next to one another, if they're in tune or not. But I can't tell when I'm one of the people playing. I'm just a little bit tone deaf.
So one night I wrote a Java applet to help me train my ears. It's simple enough: you can change which note to train on, play a "test tone" you should guess the pitch of, or play a "base tone" which is that note but on pitch. Did it help? No, but it was a fun programming project.
Now that I'm a master of computer science, it occurs to me I can adapt this into a mobile app. I started looking into Android app development some time ago, and I'm confident that, if I get a lot of time on my hands, I can craft a decent game.
Such things do exist, with interfaces far beyond my ambitions, but they primarily use pure tones (whose pitch even I can guess) or guitar, which is apparently a very popular instrument. I can try to fill the niche for symphonic instruments.
The major obstacle I have now is in producing a clear sound. The problem is either with the waveforms I am crafting mathematically to approximate clarinet timbre, or otherwise just with my computer's sound system. I need to do a lot of studying on sound engineering...
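For the waveform side, one standard starting point (not necessarily my final approach) is additive synthesis: a clarinet behaves roughly like a closed cylindrical pipe, so odd harmonics dominate its spectrum. A sketch with made-up amplitudes:

```python
# Additive synthesis of a clarinet-ish tone: sum a few odd harmonics
# (even-harmonic amplitudes are zeroed) and write a mono 16-bit WAV.

import math
import struct
import wave

RATE = 44100

def clarinet_ish(freq, seconds, harmonics=(1.0, 0.0, 0.75, 0.0, 0.5)):
    """harmonics[k] is the amplitude of harmonic k+1."""
    n = int(RATE * seconds)
    samples = []
    for i in range(n):
        t = i / RATE
        s = sum(a * math.sin(2 * math.pi * freq * (k + 1) * t)
                for k, a in enumerate(harmonics))
        samples.append(s / sum(harmonics))   # keep samples within [-1, 1]
    return samples

def write_wav(path, samples):
    """Write mono 16-bit samples so I can audition the tone."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(32767 * s)) for s in samples))

tone = clarinet_ish(440.0, 0.5)   # concert A for half a second
write_wav("clarinet_ish.wav", tone)
```

Writing to a WAV file and playing it back also sidesteps the question of whether the problem is my math or my computer's sound system, since the file's samples can be inspected directly.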
Picture this. A mobile app arcade-style game, where you're flying a rocket from Earth all the way to heaven. You have to dodge sinful things (flying avarice coins, maybe), and collecting holy water gives you a boost. The whole time, Stairway to Heaven is playing in the background. Eventually, your rocket crashes and you wind up in hell.
Uh, yeah. Game over. There's no way to win.
Not, that is, by your own power. But if you turn the volume down (i.e. if you stop listening to punk rock) and listen closely, you'll hear God's voice, telling you what to do. Which'll be something along the lines of "let go, let go". After ten seconds or so of the player "letting God play", ze'll just win.
This is basically just a simple arcade game my high-school computer science professor made (first in Java and more recently in an iPhone app) with a theological twist. If I can find that old code it should be pretty easy to make into an Android app.
I went through a phase where I played a LOT of Solitaire (Klondike, to be precise). That sounds depressing...but of course I approached it as an academic. I only ever played the simplified variant of "draw one card at a time", though. And also in silico. 'cause who plays Solitaire with an actual deck of cards? That's just super depressing.
For the longest time, I abused the "Undo" mechanic - whenever I got stuck, I just tried again. Hard to lose that way. But then one day, I was doing something in real life, and messed up, and my hands flicked out to press "Ctrl-Z", and when I realized what had just happened I swore to never use that feature in my Solitaire playing ever again. True story.
So, my Solitaire strategy evolved to be very, very careful. With the "one card" simplification, there isn't any penalty to cycling through the entire deck. So I can go through the whole deck until I see all my options, and then and only then start to play things. In practice, it's a little freer than that. But I do follow the guiding principle of "Only make a move if you have to." In time I subconsciously developed a whole algorithmic process for playing. And before long, I noticed that, even after my undo-abstinence began, my win percentage seemed to be asymptotically approaching 2/3 or so.
That got me curious: what if I taught a computer to play the way I do, and I let it play as many games as it can overnight. Presumably it too would win 2/3 or so. But what if I let a different computer "cheat", and undo its moves at will? It would be harder to program, but would it in fact have any higher win percentage? I don't think it would...but I want to know!
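For concreteness, the "cheating" player amounts to depth-first search with backtracking: every undo is just falling back to a parent state. A sketch over a made-up abstract game interface, not actual Solitaire:

```python
# Undo-at-will as backtracking search: explore every move sequence,
# "undoing" by returning to the parent state when a branch dead-ends.

def can_win(state, legal_moves, apply_move, is_won, seen=None):
    """Return True if some move sequence from `state` reaches a win.
    `seen` prevents re-exploring states (and cuts cycles)."""
    if seen is None:
        seen = set()
    if is_won(state):
        return True
    if state in seen:
        return False
    seen.add(state)
    return any(can_win(apply_move(state, m), legal_moves, apply_move,
                       is_won, seen)
               for m in legal_moves(state))

# Toy "game": count from 0 to exactly 5 by steps of +1 or +2.
win = can_win(0,
              legal_moves=lambda s: [m for m in (1, 2) if s + m <= 5],
              apply_move=lambda s, m: s + m,
              is_won=lambda s: s == 5)
print(win)  # True
```

The cautious player, by contrast, commits to one branch of this tree; the open question is whether exploring the rest of the tree ever rescues a game the one-branch strategy loses.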
To interface with my AI, I need to craft my own Solitaire game. There are three hard parts here:
I took a data mining class, and it was good, but it could have been more thorough. That is, we didn't have to implement much of anything; we just used R for the most part. As a theorist, I'd rather write all those methods myself. So...I will!
Of course I won't hold myself to any fancy output. This is just a bit of practice with the mathematics of data mining. I'll have to remind myself what each of these words means, though...
AKA short philosophical rants. Destined for the Musings section of this website.
The titles here are not necessarily the titles they'll wind up with... this is just a sneak peek.
Story idea: a non-spiritual protagonist is asked in a good friend's will to scatter the friend's ashes at various holy sites all over the world. Each site is a new adventure, especially since the curators at most of these holy sites do not consider scattering ashes to be a site-appropriate activity...
Throughout the adventure, the protagonist learns a great deal about hirself, and a great deal about spirituality. I don't know how it ends yet, though.
Besides an overarching plot of a materialist coming to appreciate the importance of spirituality, each chapter hides a lesson on the history and beliefs of a different religion/philosophy.
Story idea: what lifeform can one mad scientist make using a genome of just 288 genes? The number is an arbitrary one, based on a question my father once asked me. But let's follow his adventures and mishaps as he learns to redefine what life means to him.
This story has good potential for hiding lessons on genetics and genes' effect on life. I'm not too sure yet what the overarching point is.
I'd like to compose a piano piece where every phrase is "almost" major AND "almost" minor. What that means exactly is up for debate. I'm hoping I can make use of a geometrical relationship in a "Linear Algebra" I defined for musical phrases (see Writing/Music Theory), and I'm hoping those same ideas lead to ways of generating each phrase. It needs to be fleshed out.
Textbook/course idea: a historical approach to modern physics. Have students derive everything as the theorists who founded our field did, or even replicate experiments as the empiricists who convinced everyone else it was worth thinking about.
Probably this exists already; I just wish I knew where to find it! If you happen to know of something like it, please email me. Otherwise I need to write it myself.
The to-do list is a list of potential chapters to write about, heavily biased by the things I know about... When I sit down and really start this I'll flesh it out more carefully.
Textbook/course idea: a broad but thorough course on how computers work. Essentially it's a computer architecture + networking course, but surrounded by genuine physics lessons on electrical circuitry and radio signals.
The to-do list is a list of units I'd like to write. Each has a math/theory component and a lab component. The culmination of all the labs is a working computer that can transmit actual TCP packets.
Of course this website will always be under development, but it's no longer at a point where I'm embarrassed to have my name on it. Angular makes everything so much easier. Except when it doesn't, but I'll fix that in the NEXT iteration...
Well, this was never itself a project on the website; it was just a MAJOR stepping stone on the path of writing Ambipathos. Anyways, I feel like I accomplished something, so I'm putting it here.
I have successfully defined several "algebras" for musical phrases, treating them once as matrices, then as polynomials over a finite field, then again as, ehm, functions of pressure density. See the results in "Music Theory", over in the Writing tab of this website.