Eating NP-Completely in McDonald's Again

29 03 2008

Some time ago I wrote about using the knapsack algorithm to make my spending in McDonald's as efficient as possible. I've actually used this program fairly often and it has proven surprisingly useful. However, the program always left a certain taste of dissatisfaction in me. Namely, due to how I encoded my value function, certain inputs of money, e.g. £3.56, gave silly outputs like: Fries, Fries, Fries, Fries. So I returned to the program this weekend. But first some background.

What is the Knapsack Algorithm?

The knapsack problem is a problem in optimization. It is often presented as a problem faced by a thief with a certain sized bag: a thief with a bag of size S breaks into a house containing items of different sizes and values. Which items must the thief take to maximize the value of the items in his bag? Or mathematically: maximize

\displaystyle \sum_{i = 1}^{t}{v_i x_i}

with

\displaystyle \sum_{i = 1}^{t}{s_i x_i} \le S

where, for the t types of items, v_i is the value of an item of type i, s_i is its size and x_i is the number of items of type i taken.

One way to solve this is dynamic programming, where we calculate the highest achievable value for every size of bag up to S, taking the item types into account one at a time. For each new item considered and each bag size, we check whether the value of the current best packing is less than the value of the new item plus the best packing of the space left after making room for it; if it is less, we discard the old packing and replace it with the new one. Recursively in Haskell:

sub = (!!)

s = [3, 2, 1, 5]
v = [6, 3, 7, 10]

maxval 0 = 0
maxval size = maximum [ v `sub` i + maxval (size - s `sub` i) | i <- [0..t], s `sub` i <= size ]
    where t = length s - 1

For example, with the lists above, maxval 4 = 28: four copies of the size-1, value-7 item. The problem is that this recursive solution is poor performance-wise and is not very good for large inputs. However, it can be rewritten to use a table - utilizing memoization to improve performance - where previous values are stored and used to calculate new ones. For every item and every size of bag we check (F#):

if opt_cost.[cost_n] < opt_cost.[cost_n - cost.[item_n]] + value_c then
    opt_cost.[cost_n] <- opt_cost.[cost_n - cost.[item_n]] + value_c
    best_n.[cost_n] <- item_n

Values across Ranges

To address the old problem of stupid suggestions I tried introducing bounds, making certain items appear only within certain cost ranges, but this was not satisfactory. Finally I decided upon a simple solution. Originally, the value v for each item was a mapping from my internal preferences to the natural numbers. Since this was unsatisfactory, I introduced a range, given in the form of a tuple, to my value function. Thus instead of each item having a fixed value V, its value became a number that could take any value between V and W, with V <= W. e.g.:

let value = [| (0, 0); (20, 23); (21, 22); (21, 23); (65, 70); (60, 65) |]

Then, within the bag packing function, a random number that falls within each respective range is given as that item's value. This minor addition has proven to be the difference between my little app being fairly useful and being very useful. A button on the interface, 'New Combo', generates a new combination of items within the price constraint if I am not satisfied with the current one. Now I know that for £3.56, instead of 4 fries I can get 2 hamburgers, fries and a coke.
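
For the curious, here is a minimal sketch of how the pieces fit together - the table-based packing plus the ranged values. The item names, prices and most of the ranges below are invented for illustration (only the shape of the update matches the snippet above), so treat it as a sketch rather than the actual app code:

    open System

    // Prices are in pence; each item's value is drawn at random from its
    // (V, W) range, so re-running pack for the same amount of money can
    // give a different combination - which is all 'New Combo' needs to do.
    let rnd = Random()

    let items = [| "Hamburger"; "Fries"; "Cheeseburger"; "McChicken"; "Big Mac"; "Coke" |]
    let cost  = [| 89; 85; 109; 119; 219; 79 |]
    let value = [| (20, 23); (21, 22); (21, 23); (40, 45); (65, 70); (18, 20) |]

    let pack (budget : int) =
        // draw this run's value for each item from its range
        let v = value |> Array.map (fun (lo, hi) -> rnd.Next(lo, hi + 1))
        let opt_cost = Array.zeroCreate<int> (budget + 1)   // best value at each spend
        let best_n   = Array.create (budget + 1) (-1)       // last item chosen at each spend
        for cost_n in 1 .. budget do
            for item_n in 0 .. items.Length - 1 do
                if cost.[item_n] <= cost_n
                   && opt_cost.[cost_n] < opt_cost.[cost_n - cost.[item_n]] + v.[item_n] then
                    opt_cost.[cost_n] <- opt_cost.[cost_n - cost.[item_n]] + v.[item_n]
                    best_n.[cost_n] <- item_n
        // walk the table backwards to recover the chosen items
        let rec unwind c acc =
            if c <= 0 || best_n.[c] < 0 then acc
            else unwind (c - cost.[best_n.[c]]) (items.[best_n.[c]] :: acc)
        unwind budget []

    pack 356 |> List.iter (printfn "%s")   // one combination for £3.56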

The thing runs on my Pocket PC and was written in F#. And yes, the value function concept is very subjective.

[Image: meals.png]





Functional Programming on .NET part 2

7 12 2007

So, most who read this ought to know what a prime number is: a number greater than 1 whose only factors are one and itself. This post is not on primes, but it does use them in a simple program implementing a simple algorithm to factor any given number. Now, many of you will know the difficulty in factoring numbers and how much of current cryptographic technique relies on how hard the problem is. The program I wrote was written as a benchmark between the languages F# and Nemerle, primarily a benchmark of expressiveness, with a note on speed as an aside.

The task was to write a program which takes an unbounded positive integer and outputs the list that is its decomposition into prime factors. I decided to leverage laziness, which let me write a flexible and short implementation. As I will get to later, neither of these languages supports laziness in the true sense, hence corecursion (in the sense of the dual of recursion) was unavailable, ruling out techniques such as a lazy sieve. Firstly, the theory.

Theory

Given some integer n > 1, there exist primes p_1, p_2, ..., p_k such that n = (p_1)(p_2)...(p_k), and this product is unique to n (up to the order of the factors). This is known as the Fundamental Theorem of Arithmetic or the unique factorization theorem; I will leave the proof as an exercise to the reader. A way to get at this is to divide n by its smallest factor > 1, store that factor, and repeat on the quotient until we reach 1. Why the smallest factor? Because the smallest factor of n greater than 1 must be prime. Why?

Suppose this were false and the smallest factor, or least divisor, of n greater than 1, say a, were composite. Then there would exist some b and some c such that a = b * c with b > 1 and b < a (we are working with positive integers, so if b = 1 then c = a; remember that b and c are both < a). But b also divides n (consider n = a * d = b * c * d), and b < a, which contradicts a being the smallest divisor of n. So a must be prime.

So as an example take n = 18, with a = 9 and d = 2. 3 divides 9, so it must divide 18, and 18 / 3 = 6 = 3 * 2. Now to continue.

Suppose we keep n = 18. We know 3 is its least divisor and that it is prime, so p1 = 3. We do 18/3 = 6.
Next we find the least divisor of 6, which is 2, so p2 = 2. Now we do 6/2 = 3. 3 is prime, so 18 = 3 * 2 * 3.

So it seems that to do this we need to know how to find the least divisor of a given number. Well, we know that the least divisor of a number must always be prime. So given n, to find the ld of n we start from 2, check if n mod 2 = 0, and work up through the primes until we find one that divides n. So for 35 we'd go through 2, 3, and hit 5 as its least divisor. A way to stop unnecessary checks is to leverage the fact that if an integer n > 1 is not divisible by any prime \le \sqrt{n} then n is prime. For example, to check 97 we need only try 2, 3, 5 and 7, since 11^2 = 121 > 97; none of them divides 97, so 97 is prime.

Let n = (p)(a), and suppose p is the least divisor of n; we know it must be prime. Now suppose p > \sqrt{n}. Then a > \sqrt{n} as well, since p <= a (remember p is the smallest divisor). But then we have n = p \cdot a > (\sqrt{n})(\sqrt{n}) = n, which is a contradiction, since it says n > n. So at least one of p or a must be \le \sqrt{n}, and since p <= a it is p that satisfies p \le \sqrt{n}. This proves our proposition. Note also that since p \le a, we have p^2 \le p \cdot a = n. We use that fact to lessen our checks.

The basic idea, then, is to define an infinite list of all natural numbers, with the primes being those numbers that pass a primality test and the rest filtered out. We define that test as: a number is prime exactly when it is its own least divisor. I will not explain the code in depth - there is plenty of documentation on both languages - this is mostly to show how easy it is to go from reasoning to implementation.

Implementation

Below is the F# code which implements this functionality:

open System

module Primes_f = 
    let (+:) a alList = LazyList.cons a alList

    let Z3 : bigint LazyList = LazyList.unfold (fun x -> Some(x , x + 1I)) 3I 

    let rec least_denominator_byfactor l n =
        match l with
        | LazyList.Cons(p, ps) ->
            if n % p = 0I then p
            elif p * p > n then n
            else least_denominator_byfactor ps n
        | _ -> failwith "?"   // unreachable: the prime list is infinite
                       
    let rec prime = function
        | n when n < 1I -> failwith "Not positive Int"
        | n when n = 1I -> false
        | n when n > 1I -> least_denominator n = n
        | _ -> failwith "?"
        
    and primes =  2I +: (LazyList.filter prime Z3)

    and least_denominator n = least_denominator_byfactor primes n
    
    let rec factors = function
        | n when n < 1I -> failwith "Not positive Int"
        | n when n = 1I -> []
        | n when n > 1I -> let p = least_denominator n
                           p :: (factors (n / p))
        | _ -> failwith "?"
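
As an aside, the non-lazy core of this - the least-divisor search with the p * p > n cutoff - can be written as plain trial division over every candidate rather than just the primes. A quick sketch of my own (this is roughly what the "injection" discussed further down amounts to):

    let leastDivisorSimple (n : bigint) : bigint =
        let rec go k =
            if n % k = 0I then k        // k divides n: least divisor found
            elif k * k > n then n       // passed sqrt n: n is prime
            else go (k + 1I)
        go 2I

    let rec factorsSimple (n : bigint) : bigint list =
        if n < 1I then failwith "Not positive Int"
        elif n = 1I then []
        else
            let p = leastDivisorSimple n
            p :: factorsSimple (n / p)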


To display:

    let dtm = DateTime.Now
    print_any ("Time is: " ^ dtm.ToString() )
    
    // test numbers: 99999988888776655423101, 999998888877665542310; e.g. let z = factors 999988710I
    let rnd = Random()                                        
    
    let rec Generate tx l = 
        match tx with
            0 -> l
            | _ -> let z = bigint.FromInt32 (rnd.Next(1000000,int.MaxValue))    
                   Generate (tx - 1) ( ( factors z )::l )
                                        
    let list_of_primefactors = Generate 10000 []   // or: factors 9876543210I. Timings: 3 mins for 100,000; 20 sec for 10,000; .05
    let dtm2 = DateTime.Now 
    let d = dtm2 - dtm
  
    let primes1 = List.filter (fun l -> List.length l = 1) list_of_primefactors
    
    // fraction of the 10,000 generated numbers whose factorization is a single prime
    let probprimes = float (List.length primes1) / 10000.0
    
    
    List.iter (fun c -> print_any c ; System.Console.WriteLine "") primes1
    print_any probprimes
    //print_any list_of_primefactors      
       
    System.Console.Write ("\n\nTime is: " ^ dtm2.ToString() ^ " Took: " ^ d.ToString() ^ "\n" )  
    System.Console.ReadLine ()


The F# code was pretty straightforward to implement. The language remains quite functional, and I had a lazy list data structure which allowed semi-corecursion via unfold. Now the Nemerle code. Because of its more object-centric style (not a critique), I had to arrange the code a bit differently.

  
module Number
    private least_denominator_byfactor(l : Primes,  n : decimal) : Decimal 
        def p =  l.prime
        
        if((p==3) && (n%2 == 0))            
            2m
        else    
            if(n % p == 0)  
                p
            else if(p * p > n)
                n
            else
                least_denominator_byfactor(l.next,n)          
            
    public least_denominator(n : Decimal) : Decimal 
        least_denominator_byfactor(Primes(3), n ) 
        
    public IsPrime(k: Decimal) : bool
        | n when n  throw ex("Not positive")
        | n when n == 1 => false
        | n when n == 2 => true
        | n when n % 2.0m == 0 => false
        | n when n > 2 => (n == least_denominator(n))
   
    public next_prime (v : Decimal) : Decimal
        def ip = IsPrime(v + 1)
        match(ip) 
            | true => v + 1
            | false => next_prime(v + 1)
        
    public factors(number : Decimal) : list[Decimal]
            | n when n  throw ex("Not Positive")
            | n when n == 1 => []
            | n when n > 1 => def p = least_denominator(n) ; p :: factors(n/p)          

class Primes
    public prime : Decimal
    public next : LazyValue [Primes]
            
    public this (p : Decimal) 
        prime = p
        next = lazy (Primes (Number.next_prime(p)))


To display:

  
module Program
    Main() : void
    
        def dtm = DateTime.Now 
        WriteLine($"Time now is: $dtm") 
        // test numbers: 9998876655423101, 99998877665542310
        def rnd = Random()
        
        def Generate(tx, l = [])
            match(tx)
                | 0 => l 
                | _ => Generate(tx - 1, Number.factors( rnd.Next(1000000, int.MaxValue) )::l) 
            
        def list_of_primefactors = Generate(152)   // or: Number.factors(9766427)
       
        def dtm2 = DateTime.Now       
        
        foreach(pfactors in list_of_primefactors)
            foreach(num in pfactors)
               Write($"$num | ") 
            WriteLine("")       */
        
        WriteLine($"Time Now is: $dtm2, Took $(dtm2 - dtm)")
        ReadLine()

While in the F# code I had only functions and general functionality, in Nemerle I had to package the code into appropriate classes and have behaviour baked into those data types (for example, I could not unfold into a generic lazy list of primes; I had to build a next mechanism into the lazy Primes class). The Nemerle implementation (the algorithm, not the language) is quite a bit less efficient. But this is because I did not really redesign the code to suit objects. For example, in F# I could afford the mutual recursion, since F# has lightweight (fast) functions, with the caveat that the lazy list caches its data.
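
For comparison, the node-with-a-lazy-next arrangement can be sketched in F# as well. This is purely illustrative (my own sketch, not code from the benchmark), with the primality test passed in to break the circular dependency:

    // A node holding one prime and a lazily computed, cached next node.
    type PrimeNode(p : bigint, isPrime : bigint -> bool) =
        // find the next number above v that passes the supplied test
        let rec nextAbove v = if isPrime (v + 1I) then v + 1I else nextAbove (v + 1I)
        // computed on demand and only once, like Nemerle's LazyValue
        let next = lazy (PrimeNode(nextAbove p, isPrime))
        member this.Prime = p
        member this.Next = next.Value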

In Nemerle, the primality check in IsPrime requires more and more objects to be created until the test is passed (note the circularity, where ld looks for a next prime, which in turn calls ld to check its primality), so it bogs down quite quickly. Indeed, for small numbers (less than 10 digits) the performance is quite close, implying that the actual languages are not far apart in speed. Nonetheless, the rate of increase in running time for Nemerle is much steeper than for F# (by a factor of ~14 before the change below, dropping to near F#'s after, for single iterations), due to the object thrashing I was doing. For example, by merely injecting the following code

private ldf(k : decimal, n : decimal) : decimal
        match(k)
            | m when n % m == 0 => m
            | m when m * m > n => n
            | _ => ldf(k + 1, n)

private ld(n : decimal) : decimal
        ldf(2, n)

and replacing the appropriate function in IsPrime, factoring 99998877665542310 went from taking 25 seconds to 3 seconds. A similar injection into F# slowed it down, from 2 seconds to 8: F# certainly benefits from the caching in its LazyList implementation. Both languages are eagerly evaluated except in small localized places; in effect the evaluation is delayed rather than lazy in the true sense that one finds in Haskell. Thus, for example, this:

let rec sieve = function
    | LazyList.Cons(z, zs) -> z +: sieve (LazyList.filter (fun n -> n % z <> 0I) zs)
    | _ -> failwith "?"

would fail to terminate when given sieve Z3, while similar code in Haskell knows where to stop evaluation.

Speed

As noted earlier, the F# code was fairly fast, helped, I suspect, by its caching: larger numbers benefited from previous computations and so had less work to do. One can really tell over multiple iterations, where past iterations aid future ones. For example, it took 3 minutes to factor 100,000 numbers between 1,000,000 and 2^32. And while it takes Nemerle 15 secs (down from ~60 secs) to factor 120 numbers, it takes F# 3 secs. In the next post I will write about Visual Studio integration, some graphs and data, and improvements to the arrangement of the code and the primality testing for Nemerle.

One note is that F# sometimes generates confusing error messages. The following, for instance, was due to using an int in place of a bigint, i.e. n % 0 vs n % 0I:

Error 1 Type error in the use of the overloaded operator ‘( % )’. No operator ‘%’ (full name ‘op_Modulus’) supported by the type ‘bigint’ matches the argument types. See also C:\projects\prime\prime\prime.fs(12,23)-(12,24). See also .

Note the nice type system at work =)

As well, sometimes when you don't define your pattern matching cases properly it fails to build yet gives no error. Still some way to go before being commercial. But I hope it doesn't sell out when it is.

Interestingly, .NET had no built-in arbitrary precision integer data type ('had' because they just put one in for 3.5). F# does, but Nemerle had to use decimal. Note, though, that to change over I have only to rename the type annotations; the code is general. Excluding the modules and classes, the Nemerle and F# code are very much on par.





Numbers + Programs as Proofs

18 07 2006

I wanna cover things many people take for granted, from an angle they are not used to. For example, I used to think vectors are directed line segments. Then I thought of them as members of an abstract vector space. Neither of these is correct, because they both are… I plan to cover this by going over equivalence classes in a geometry, in very clear English that requires no background in math beyond knowing how to count and basic geometry.

I also plan to continue by constructing the integers, the rationals and then the reals; to go over interesting numbers like the “Number of Infinite Knowledge” and inaccessible numbers, and how to construct a number that cannot be constructed; and finally the hyperreals, which, if you'll notice, I am trying to write on… they are quite useful for calculus.

I also hope to be motivated enough to cover the surreal numbers (drop back from the hyperreals and extend the reals with the surreals), the p-adic numbers (drop back to the rationals and complete them in a new way) and a new method I just read about which constructs a system isomorphic (equivalent) to the reals but skips the rationals, proceeding directly from the integers in its construction.

Numbers, numbers, numbers!!

                 ---------------->  "Eudoxus" numbers, equivalent to the reals
                |
Naturals --> Integers --> Rationals --> Reals --> Complex
                               |          |
                               |          |===> Hyperreals  <- both contain infinitesimals
                               |          |===> Surreals       and infinite numbers
                               |
                               |---> p-adic

Programs as Proofs

In other news I'm learning Coq, where you construct maths much the same way you write programs. In essence you are programming math theorems. Not only that, your mathematical proofs are guaranteed to be logically correct if they are verified successfully (compile properly). This is a major bonus for peer review, since reviewers need only briefly look over your proof (code) to make sure you didn't cheat in the definitions (stuff like that), as opposed to the weeks to sometimes months it takes now. I like this because I have always been enamored of intuitionistic logics and constructivism (which applies in this case).

Thanks to this I've been learning a lot about type theory and constructive logics, so I'm understanding more and more of what those people in the software engineering forums are saying! Further on, thanks to the Curry-Howard isomorphism, your programs are identical to proofs. That means you can write a program as a proof and Coq will extract OCaml (or Scheme or Haskell) code. Now, not only will your algorithm be guaranteed to be syntactically correct, it will also be logically correct. Not only that, the Coq environment is a proof assistant and can actually help you out and automatically prove simple sections. And because you are working at a very high level with incredibly rich mathematical types, the going is just unimaginably fast (I'm talking many magnitudes here). Imagine it… proving your complete scenegraph in less than an hour (or more depending on your skills) vs. implementing it in C++. I'll be sure to write up my experience. My first pet project may be a simple matrix library extracted to OCaml, which should work in F# too. I plan to take advantage of this isomorphism extensively in my main project, probably improving my efficiency by at least a factor of 20 :D. Hopefully I make up the time I'm losing learning this.

The only thing about Coq is that there is very little documentation on it. The only book on it costs £40 and there is only one tutorial on it on the entire internet. So I'm mostly having to learn off the reference manual and the developments in its standard library. It's tough and slow going.

Some n00b proofs from a tutorial I'm following. One proves a simple proposition and the other proves that a transitive and symmetric relation is reflexive (wherever an element is related to something):

Quote:


Section Tr.
Variable A B C : Prop.

Lemma d_a : A -> B /\ C -> (A -> B) /\ (A -> C).
intros.
split.
elim H0. intros. exact H1.
elim H0. intros. exact H2.
Qed.

Require Import utf8.
Section rr.

Variable A B : Set.
Variable f : A -> B.

Variable D : Set.
Variable R : D -> D -> Prop.

Section R_sym_trans.
Hypothesis R_sym : ∀x y:D, R x y → R y x.
Hypothesis R_trans : ∀x y z:D, R x y → R y z → R x z.

Lemma refl : ∀x:D, (∃y, R x y) → R x x.
intros. elim H. intros. apply R_trans with x0. assumption. apply R_sym.
assumption.
Qed.

Also check out this link, which contains lots of interesting interviews with great people. Lots of stuff to learn from there: People’s Archive





Numbers: Part 1

9 07 2006

In this post I go over why numbers do not exist, since, in a way, we may say that they are pulled out of thin air. People in the olden days liked to lock themselves into certain dogmatic ways of thinking, something which has cursed the following generations into having more trouble with math concepts than need have been. You can tell the closed-mindedness of the people of old by their insistence on such names as positive, negative, natural, rational, irrational, real and imaginary for number types.

The task of this post is to show that the imaginary number i is no more and no less real than the number 4. You will often see people say such things as: how can you have the square root of a negative number? Some even go as far as to ask how you can have negative money. This is due to a misunderstanding of what a number is.

On the first: how can you have issue with the root of a negative number? This is simply centuries-old dogma, passed traditionally from teacher to student in grade school, holding that the positive integers are real, or exist somewhere, and that only they can be numbers. We are taught properties of the integers and told that only those things which follow these rules are numbers. How silly. On the second, negative money: the issue is that the positive integers' associations are more concrete. 5 -> cats. 1000 -> money. Numbers are no more than an abstract invention, a tool we may attach to appropriate situations to help us reason about certain things we encounter in nature.

We say 5 boxes and attach the concept of 5 to boxes, which enables us to envision the multiple boxes. At 45443454 we can no longer even envision the boxes and simply have another concept, a vague notion of many boxes: 45443454 -> vague concept -> many concrete objects: boxes.

Attachments are possible with negative integers as well. Consider negative boxes, where we denote a lack: -4 -> the concept of needing 4 boxes. We can never make our way down to the concrete object, the box, but the relationship with concrete objects is still obvious. As we go higher and higher we are less able to associate numbers with concrete objects and the relationship is less visible: say with some number q which is a quaternion, for example. A quaternion is not so easily associated; we have less concrete experience with such a concept and so feel it to be less natural, where in fact the positive integers are just as "fake" or "real" as the quaternions.

We are used to handling integers; the path from the positive integer concept to a concrete object is quite simple, creating an illusory sense of concreteness. The more complex our numbers, the less we take them for granted and the more distrustful we are (their path to experience is too complicated); conversely, for simple numbers like the positive integers we have so little difficulty that it may be said we cannot dissociate them from objects. This causes problems regarding what a number is, creates misconceptions about what a number may be, and in the future results in us being mistrustful of treating perfectly valid numbers as numbers.

General sketch on how to construct the natural numbers

One way to construct the natural numbers, due to von Neumann, is to use set-theoretic formulations to define them. I go into it very informally. Consider the empty set ∅ (a set is an aggregation of a collection of objects: a pack is to a set as wolves are to a set's members; a pack is a conglomeration of wolves). The empty set has no members. It is itself an existing thing, but has nothing whatsoever as its contents - basically a collection of pure nothing. So we take the empty set and from there make a set with that as its member. We then make another set that has the empty set and the set(s) we have made as its members. We do this ad infinitum. This construction has the property that at each stage, the current set in question has all the previous sets as its members.

So we have ∅ -> {∅} -> {∅ , {∅}} -> {∅ ,{∅}, {∅ , {∅}}} -> …

The set of all these sets, as an implementation of the Peano axioms (which tell us what it takes to be considered the natural numbers), is equivalent up to isomorphism to (basically the same as) the natural numbers. Yes, that is:
∅ = 0
{∅} = 1
{∅ , {∅}} = 2
{∅ ,{∅}, {∅ , {∅}}} = 3

This has the interesting property that the number 52 is inside 100: the number 52 is contained in 100. In fact the number 100 contains all the numbers previous to it as members. The natural numbers, in essence, are under this method made out of nothing; we just have collections of empty aggregations. That is our natural numbers, so real and obvious.
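
As a toy illustration, here is a quick F# sketch of my own (lists standing in for sets):

    // Each number is (a list standing in for) the set of all smaller numbers.
    type VNat = VNat of VNat list

    let zero = VNat []                          // 0 = the empty set
    let succ (VNat ms as n) = VNat (ms @ [n])   // n + 1 = n together with {n}

    let rec ofInt n = if n = 0 then zero else succ (ofInt (n - 1))
    let toInt (VNat ms) = List.length ms        // a number has exactly that many members
    let isMember k (VNat ms) = List.exists ((=) k) ms

    printfn "%b" (isMember (ofInt 52) (ofInt 100))   // true: 52 is a member of 100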

As a final exercise, consider the real number line. Try to point out the location of π on it. We may get arbitrarily close to it, zooming in closer and closer, but we will never arrive at it, or equivalently, we do so only at infinity. How is that "real" number any more real than the square root of -1?





The Benefits of Jabberwocky

17 06 2006

I was looking up some quick info on p-adic numbers and algebraic structures (irrelevant, but it provides the background) when I happened upon this page on the monstrous moonshine conjecture. Here is a quote on the etymology:

Quote:

The Monster group was investigated in the 1970s by mathematicians Fricke, Andrew Ogg and John G. Thompson; they studied the quotient of the hyperbolic plane by subgroups of SL2(R), particularly, the normalizer Γ0(p)+ of Γ0(p) in SL(2,R). They found that the Riemann surface resulting from taking the quotient of the hyperbolic plane by Γ0(p)+ has genus zero iff p is 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59 or 71 (that is, a supersingular prime), and when Ogg heard about the Monster group later on and noticed that these were precisely the prime factors of the size of M, he wrote up a paper offering a bottle of Jack Daniel's whiskey to anyone who could explain this fact.


… I am not going to explain what this means; with words like supersingular prime in there, I only barely, vaguely understand what it states (it looks like algebraic topology - studying quotients of the hyperbolic plane, an example of a structure formed by taking a quotient of a more general structure, the way a torus is a quotient of a plane) - but there is a simple point in this.

This explanation for the origin of the name seems like reaching; it is akin to a backronym, something explained after the fact to match the event. A more likely explanation is the one offered prior:

Quote:

The term "monstrous moonshine" was coined by Conway, who, when told by John McKay in the late 1970s that the coefficient of q (namely 196884) was precisely the dimension of the Griess algebra (and thus exactly one more than the degree of the smallest faithful complex representation of the Monster group), replied that this was "moonshine" (a word that is used in English as a moniker for crazy or foolish ideas). Thus, the term not only refers to the Monster group M; it also refers to the perceived craziness of the intricate relationship between M and the theory of modular functions.


The point of this post is two-fold: to show how psychological notions affect how we name things, and then how subsequent generations interpret things based on the schemas in use when they are introduced. This in turn shapes the evolution and form an entire subject takes. For example, Conway perceived coincidence and craziness in what logically follows from a set of premises. The ideas of craziness, naturalness and obviousness have no place in the mathematical world. These significances are entirely imaginary, reflecting biases and a lack of full understanding (due only to the newness of the subject, not any lack of skill - Conway is a flipping genius). Unfortunately the stigma attached to a foreign new concept often colours it in a way which makes it unclear, and this is then carried on to subsequent generations, adding baggage of understanding which need not exist.

Secondly, by carefully choosing a name, much confusion may be eliminated in the introduction of a subject. Just because it looks like a duck does not mean it is a duck; it may be a sheep in duck's clothing. The name monstrous moonshine is sufficiently ambiguous to be a good mathematical name. By naming it as such, people do not enter into the subject with preconceived notions due to name sharing, and thus they do not wrongly or inappropriately transfer old concepts into the new schema - transfers which would later cause dissonance and conflict, resulting in confusion and misunderstanding. An empty cup is easier to fill with clear water than one partway filled with mud.

It is important to name new concepts carefully and in a way which discourages pre-assumptions.





Limits from Beyond

16 06 2006

I just began to write a paper on limits which covers them from an uncommon direction, with the aim of having them be more thoroughly and intuitively understood. I've still got a bit to go, since I wish it to be self-contained. A snippet:

A set may also contain itself as a member, so we can have D = {a, b, c, D}. Now consider a set S = {any set which does not have itself as a member}. We know that D is not a member of the set S, since D has itself as a member. But is S itself a member of this set? Is it a member of itself? Can we have {…, S}?

If S is a member of itself then obviously something is wrong, since the requirement for membership is that a set must not have itself as a member. But if we say it is not a member of itself then there is still a problem. Why? Because then it would meet our requirement of being a set that does not include itself as a member; therefore it would in fact be a member of itself! This is obviously a paradox.
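
In symbols, the whole paradox compresses into one line:

\displaystyle S = \{\, x \mid x \notin x \,\} \implies (S \in S \iff S \notin S)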

All of these, though, are solutions offered to make the system we are working with more consistent and free from problems. Mathematics in this sense is like a patchwork quilt. We sew and sew, and if we sew in such a way as to tear our quilt, we must patch it up lest the poor person who uses our quilt freeze to death. There are many implications of this paradox and of the assumptions required to resolve it (such as restricting comprehension and building sets in a well-ordered cumulative hierarchy), including the ability to prove that no strong formal mathematical theory may ever be complete, but these are outside the scope of this paper.





Mathematical Landscapes

14 06 2006

How will this project turn out?

So I've been talking about the game being based on logic and involving theorem proving in a limited sense. But how to do this? The general idea is to create a simple formal system and then figure out a way to port the system to a graphical logic representation. The methodology I am thinking of following is the one laid out in this paper. This would be for spell creation, of which there will be two types: general spells and the creation of planes. Spells will be implemented first and will be my first experiment with this concept. They will be based on my simple formal system, with a mapping of theorems to spell aspects, and will be a way to get my feet wet before the proper hardcore aspect of the project.

The other type, planes, allows one to map theorems and properties to the textures, landscape and creatures of an area… You may create a mapping for the system yourself or simply use the default one I provide. The logic backend will be implemented in F# and will require the F# assemblies, so there will be no domain-specific language. Different logics will allow different directions of exploration; for example, with modal logic we explore concepts of existence and possibility.

The premise itself is very simple: if you map mathematical structures and objects to more familiar things (dropping down a level of abstraction by entering another), you will be able to enter realms of thought once too difficult to penetrate. In my head I have a dream: the plane, based on your mapping, could represent a theorem on the reals. By setting off an explosion, or jumping and watching the results, you can gain insight into what to do…

I'll have to buck up on my graph theory, combinatorics and graphical logic concepts though.

[Image: sys.jpg]

Instead of defining behaviour and maps explicitly, we create in a world whose final output is not known to us. It would truly be like exploring a mathematical landscape, a step towards mathematical visualization…