Forum Archive

Problem with list comprehension

Phuket2

Sorry, I am asking this question in this forum as it's not a Pythonista problem. But I have looked everywhere for a solution. It should be so simple, but I can't get my head around it. So against my better judgement, I am still going to ask for help here.

The code

# coding: utf-8

from collections import namedtuple

Resource = namedtuple('Resource', ['id', 'resid', 'key', 'ord', 'value', 'value1', 'data', 'pickled'])

Resource.__new__.__defaults__ = (None, 0, 'MISSING', -1, None, 0, None, False)

tb_def = Resource(id = 'INTEGER PRIMARY KEY', resid = 'INTEGER UNIQUE', key = 'TEXT', ord = 'INTEGER', value = 'INTEGER', value1 = 'TEXT', data = 'TEXT', pickled = 'INTEGER')

_quote = "'"

_table_sql = ''' CREATE TABLE IF NOT EXISTS {0} {1}'''.format('{0}', tuple([(tb_def._fields[i] + ' ' + item).strip(_quote) for i, item in enumerate(tb_def)]))
print _table_sql

The output
CREATE TABLE IF NOT EXISTS {0} ('id INTEGER PRIMARY KEY', 'resid INTEGER UNIQUE', 'key TEXT', 'ord INTEGER', 'value INTEGER', 'value1 TEXT', 'data TEXT', 'pickled INTEGER')

The output is correct except for the quotes inside each item. I want to use _table_sql later to create a sqlite3 table. It will work if I can get rid of those pesky quotes.
I know I can do it other ways with multiple lines of code. But I am trying to do things more correctly. I think a list comprehension is a great solution for what I am trying to do here, as long as I can get it right. I am sure it's so simple, I just can't see the answer.

Again sorry to ask it here, but it's driving me crazy

ccc

str.join() is your friend.

_table_sql = ''' CREATE TABLE IF NOT EXISTS {0} {1}'''.format('{0}',
    ', '.join(tb_def._fields[i] + ' ' + item for i, item in enumerate(tb_def)))

Extra Credit: It is no longer a list comprehension... What is it?
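For reference, a minimal self-contained version of the fix (a trimmed-down Resource tuple is assumed here for brevity, and zip is used in place of enumerate):

```python
from collections import namedtuple

# Trimmed-down Resource definition, just for illustration.
Resource = namedtuple('Resource', ['id', 'resid', 'key'])
tb_def = Resource(id='INTEGER PRIMARY KEY', resid='INTEGER UNIQUE', key='TEXT')

# join() concatenates the generated "name type" strings itself, so no
# tuple repr (and none of the pesky quotes) ever reaches the SQL.
columns = ', '.join(field + ' ' + value
                    for field, value in zip(tb_def._fields, tb_def))
table_sql = 'CREATE TABLE IF NOT EXISTS {0} ({1})'.format('{0}', columns)
print(table_sql)
# CREATE TABLE IF NOT EXISTS {0} (id INTEGER PRIMARY KEY, resid INTEGER UNIQUE, key TEXT)
```

The quotes in the original output came from formatting a tuple (whose str() includes each element's repr); joining the strings directly sidesteps that entirely.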

Phuket2

@ccc , thanks a lot. Yes, now join is working directly on an iterable. So many things to remember :( I had convinced myself that a list comp. was the right pattern. I hope I live to 100++, I will need it to get all this.
Thanks again

Webmaster4o

The quotes just mean the values are part of a string, the quotes aren't actually PART of the string.

Phuket2

@Webmaster4o , when you pass it as an SQL string to create a table, it goes wrong. You don't want the quoted strings. It's not just a representation, they are there.

Phuket2

@ccc , str.join() is not my friend :) Now I am going back through all my code questioning myself :( It's hard medicine to swallow, but better I take it now, I guess :)

ccc

The answer to the extra credit question was: generator expression.

Generator expressions create tuples while list comprehensions create lists. Tuples have a 12 byte overhead while lists have a 20 byte overhead. Additionally, generator expressions create each item on demand while list comprehensions create all of the items at once and store them in RAM. So, whenever possible, use generator expressions for speed and memory footprint reasons.

str.join() works really well when you are trying to make one long string out of the items in an iterator.

dgelessus

No, generator expressions do not create tuples. As the name indicates, they create generator-iterators. If you want to use a comprehension to create a tuple, you have to say tuple(x for x in y). The advantage of generator expressions over normal list comprehensions is that their "elements" are not all stored in memory at once (at least not by default - the function you're calling still might). This means that for example in your str.join case a list comprehension will first store all the strings to join in a fresh list, whose elements are then joined and then the list is discarded. With a generator expression str.join will simply iterate over it without storing all elements beforehand.
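To make the distinction concrete, a quick sketch (exact object sizes are implementation-dependent, but the relationship shown holds):

```python
import sys
import types

gen = (x * x for x in range(1000))   # generator expression
lst = [x * x for x in range(1000)]   # list comprehension

# The generator expression yields a generator-iterator, not a tuple.
print(isinstance(gen, types.GeneratorType))   # True
print(isinstance(gen, tuple))                 # False

# An explicit tuple() call is needed to actually get a tuple.
tup = tuple(x * x for x in range(10))
print(tup[:3])                                # (0, 1, 4)

# The generator object stays small no matter how many items it will
# yield; the list stores all 1000 results up front.
print(sys.getsizeof(gen) < sys.getsizeof(lst))  # True
```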

Phuket2

Never mind, I seem to have endless ways to embarrass myself publicly. Lucky I have a thick skin 😀
I am not getting huge amounts of time to practice at the moment, as so many friends are visiting.
But I am getting a lot better. I am not yet at a point where I can just know what pattern I should use to solve a particular problem, i.e. list comp, gen, iterable etc...
All these problems I can solve very easily with code blocks. But that's not learning the language. It can be frustrating to spend so long on a small problem when you know you can just write it another way. But regardless, I am going to persist until these patterns are clear in my mind, so it will become a no-brainer which pattern I should be using.
Oh and one day, I am going to write something useful here for the community, or I will die trying ;)
Again, thanks for the help guys.

Phuket2

I hope I am not embarrassing myself further. Maybe I will. But here goes nothing.
I have been trying to abstract access to SQLite for use as the old-style resource manager that the old Macs used to have.
Below is a small example of the code.
Personally, I feel I am doing ok. As I say, it's a small part of what I have done, but it's the most important part. I am trying to get a nice level of abstraction. Yeah, but who knows, maybe I am creating crap again... But I will continue...

# coding: utf-8

from collections import namedtuple
import sqlite3
from random import randint

from faker import Faker
fake = Faker()


my_def = {'namedtuple_name': 'REC',
              'field_names' :[('id' , 'INTEGER PRIMARY KEY'), ('resid','INTEGER UNIQUE'), ('key','TEXT') , ('ord','INTEGER'), ('value', 'INTEGER'), ('value1','TEXT'), ('data','TEXT'), ('pickled', 'INTEGER'),]
                ,}


'''
my_def = {'namedtuple_name': 'REC',
              'field_names' :[('id' , 'INTEGER PRIMARY KEY'), ('resid','INTEGER UNIQUE'), ('key','TEXT') , ('ord','INTEGER'), ('data','TEXT'),]
                ,}
'''

MY_REC = my_def['namedtuple_name']

MY_REC = namedtuple(my_def['namedtuple_name'],[fld[0] for fld in my_def['field_names']])

MY_REC.__new__.__defaults__ = tuple((None for x in range(0,len(MY_REC._fields))))

mytbl_def = MY_REC._make(val[1] for val in my_def['field_names'])


_table_sql_new = '''CREATE TABLE IF NOT EXISTS '{0}' ({1})'''.format('{0}', ', '.join(mytbl_def._fields[i] + ' ' + item for i, item in enumerate(mytbl_def)) )

insert_pattern = '({0})'.format(','.join( c for c in str('?' * len(MY_REC._fields))))

_insert_sql = ''' INSERT INTO {0} VALUES ''' + insert_pattern

if __name__ == '__main__':
    db_name = 'test.db'
    db_table = 'table_c'
    db_num_recs_to_add = 51
    db = sqlite3.connect(db_name)
    db.execute(_table_sql_new.format(db_table))
    # using randint() for testing...resid is unique
    for i in range(1, db_num_recs_to_add):
        r = MY_REC(resid = randint(1, 500000), key = fake.city(), data = fake.first_name())
        db.execute(_insert_sql.format(db_table), [v for v in r])

    db.commit()
    cur = db.execute('SELECT * FROM {0}'.format(db_table))
    for row in cur:
        print repr(row)
    db.close()


dgelessus

@Phuket2 Most likely what you're writing isn't crap - and if it is, at least it's crap based on a good idea, and you'll know how to write less crap in the future. :) Trust me, I've written a lot of (mostly unfinished) crap in Python, which is also where most of my knowledge of random Python things comes from.

Some general comments:

When writing long nested structures like my_def, don't be afraid to use line breaks. Try putting every tuple in field_names on its own line, it helps with readability a lot.

For what you're trying to do with mytbl_def, a dict might be a better solution. The main uses of a namedtuple are similar to that of a struct in C - a sequence with a fixed set of named elements that are not necessarily of one type. In your case it looks like you want the elements to be dynamic, which is what a dict is good for. If you make my_def["field_names"] (currently a list of name-value tuples) into a dict, you wouldn't even need the extra variable - you can simply use my_def["field_names"].keys(), .values() and .items() instead. If you need to keep the ordering, use collections.OrderedDict.

About the insert_pattern line format: don't forget that strings are just sequences. You can write ", ".join("?" * count) to join count question marks with commas.
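The sequence trick in a couple of lines (count is just an illustrative value):

```python
# "?" * 3 repeats the one-character string, and join() then inserts
# ", " between each character of the resulting sequence.
count = 3
placeholders = ', '.join('?' * count)
print(placeholders)    # ?, ?, ?

# Which slots straight into an INSERT pattern:
insert_pattern = '({0})'.format(placeholders)
print(insert_pattern)  # (?, ?, ?)
```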

When connecting to the database, you should use the with statement:

with sqlite3.connect(db_name) as db:
    pass  # do stuff

That way the connection is always properly cleaned up (and your code looks cleaner).
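One caveat worth flagging here: sqlite3's connection context manager commits or rolls back the pending transaction on exit, but it does not close the connection. When closing is wanted too, contextlib.closing can wrap it; a minimal sketch using an in-memory database:

```python
import sqlite3
from contextlib import closing

# closing() guarantees conn.close() on exit; the inner `with conn`
# block commits on success and rolls back on an exception.
with closing(sqlite3.connect(':memory:')) as conn:
    with conn:
        conn.execute('CREATE TABLE t (x INTEGER)')
        conn.execute('INSERT INTO t VALUES (1)')
    rows = conn.execute('SELECT x FROM t').fetchall()
print(rows)  # [(1,)]
```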

Phuket2

@dgelessus , thanks so much for your feedback. I was really making life hard for myself :) I did read a lot about namedtuples being good for use with SQLite; that's why I went down that path. But I was not using my own brain. Maybe some might still say they are better, but as you point out, it seems like you have to jump through a lot of hoops to coerce things the way you want them to work. I reworked the code based on your feedback. I think I did it ok, but I could still be doing some things wrong. But I feel better about the dict version vs the namedtuple version. Mind you, I do like the dot notation possible with namedtuples.
Again, what I posted is only a portion of what I am writing. I have a class to wrap this up in. This version I added the dict_factory; I also had one for the namedtuple version.
Hmmm, about with and conn, it's a nightmare :)
I implemented __enter__ and __exit__ methods in my class, and that's all great if your object is called using the with clause. But if not, what to do? I looked and looked for strategies to deal with this. I could not find anything.
It seems to me it would be excellent if a class could be told: hey, you should always call __enter__ and __exit__ whether or not with is used. I also looked at contextlib's context manager decorator; that also does not seem to provide an elegant solution. I also realise the connection object itself implements __enter__ and __exit__, and that it can handle nested calls. I tried using with conn in every method in my class that used a connection. Whilst things didn't break, I felt out of control. In the class I tried to break down the methods into reusable code, as you do, so just getting a count of records from a table etc... But with this modular approach you end up with many levels of contexts. While I didn't get the gravity of what was really going on, I had the feeling I was creating a big problem for myself.
Given that I am just trying to write a simple app storage utility, I decided to keep the connection open. I have a method to close the connection also. But every method that requires a connection requests one from a method. If there is an open connection we return that; otherwise we reconnect to the database, save the connection in an attr and return it. So the user of the class does not have to do much. E.g., if you want to read or write some values on startup, you can do so and just call close. Next time you call a method on the class, if the connection is closed, it will just reopen it. Conversely, if you are using it throughout the app, there is no real need to close it as it is a single connection.
Also doing various other things for bulk inserts/updates etc. to avoid commits after each execute. I know I can use executemany etc., but sometimes that is not convenient when going through a class. I have done nothing with transactions yet. Later :)
Sorry, I know I went on about this. But this context manager stuff can do your head in. Look, maybe I have it all wrong. But I did spend a lot of time looking and reading. Maybe the wrong things :)

But again thanks for your suggestions and help. I really feel, it helped me a lot.

# coding: utf-8


from collections import OrderedDict
import sqlite3
from random import randint

from faker import Faker
fake = Faker()


db_def ={

        'db_name': 'test.db',
        # some other fields to come later...

        'flds' : OrderedDict((('id','INTEGER PRIMARY KEY'),
        ('resid','INTEGER UNIQUE'),
        ('key','TEXT') ,
        ('ord','INTEGER'),
        ('value','INTEGER'),
        ('value1','TEXT'),
        ('data','TEXT'),
        ('pickled','INTEGER')))
}


# derived from our db_def['flds']
REC = OrderedDict((attr, None) for attr in db_def['flds'].keys())

_table_sql_new = '''CREATE TABLE IF NOT EXISTS '{0}' ({1})'''.format('{0}', ', '.join( '{0} {1}'.format(k, v) for k,v in db_def['flds'].items()))


insert_pattern = '({0})'.format(", ".join("?" * len(db_def['flds'])) )
_insert_sql = ''' INSERT INTO {0} VALUES ''' + insert_pattern

def new_record(**kwargs):

    # not sure if i can do this better or not.

    # create an empty record with all fields set to None
    #rec=  OrderedDict((attr, None) for attr in db_def['flds'].keys())
    rec = OrderedDict(REC)
    for k,v in kwargs.iteritems():
        if rec.has_key(k):
            rec[k] = v

    return rec

def dict_factory(cursor, row):
    #d = OrderedDict((attr, None) for attr in db_def['flds'].keys())
    rec = OrderedDict(REC)
    for idx, col in enumerate(cursor.description):
        rec[col[0]] = row[idx]
    return rec

if __name__ == '__main__':

    # not that it really matters in this case, but because the ids
    # are different, I have created a new copy of db_def because I
    # called dict(db_def); if I just do mydb_def = db_def the ids are
    # the same. Makes sense. Simple stuff, but easy for us newbies to
    # slip up on these small things.
    mydb_def = dict(db_def)
    print id(mydb_def), id(db_def)

    db_name = mydb_def['db_name']
    db_table = 'table_c'
    recs_to_add = 2

    conn = sqlite3.connect(db_name)
    with conn:
        conn.execute(_table_sql_new.format(db_table))
        # using randint() for testing...resid is unique
        for i in range(1, recs_to_add):
            rnd_resid = randint(1, 500000)
            r = new_record(resid = rnd_resid, key = fake.city(), data = fake.first_name(), bad_keyword = 'bad info')
            print r.values()[0]
            conn.execute(_insert_sql.format(db_table), r.values())

        conn.commit()
        conn.row_factory = dict_factory
        cur = conn.execute('SELECT * FROM {0}'.format(db_table))
        for d in cur:
            print d

Phuket2

Oh, the idea is that if I ever actually finish it (so far my track record is not so good), it is also to use UI to create a user interface for various types of Resources, so to speak. Again, the same as the old Mac resource manager. But to try and write optimised versions of each data type is too daunting and unappealing to me. That's why I am going to use a one-record-definition-fits-all approach. Otherwise I would be here for 100 years. I don't expect it to be either super efficient in regards to storage or lightning fast. But I expect it to be reasonably flexible and a decent speed. The idea is it will pickle and unpickle data fairly effortlessly. Picklable types, of course. Dates, I know I have to write some converters for these. Read about it, but have not dived into those yet. Storing dates seems easy enough; I have to make sure I can apply SQL date statements on them correctly.
You could say an enhanced ini file etc... I have done some tests with many 1000's of records, it's still ok. But have to see when finished.

But on this adventure another thing did occur to me that would provide something similar to what I am trying to do but in a more consistent way. I will just mention @omz here, so maybe he sees it 😛
But my idea is that, if @omz had document types you could create, the same as you create a .py file. But this would be a template file that opened in the editor. That template could provide you a simple interface inside the editor to add data rows. Of course you somehow describe the required fields and types. Then when you save the file, it's just saved as a json file. I know there are editors around to do this already. But it would be just so cool if it was built in. Nothing is being broken or proprietary, as just json files are being saved. Of course the template could have a mini definition language as well to extend it. With that foundation in place, it would be easy to write some wrappers for persistent app prefs etc... Ok, maybe I am dreaming. But really, what I think makes Pythonista so special is its ease of use and attention to small details. I think the best implementation would be if @omz could provide a template framework and let the guys here develop the templates. Look, maybe I will regret writing the whole last paragraph as I know it's easy to say, just do it in the UI. But having it built in to the editor has a certain appeal to me. Ok, my 2 cents worth on that topic :)

Phuket2

I just saw this and made a change to the REC creation,

#REC = OrderedDict((attr, None) for attr in db_def['flds'].keys())
REC = OrderedDict.fromkeys(db_def['flds'])

Ok, a lot cleaner using fromkeys
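The two spellings are equivalent; a quick check with a trimmed-down field dict (the field names here are just for the demonstration):

```python
from collections import OrderedDict

# Cut-down field definition, for illustration only.
flds = OrderedDict((('id', 'INTEGER PRIMARY KEY'), ('key', 'TEXT')))

# fromkeys(iterable) maps every key to the same default value
# (None unless another value is passed), preserving the key order.
rec_a = OrderedDict.fromkeys(flds)
rec_b = OrderedDict((attr, None) for attr in flds.keys())
print(rec_a == rec_b)        # True
print(list(rec_a.items()))   # [('id', None), ('key', None)]
```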

dgelessus

Your with looks just fine to me. (Small nitpick - you don't need to assign conn = sqlite3.connect(db_name) by hand, you can write with sqlite3.connect(db_name) as conn to do both things at once.) The point of the with statement is that you can clearly say when you don't need an object anymore (when the block ends) so that it's guaranteed to be cleaned up safely. When you write a context manager, you'd probably make __del__ call __exit__ as well; that way cleanup will still happen if a context manager isn't used.
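A sketch of that __del__ fallback idea (ResourceDB and its layout are made up for this example; note that when __del__ runs is up to the garbage collector, so it is only a safety net, not a replacement for with):

```python
import sqlite3

class ResourceDB(object):
    """Hypothetical wrapper: usable via `with`, with __del__ as a
    cleanup fallback when the with statement is not used."""

    def __init__(self, db_name):
        self.conn = sqlite3.connect(db_name)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()

    def __del__(self):
        # Fallback if the object was never used as a context manager.
        self.close()

    def close(self):
        # Guard so repeated calls (e.g. __exit__ then __del__) are safe.
        if getattr(self, 'conn', None) is not None:
            self.conn.close()
            self.conn = None

# With the `with` statement, __exit__ closes the connection promptly;
# without it, __del__ eventually does.
with ResourceDB(':memory:') as db:
    db.conn.execute('CREATE TABLE t (x INTEGER)')
```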

If you have functions or methods that work with an open connection, you'd have them accept conn as a parameter. The function itself only deals with operating on the database, it doesn't commit the changes or close the connection. That's what the main code should worry about. You're right, it's not a good solution to have every function open the connection, do its changes, and close it again. If you did that and had to change how the database is loaded or the changes are committed, you'd need to change every function.

def destroy_evidence(conn):
    # ONLY USE IN EMERGENCIES
    conn.execute("DROP TABLE confidential")

if __name__ == "__main__":
    with sqlite3.connect(db_name) as conn:
        destroy_evidence(conn)
        conn.commit()

Phuket2

@dgelessus , I just made this change to the dicts. I think this is a lot better. Not sure what you think, but I have hoisted another level of abstraction in my mind, and I should be able to associate most of the table-style SQL statements together with the data.
I personally think this is a big step forward....

Getting excited 😋 It's funny to say, since I am sitting in a go-go bar in Pattaya trying to learn Python 😬 But we can't all be the same

__def_flds = OrderedDict((('id','INTEGER PRIMARY KEY'),
        ('resid','INTEGER UNIQUE'),
        ('key','TEXT') ,
        ('ord','INTEGER'),
        ('value','INTEGER'),
        ('value1','TEXT'),
        ('data','TEXT'),
        ('pickled','INTEGER')))


db_def ={

        'db_name': 'test.db',
        'flds' :__def_flds,

        'table_create': '''CREATE TABLE IF NOT EXISTS '{0}' ({1})'''.format('{0}', ', '.join( '{0} {1}'.format(k, v) for k,v in __def_flds.items())),

        'table_insert' : ''' INSERT INTO '{0}' VALUES ({1})'''.format('{0}','{0}'.format(', '.join("?" * len(__def_flds)))),

        # and so on for sql statements....


        # Although not a big deal, I will get a type of caching also
        # this way: my SQL statements will be evaluated once. I will
        # still have another step for param statements, via .format().
        # But at least to me, this seems a better way.


        # i dont think this is smart, not sure yet
        'REC': OrderedDict.fromkeys(__def_flds),


}

Phuket2

@dgelessus , with the 'with' on connections there are also issues around when you assign a row_factory etc..
I was just playing around with iterdump, not in my class, just my example. I couldn't understand why it wasn't working. I was getting an error, too many values to unpack. I finally worked it out: I had set the row_factory on that connection. As soon as I set it to None, it worked as expected. In my class, every time a connection is requested, whether it exists or is created, I explicitly set the row_factory to None. But I think it's a minefield. I will keep reading and listening until I get the eureka moment (hopefully, I do get it)
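The reset pattern described above looks roughly like this (a throwaway in-memory table is assumed; the dict_factory mirrors the one posted earlier in the thread):

```python
import sqlite3
from collections import OrderedDict

def dict_factory(cursor, row):
    # Build an OrderedDict keyed by column name, as in the earlier post.
    return OrderedDict((col[0], row[idx])
                       for idx, col in enumerate(cursor.description))

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x INTEGER, y TEXT)')
conn.execute("INSERT INTO t VALUES (1, 'a')")

conn.row_factory = dict_factory
row = conn.execute('SELECT * FROM t').fetchone()
print(row['x'], row['y'])  # 1 a

# iterdump() (and anything else expecting plain tuple rows) needs the
# default factory, so reset it before dumping.
conn.row_factory = None
dump = '\n'.join(conn.iterdump())
conn.close()
```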

ccc

@Phuket2 Looking thru this thread, I have to agree with @dgelessus advice:

When writing long nested structures [...], don't be afraid to use line breaks.

We know that you have that widescreen iPad but you don't have to rub it in ;-)

fields = ', '.join('{} {}'.format(mytbl_def._fields[i], item) for i, item in enumerate(mytbl_def))
_table_sql_new = '''CREATE TABLE IF NOT EXISTS '{0}' ({1})'''.format('{0}', fields)

Might be a bit easier to view, understand, and debug.

Phuket2

Ok guys, sorry about the long lines. It's just that the line wrap works so well in Pythonista I never think about it. I went in and cut up the lines, but having to cut in some awkward places makes it less readable inside Pythonista. Anyway, I take your point and will try to break up my lines

Phuket2

In the dict I had ...

        # i dont think this is smart, not sure yet
        'REC': OrderedDict.fromkeys(__def_flds),

As a part of my dict. You can see the comment. I was not sure if smart or not. Actually, I think it's ok.
The definition of the record, stays with the data.
When I want a new record, I do:
Rec = db['REC'].copy()
I know small thing. That's why it takes me so long to get anything done. I ponder over everything. I am sure too much.

Phuket2

I made another change to the dicts

db_def ={

        'db_name': 'test.db',
        'flds' : __def_flds,
        'SQL' : __def_sql, 

        # rec template, we make copies of this
        # db_def['REC'].copy()
        'REC': OrderedDict.fromkeys(__def_flds),

}


So I moved the SQL statements into their own dict. I was lucky; in my class I have a method to get SQL, so it was an easy change. But again, a little cleaner and more possibilities later, as I know what the SQL statements are.

def get_SQL(key):
    if db_def['SQL'].has_key(key):  
        return db_def['SQL'][key]
    else:
        raise KeyError

ccc

def get_SQL(key):
    return db_def['SQL'][key]

This reformulation delivers the same functionality. dict.has_key() should almost always be avoided. It does not exist in Python 3.

dgelessus

Probably because it's exactly equivalent to key in mapping. There probably are some use cases, but in Python the preferred style is to except KeyError rather than checking for the key in advance. (When working with multiple threads this also becomes more than just a style question, because in between the in check and the dict item lookup another thread might have deleted that key. No need to worry about the details unless you're actually writing multithreaded code; just use exception-catching - sorry, excepting - and you'll be fine.)
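The two styles side by side (the dict contents here are placeholders, just to make the sketch runnable):

```python
# Placeholder data, purely for illustration.
db_def = {'SQL': {'table_create': 'CREATE TABLE ...'}}

# LBYL ("look before you leap"): two lookups, and racy under threads.
def get_sql_lbyl(key):
    if key in db_def['SQL']:
        return db_def['SQL'][key]
    raise KeyError(key)

# EAFP ("easier to ask forgiveness than permission"): just index; a
# missing key already raises KeyError, with the key in the message.
def get_sql_eafp(key):
    return db_def['SQL'][key]

print(get_sql_eafp('table_create'))  # CREATE TABLE ...
```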

Phuket2

Thanks guys, I have changed it to read

def get_SQL(key):
    try:
        return db_def['SQL'][key]
    except:
        raise KeyError

But truthfully I haven't got my head around the Python error system yet 😰 I jump to too many other things, like a little boy in a candy shop
I have some basic concepts, but not the understanding I need. But I still have a good grounding in coding techniques, so the concepts will not be foreign to me.

Regardless, learning python is a fantastic journey. In 2006, I did a journey. In about 4 months travelling non stop, I visited 50 branches of a club I belong to. That was 36 countries. Each branch of the club I had a fixed date to meet them on to have a huge night out. With airlines going out of business , typhoons etc..., I didn't miss one meeting. That was a fantastic journey.

I am loving my Python journey just as much

ccc

I guess that I should have explained the logic a bit more completely.

The try, except, and raise lines are not required, because if key is not in db_def['SQL'] then KeyError will be raised automatically.

empty_dict = {}
empty_dict['a']  # will raise a KeyError

Phuket2

@ccc , thanks. But it does remind me of one of my older friends learning the Thai language. He stopped learning Thai after our teacher tried to explain to him about the magic 'o' in Thai. If it's a long 'oo', it's written in Thai (not like that, but it's written). If it's a short 'o', it's spoken but not written. It's a magic 'o', you just have to know it's there.
Python has some similarities 😬

ccc

I liked your story of o but please do not give up on learning the language ;-)

Phuket2

@ccc , thanks ccc. I don't give up on Thai or Python. Met a guy last night through another friend, and he writes in C#. He does a lot of backend work and he lives here in Pattaya.
He is happy to meet up sometimes with me and just talk about programming, so that will be nice. He has a lot of experience with SQL and databases. I am very weak in this area. I have been reading a lot, but it is so nice to be able to discuss it with someone in person.

I have been trying to understand all I can about SQLite. In the caveman days I wrote simple versions of B-tree databases in Pascal. Even the guy I met last night thinks I am going too deep into it because of the ORMs around today. But my nature is to try and understand what it is I am doing. I think if you were breast-fed on the early languages a long time ago, with limited RAM and processor power, you have the feeling you need to understand.

Really, back in the old days, one reason our software was so successful was because we wrote our own data file formats. Our competition either used huge enterprise databases (Oracle, etc...) and expected the users to manage them, or Microsoft Access (DAO at that time, not RDO, ADODB etc..). But they failed. Not small companies: HP, GM. We just took their business from them and transformed their products into our products.

Another simple thing in those days: I did all the screen drawing with double buffers and bit blitting. Many layers to handle on top of the bitmap image, like drawing primitive shapes. Also had to implement a selection lasso. Also used different XOR and blend techniques to affect the underlying bitmap to indicate selections. On update events I only ever bit blitted the affected regions back to the screen. In those old days this attention to detail made all the difference. Of course I wrote all this in C with the occasional asm routine.

Also, because of our file strategy, our network traffic was in our control and easy to handle. Our competition often played havoc on Ford's (automotive) and other big companies' intranets. They also could not control the versions of shared DLLs between releases, creating even more havoc on the intranets. Sorry, I am not trying to say how great I am. That's just what we had to do back then. I find myself in a new world today. It's really exciting. But I want to know how things work at the end of the day. I don't have the time I used to these days, as I did when I was younger.
But again, still a fun and stimulating journey. Just it's taking me longer. I am guessing, ccc, you have had a similar upbringing with programming. Not sure, but I think so

ccc

In high school we had a computer room with four "terminals". NOT with screens but just a printer with a keyboard and a paper tape gizmo (not that identical model but close enough!!). I hacked on it but I don't think I was actually using any programming language.

I tried a class in Lisp in University... Dropped that in the first few weeks and studied Philosophy instead. I never did get back to Lisp although these daze folks rave about Clojure, Hylang, et al.

Basic on a Commodore 64, Basic on Apple IIe, RPG, COBOL (Loved it), Basic Assembly Language on mainframes (I bet that @MartinPacker can probably still do BAL!), C, Pascal (loved it) [First edition of Inside Macintosh worn threadbare from overuse], C++ (loved it!) [Metrowerks CodeWarrior!!], Modula2, HyperTalk --> AppleScript (cool but painful too), sh/bash/et al (hated 'em!!), Java (I actually went to work at Sun Microsystems because I loved it so much but I never got great at it) [I was super into Jini and the secure mobile code aspects of Java], JavaScript (less cool and more painful), Objective C (my mind does NOT work that way), and then finally Python.

The one that I scratch my head about the most these days is JS. I put a lot of time and effort into it before it was great and now that it is taking over the entire planet, I just can not get excited about going back to it.

For work these daze I mostly "code" in PowerPoint (HATE IT!) but I hack on Python whenever I get the chance. I found it late but I LOVE working in it.... It fits inside my brain (a decidedly low RAM environment).

Phuket2

Lol, almost very similar experiences. I didn't work with COBOL. But did some DB programming on mini mainframes. I think my first try at programming was on an Apple IIe using turtle graphics. I got a Pineapple or something like that, and it had a CP/M card in it. Back then it gave you dBase, WordStar, and I can't remember the name of the spreadsheet. But VisiCalc was the most famous spreadsheet anyway. I also did HyperTalk and SuperTalk; they were great because you could write XCMDs and XFCNs or something like that (compiled C code as a resource). My main asm experience was just debugging Motorola 68xxx chipsets. I did a course at Apple in Cupertino one year when I was attending the WWDC; back then it was in San Jose.
I got a little taste of C++, but it was about that time I sold my soul to the devil and went into management. I really did sell out; I moved because of the money, not passion. But in the end, I was very passionate about being a manager. I was a manager at many levels. I think I was good at it, mainly because I loved inspiring my staff and seeing them grow both personally and professionally, and I took the full responsibility of being a manager. I never deflected my failures to my staff. If my staff failed, it was my failure, not theirs. I took my responsibility very seriously.

Anyway, we have to have a drink together sometime, ccc. I am sure if we go into detail we have a lot more in common regarding the computer industry.

Oh, I hated AppleScript and still do, but it's even worse when you are doing it from C

ccc

You and your C# guy should study SQLAlchemy together. It is built into Pythonista and I am told that database folks (I am NOT one of them!) fall in love with it. If you get him into SQLAlchemy then he will probably be hacking in Python by the end of the year and in 2016 he will stop writing C# altogether. Just think of how liberating that would be!!

Phuket2

@ccc , wow. I didn't know SQLAlchemy was there. I am not sure when it turned up. Maybe the last time I tried was with 1.5; I can't remember. But I have read about SQLAlchemy and it looks great. I just tried import sqlalchemy and sure enough it works.

It's exciting. But I will continue to do it the hard way for the moment, only because I am learning a lot. All libs are great, but as I mentioned before, it's still nice to know what's going on under the hood. But I am happy it's there. Hopefully the experience I get with SQLite will help me a lot when moving to an ORM

MartinPacker

Yes I can still do BAL. Have 2 facsimile S/360 Reference cards in a frame on my home office wall.

Someone once thought they were threatening me by saying "you now have to learn xxx" to which my response was something like "oh that'll be about the 20th language I will've become familiar with". :-)

Phuket2

@MartinPacker , it's slow learning for me now. It's been so many years since I was doing it commercially, and I am getting older and I drink too much. But I am retired now, so I am allowed to 😇

Have to say, I really loved C. Once you are in the zone with C, it's very nice I think. But at the same time, I often found it difficult to debug other programmers' code, just depending on their style. A lot of my C involved the Macintosh Toolbox, so the better you knew that, the easier it was to read and debug the code, knowing already in your head which API calls could move memory, etc.

But I have to say, I am liking Python more. Python written in a very simple way is very verbose, but written correctly, it is very compact. I really find it a buzz to learn it.
I have the nice guys here to thank for pointing out the correct way of doing things. Yesterday, I saw exactly how lucky I am. I am in the "Python Programmers Community" Facebook group. I was horrified to see some of the advice offered to some questions. I mean, truly bad.
So I feel blessed to have been guided here by some very smart people.