



Week 23: VIVA SCRIPT + Slides

5 min: issue and references (wider and personal)


    "The car shapes the built environment, cuts through the landscape, dominates the soundscape, is a key commodity in production and consumption. Despite this prominence the car, unlike for example information technology and its impact, has largely been ignored by sociology as a component of social being and social action in late modernity." Tim Dant Early 2000s
    
Cars are taken for granted. Our society has been shaped for and by cars, more and more and for so long that it's hard to imagine a reality without them. They are seen almost as a right, as opposed to a commodity or a luxury. But they are a commodity in many of the ways they are used, and there are often obvious alternatives. In recent decades the car market has stalled, no one is making money, but people still depend on cars, so there is an obvious market need for new ideas.
    
    DRIVERLESS CARS
    
There are many ways to approach autonomous driving when considering its effects and causes, many of which I have concerned myself with in the past: Silicon Valley, tech development, and the market justifications used to sell untried ideas on the basis that if they sell, they must be right for us.
    
There are genuine industry concerns about the capabilities of driverless car technology, doubts that it is possible at all. There are concerns over culpability, and legal battles over who is responsible for what. Driverless cars can also be seen to further the social isolation caused by technology. These are all problems I could generalise to the tech industry and start-up culture, and therefore a good way to examine my larger interests, with an object that is tangible and sacred to modern society, against the intangibility of tech development, data and code.
    
The trolley problem is one that applies to driverless cars. The utilitarian case is to make the decision and kill the one, but many people would not be able to bring themselves to intervene. That is intuition: the sense that killing one person is wrong.
    
There is a base dilemma here that you could say the driverless car solves. The car effectively becomes both the rails and the trolley and moves itself, for that is the logical route for the car to take, and logic is the operative mechanism for any decision the car makes, moral or non-moral. This relieves any human of having to decide, and of involving themselves in the killing of an innocent person.
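That "logic is the operative mechanism" idea can be caricatured in a few lines. This is my own illustration, not anyone's actual control code, and it assumes harm can be reduced to a head-count, which is exactly the assumption the trolley problem asks us to doubt:

```python
def utilitarian_choice(options):
    """Pick the action with the lowest expected harm.

    `options` maps an action name to the number of people harmed.
    The machine has no intuition about intervening vs refraining;
    it just minimises the count.
    """
    return min(options, key=options.get)

# The classic set-up: stay on course (five harmed) or divert (one harmed).
trolley = {"stay": 5, "divert": 1}
```

`utilitarian_choice(trolley)` returns `"divert"` without hesitation; the intuition that pulling the lever implicates you never enters the calculation.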
    
But in the context of society and our relationship to cars, and on the subject of culpability and responsibility, this is difficult. Is it Musk, is it his engineers, is it the company? If I hit someone on the road, I am responsible; but if a Tesla on FSD hits someone, a corporate body could be responsible (disputed, of course, by Tesla and Uber, with big lawsuits). Currently it tends to be that someONE is culpable, because of our ideas of ownership and property.
    
Aristotle made the point that only voluntary actions qualify for praise or blame, but couldn't the use or purchase of a driverless vehicle count as condoning its actions? Blame perhaps lies with the conscious agent, or with institutions.
    
The emphasis on single human responsibility is anthropocentric, of course. Could not a system, or a society, be culpable?
    
Cars will not necessarily be able to decide correctly between conflicting duties when it comes to owner/operator safety versus pedestrian safety. That takes human authority and intuition.
    
Delegating your judgement to this technology effectively lets a market-defined utilitarian moral judgement decide what is right and wrong in these scenarios. And this is how the car is sold, a selling point being that you have less responsibility (moral convenience). But in this same moral system all actions need a free human agent, and that is definably YOU when you use or own one, more than the company, because that corporate human authority is too dilute, and far too powerful, to be found culpable.
    
The idea that the market defines the good is technological (or economic) determinism, while these products are sold on the social-constructivist basis that they will liberate and empower the user. Libertarians (I'm generalising tech start-up culture as all libertarian) swear by social constructivism, that we are each free to influence the market, but I believe this is where one of the main contradictions of Silicon Valley lies. These products are developed not to empower the individual but to be sold. Without a deterministic view (which would excuse much responsibility as societal, situational or corporate), tech libertarians hold people responsible for their actions, and if you take responsibility for a driverless car then you can be held accountable.
    
Terms of service, and the general public understanding of technology: people do not understand the technology they use, and Tesla's answer to this is, obviously, a signature to sign away any responsibility it has, and an "options and preferences" panel that determines whether the car drives faster or slower and safer.
    
(Californian stop)
    
In a normal car, you understand how to work it without knowing how it works, and moral responsibility is clearly defined in law based on the understanding expected of you. Driverless cars extend this ignorance further, in several ways, not just morally.
    
Your Google Waymo hire drops you in front of a certain store, on a certain street, determined by an algorithm fed your data. What agency is left to you personally? Or is it all in the machine? Just as agency over many aspects of human living is now governed by technology, your understanding of your real-world location is affected by driverless cars. Market-driven production and development tears down old, non-profitable communities and lifestyles to construct new, profitable ones, made desirable through promises of convenience and advantage, promises of problems solved despite the problems inevitably made. In the real world this is gentrification; in mechanics it is the automation paradox; and in tech development it is planned obsolescence.
    
Google Maps and its recommendations dictate business success, and the only way to gain it is to engage with the intention of making money. Metropolitan lifestyle.
    
No locality: as with Google determining website traffic, these systems put all places into the same place, and access is determined through services and data-driven algorithms. The systems are market-driven, and your decisions to come and go are influenced by value conceivable through mathematical economics. Even if you're fine with how Google uses your data, and with the switching of control between human agency and computerised economic systems, you can still be concerned about the destruction of locality, on a social and environmental level.
    
This is not a liberating technology, it is a convenient one. It is capitalism finding ways to control, and charge for, access to something we currently have but must navigate ourselves, with our own time.
To explain this, consider how social media replaces human interaction and directly affects our personal capabilities for it. We are aware of the way we are perceived online, given the ability to determine exactly who we are and to pre-determine how other people perceive us, regardless of reality. Driverless cars do the same thing. Their convenience means you lose any understanding of route, distance, skill and decisions, and understand only the way you are perceived to be travelling. They also scrap the social interaction of the road: driving a normal car, you are constantly communicating with, accommodating and challenging other road users; the driverless car is yet another social interaction got rid of. As a consumer product, the driverless car's convenience comes from nullified skill, real-time decision-making and local knowledge. Despite not having these things, in a driverless Tesla you are still responsible for them. Like many technological devices, the authority of a human's moral decision-making has been translated onto an operational object through decisions recorded in code and design. Tesla tries to avoid any trace-back of responsibility with a preferences and options menu, including the granny-drive and Californian-stop options.
    
Technological development in the modern market economy (and in anarcho-capitalist libertarianism) is inherently ethically naturalistic. The idea of what is right and wrong is utilitarian, with ethical values being cognitive and factual, situated and virtuous, as opposed to emotivist or prescriptivist: it is natural moral law. If a product succeeds in the market economy, it means it is good, no need to argue, because even though some bullshit might be what sells the product initially, the bullshit reasoning is justified by the buying of it. There are many problems with statistical market moral justifications like this, one being that facts are not moral values, and awarding numerical success (making something "good") depends on the setting of thresholds. Bentham's utilitarianism ignores motives, rules and duties: if the outcome is measurably good then it is justified. (Bentham and John Stuart Mill.)
Do you agree that morality is so objective? The kind of moral discussion around cars tends to involve intuitionism: the idea that moral right and wrong comes not from evaluating results or logical argument but from moral intuitions, often arguing that good cannot be defined. The key thing about intuitions is that they are stand-alone beliefs, so moral judgements are self-evident to those who hold them. This is moral realism in the sense that truths exist independent of persons: "murder is bad because it's bad". (G. E. Moore.)
    
    
8 min: experiments and development
    
I spoke to Toby Breckon, a computer engineering professor at Durham University, about the PhD programme he runs with Jaguar Land Rover developing driverless technology. I also spoke to Chris Linegar from Wayve, an autonomous driving tech company based in London. What I heard supported my assumed position on this technology. Linegar was hard-selling, talking about the technology as only a matter of time and the next obvious step for our fast-paced modern reality, interestingly saying nothing about drudgery or menial labour. Breckon was far more reserved and still had doubts about this tech working at all in Britain, predicting public backlash from botched deployments if it is rolled out any time soon, without V2V networking and infrastructural changes.
    
    The Driverless RC Truck
    
It works like this:
-	An ordinary RC car, with motor, steering servo, and speed controller
-	A Raspberry Pi
-	A servo driver
-	A small camera

The camera records what the car sees; the servo driver is how the Pi interfaces with the steering and speed controller.
    
Using my laptop, I drive the car around the track I made in the studio, as it records pictures and my inputs from the controller.
Using an OpenCV/Keras modelling software kit, a model is built, and the car can then drive itself around the track (theoretically).
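The record-then-train loop above can be sketched in miniature. This is a toy stand-in, not the real OpenCV/Keras kit: the "training" here just memorises frame/input pairs, where the real model compresses and generalises from them.

```python
def record_lap(frames, inputs):
    """Pair each camera frame with the human (steering, throttle)
    input recorded at the same moment; these pairs are the data."""
    return list(zip(frames, inputs))

def train(pairs):
    """Toy stand-in for the Keras fit step: memorise the pairs.
    The real kit builds a model that generalises from the images."""
    return dict(pairs)

def drive(model, frame):
    """Reproduce what the human did when the car saw this frame."""
    return model.get(frame, (0.0, 0.0))  # (steering, throttle); stop if unseen

# One demonstration lap of three frames and my controller inputs.
model = train(record_lap(["f1", "f2", "f3"],
                         [(0.1, 0.5), (-0.2, 0.5), (0.0, 0.6)]))
```

The point the sketch makes is the one in the text: the car only ever replays a compressed version of my driving.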
    
The car is nothing new; I got a lot of help and guidance from a few American forum users who have a club making and racing driverless RC cars. The car was a means to examine the technology and understand it. It gives me an authority to converse in the language of machine learning and artificial intelligence. I learnt a great many things from this exercise.
    
The car uses Keras modelling, which is a mimic machine-learning method as opposed to an affirmational one, using only data from correct driving. The car takes a picture and records the drive data, then the modelling boils the images down into data which correlates to decisions.
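To make "boils the images down into data which correlates to decisions" concrete, here is a nearest-neighbour sketch of my own; it is far cruder than the actual Keras model, which interpolates rather than looks up, but the mimic logic is the same.

```python
def steer(memory, frame):
    """Mimic learning in miniature: find the remembered (downsampled)
    frame most like the current one and reuse the steering recorded
    with it. Frames here are tuples of pixel brightness values."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, steering = min(memory, key=lambda pair: distance(pair[0], frame))
    return steering

# memory: (downsampled frame, steering) pairs from my own driving.
# The values are illustrative, not real camera data.
memory = [
    ((0.9, 0.1), -0.3),  # bright left edge: steer one way
    ((0.1, 0.9), 0.3),   # bright right edge: steer the other
    ((0.5, 0.5), 0.0),   # centred: straight on
]
```

A new frame like `(0.8, 0.2)` lands closest to the first memory, so the car repeats the steering I gave in that situation.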
    
Behavioural modelling comes into it when I teach it to drive slower or faster on the same track; I can toggle between the two models on my laptop as it drives (a problem being that it would crash even more when driving quickly). I could talk for an age about the nuances of different methods and techniques, but moving forward.
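Mechanically, the laptop toggle is very little code: something like the following, where the two trained behaviours are reduced to throttle multipliers for illustration (the real modes would be two separately trained models).

```python
class ModeToggle:
    """Flip the live car between two trained behaviours from the
    laptop. Real modes would be two Keras models; here they are
    just throttle multipliers."""

    def __init__(self):
        self.modes = {"slow": 0.4, "fast": 0.9}
        self.current = "slow"

    def toggle(self):
        self.current = "fast" if self.current == "slow" else "slow"

    def throttle(self, base):
        return base * self.modes[self.current]

panel = ModeToggle()
```

One keypress calls `toggle()`, and every subsequent drive command is filtered through whichever behaviour is current.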
    
The road built for the car: the car drives better the more work I put into the track I lay. This is the infrastructural point coming back. Should I improve my code logic, or do I "improve" the road? There is a study to be done on the tension between the different objects in this network developing in accordance, affordance and accommodation of one another.
    
Machine teacher: there are different ways of developing this tech. One way is like my RC car, where the car drives directly based on my way of driving; another is purely code and AI logic, where the car drives based on decisions and values set against recognisable objects in space, and its behaviour is then affirmed by humans when tested. Autonomous cars usually use a mix of both.
    
    Who is in control?
This is the point at which the Tesla options panel, and the argument over responsibility and culpability, come into question.
    
“Purposeful action and intentionality are not properties of objects, but neither are they properties of humans either. They are properties of institutions, apparatuses, of what Foucault called dispositifs. Only corporate bodies are able to absorb the proliferation of mediators, to regulate their expression, to redistribute skills, to force boxes to blacken and close. . . . Boeing 747s do not fly, airlines fly.” Bruno Latour
    
    
My exercise in RC autonomous driving showed me that I, its creator, was the one responsible for its driving and the way in which it drove. I would teach the car, and when it drove it was only doing what I had taught it. Watching it go, I would not think "oh my, I'm really bad at driving"; I thought of it more as a child I had failed to raise properly, and there isn't much point trying further. I had responsibility but no immediate agency over it. If I hand my friend the RC car and say "this will drive great", and then it does what it does, they will think "wow, Erik did a really shit job".
    
    I wanted to have a go with the behavioural modelling side of it, and see if I could control the car using interactive hardware. If I am in a busy mood, I would flick a switch and make it go faster. 
    (All the knob and switches)
Does this immediate option give anyone the agency they want over the car (as in the Tesla panel)? There is a point to be made about the façade of control technology is designed to have, to give a sense of control over your use of it. People need to feel they can intervene in and customise their use and experience of technology, so this sense is designed in, without it necessarily existing.
I experimented with different switches, considering the control input they provide in different tech: flight-grade switches (thinking of the film The Right Stuff, where the pilots beg for at least a sense of control over the spacecraft), and music-equipment switches and knobs (these I liked for their analogue control over the elements they change). I spent a good while figuring out how they work and how to interface them with the Raspberry Pi.
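Interfacing the switches with the Pi mostly reduces to polling pin states and spotting changes. A minimal sketch, with the hardware read injected as a function so the same logic runs without a Pi (`raw_read` stands in for something like `RPi.GPIO.input`; the pin numbers are illustrative):

```python
def read_panel(raw_read, pins):
    """Poll each switch pin and return its on/off state.
    `raw_read` is the hardware read function, e.g. RPi.GPIO.input
    on the actual Pi, or any pin -> 0/1 function in testing."""
    return {pin: bool(raw_read(pin)) for pin in pins}

def flicked(before, after):
    """Which switches did the operator just flick between two polls?"""
    return [pin for pin in before if before[pin] != after[pin]]

# Simulated hardware: the switch on pin 4 flips between polls.
before = read_panel(lambda pin: {4: 0, 17: 1}[pin], [4, 17])
after = read_panel(lambda pin: {4: 1, 17: 1}[pin], [4, 17])
```

The interesting design work is then entirely in what each pin is made to mean, which is where the labelling question below comes in.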
    
    I began considering how I can label these switches. Because an action does not equate to all cause and effect, I didn’t want just “fast and slow and sharp turn and smooth turn”. I was interested in what I could make the labels say and what that would mean for you to flick it. 
    
The trolley problem is an argument, a discussion; if you found yourself in that situation, no one would actually critique your decision or really blame you either way. It is a thought experiment. A change from driver to driverless is the loss of immediate agency; it allows you to consider and pre-determine these philosophies, which means these switches do not need to be labelled with immediate actions.
    
I draw up a grocery list for my wife. I write it down in sensible marks. These are the things we need: toilet paper, stuff for the kids, and then of course cash so we can buy fast food for supper; a standard way of shopping. I leave the grocery list and I drive away and I'm killed, utterly, run over by a bus, flattened like a tortilla. My wife comes in. Can my message function in my radical absence? Yes, she can still go to the store and buy food, and maybe only later will they notice: where's dad? And they hear on the news "overweight philosopher flattened by a truck, tortilla." Rick Roderick on Derrida.
    
The black box is my answer to the Tesla options panel: an in-vain attempt at capturing a personal philosophy that could drive in your absence.
    
    A black box, 8 switches, each labelled with a basic car related moral dilemma or preference. 
    
The trouble with the black box was fitting the whole dilemma on the label. I liked these labels because they're the same ones used by electricians and engineers when labelling power networks and such: a label for a switch which doesn't even say what it does, but is a reference to its network, a name but not a function. Those labels are abbreviations, a language only sparkies know. Philosophy is the same, a language that is interpreted and misunderstood. I thought it fun for a switch to be labelled with its connotations before its mechanism, like a light switch labelled with dying polar bears or something.
    
    The black box is this pseudo imprint of a person on a vehicle. 
    
    
    Pre-set Guitar Pedal
What if the box came with an Elon preset, or a Jeremy Clarkson preset? Guitar pedals are a filter for the electronic sound of a guitar, triggered and controlled by pressing the pedal (a pedal because your hands are full, as in a car). To cover a song, you need to find the exact sound used by a musician and fiddle with the settings to match it… or you can find their preset.
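The preset idea translates naturally into configuration: a named bundle of settings stamped over the car's current behaviour, the way a pedal preset overwrites your amp settings. The preset names and values below are invented for illustration, not real Tesla options.

```python
# Hypothetical driver presets, in the spirit of a guitar pedal's
# downloadable settings for copying a musician's exact sound.
PRESETS = {
    "granny": {"max_speed": 0.4, "full_stop_at_signs": True},
    "californian": {"max_speed": 0.8, "full_stop_at_signs": False},
}

def stomp(settings, preset_name):
    """Pressing the pedal stamps a person's imprint over the car's
    current behaviour, returning the new settings."""
    new = dict(settings)
    new.update(PRESETS[preset_name])
    return new

current = stomp({"max_speed": 0.6, "full_stop_at_signs": True}, "californian")
```

The pressing is the interesting act: whoever stomps chooses whose imprint drives, which is exactly the responsibility question the pedal is meant to make tangible.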
    
    (folded steel box and “stomp box” switch, auxiliary input and output) 
    
Again, a pseudo behavioural imprint of a person on the car's behaviour, but under the authority of the implementer, button-presser, or owner-operator. When you press the FSD mode button in a Tesla, this is a tangible realisation of who to consider in control and who responsible, with a recognisable face.
    
    
5 min: explaining the final outcome (what it does, why, and situating it)
    
    What next? Well there were a few things
    
-	The black box boils down the idea of a person's potential pre-determined real-time philosophy into just 8 questions. (Not enough questions.)
-	Those questions are just 8 or so words long. (Not enough words.)
-	There is an absurdity in a bunch of binary choices functioning like this. (The same as the promise of tech function versus its reality; a good point to take further.)
    
    I wanted more switches. Let’s say 100, On a big mirror.
    
To comment on general understanding of technology: we are taking responsibility for dangerous objects without a full understanding of them, but this could be said of cars prior to autonomous tech. Can any road user really be tested on the Highway Code, with its convoluted, drawn-out ifs, buts and maybes of legal responsibilities and considerations? When we drive cars we understand human behaviour, and roads function on the basis of a functioning society with codes and rules, but we only need to know enough to participate. This just about works in society. But there is a counterpoint to the notion of educating the public on how tech works and what their responsibilities are: terms and conditions, social media, information overload. We really can't expect people to take it all in, and therefore it's hard to hand them responsibility.
    
    There was the idea to produce a manual for the switch board, one which was able to explain the switch, the meaning and the mechanism behind it, a fool-proof legal-like document inspired by terms of service, owner’s manuals, the highway code and warranty documents. Developing this took me a long while, since there was much to consider and experiment with.
    
    I started with lists 
    -	A list of moral dilemmas and ethical questions 
    -	A list of drivers, real, fictional, sentient cars
    -	A list of different types of motor vehicles
    -	A list of driving idioms 
    -	A list of car models
    
    “Robots could not fly sheriff’s patrol aircraft, not when many Sheriff’s Department duties had at least the potential for causing harm to humans.” ... “The joke of it was the Spacers had never gone in much for automating their equipment, because it was the robots who were going to operate it anyway.” ... “This made the manual job of flying far more complex.” Asimov
    
Considering the options for who is driving my car, what car they're driving, and the mind they have: these are all good options for the switchboard.
    
Considering the idea of the pedal, how each switch connotes differently depending on who flicks it, and a philosophy external to your own (that of the operational technology), I have written the switch descriptions considering how I situate myself in making the switch, in relation to how you take responsibility when you flick, or don't flick, it. I have made this switch, but I may have done so in consideration of how other people desire to authorise the behaviour of the machine.
    
    Examples.
    
The switches can be dilemmas or emotions, but they can also act as a way to perform functions in cars that people perform now. The board is a skeuomorph, because if cars do become driverless they will need massive infrastructural and societal change, for worse or better.
    
    42
    Human override in emergency?
    
    
    A person has a fatal heart attack while driving a car, as a result of which the car is involved in an accident and kills someone. Since the person who had been driving was dead (or even simply unconscious at the wheel) at the moment when the accident took place, you might think they cannot be morally responsible for what happened, unless the person driving the car knew about the heart condition, and/or they had been warned by a doctor not to drive. However, what if the person was trying to get a desperately ill person to hospital? Would it be fine for the person to say to the ill person “I appreciate that without my assistance you are going to die but I really can’t rush you to the A and E because I have been advised by my doctor not to drive”? Would they be to blame for not driving against advice?
    
The number and the direct question correlate to a switch on the board. The italic "description" is a convoluted and hard-to-consider dilemma.
    
    47
    Accept speeding tickets as casual charges?
    
    
Speeding tickets are common, and many UK drivers have had one in their lifetime of driving. Many people think them ridiculously enforced, and many warn others of speed traps before they reach a monitored road. Speeding tickets are often thought of more as an annoyance than as any morally bad action.
    
Your relationship with your dad is hard to explain to people, especially when they ask about it when they see you together. You call him by his first name, but you always have, and it seems normal. He's done some bad things and has been away for a while because of it, but that doesn't make him a bad person, right? Your mum often gets upset and you can't stand it, she needs to toughen up. Dad didn't do all those things just for her to get upset. You and her need to be strong for him to get through this, why does mum always need to make it about her? She's so dramatic, she never does anything because she's scared. Dad's not scared though, he takes risks, and sometimes they pay off, though sometimes they don't, obviously. But if mum was just more supportive maybe those things would've been OK. She must've been supportive when they first met or they wouldn't be together, right? What changed when they had me? Dad says it's all he knows and that he's good at it, so why should he get a straight job? There aren't any straight jobs anyway. Mum says the family can't keep living like this but she doesn't understand because she's just at home all day while dad's out taking the risks and making the money. She's threatened to leave him, turn him in even, but you know she won't because she's scared of losing you, that you'll hate her. But you already hate her, she doesn't know you sometimes sneak out and help dad. He's showing you the ropes and you're living up to his name, not sitting at home crying about it. You'll never be like her, she's weak.
    
This may seem more confusing, but the idea is for the user to consider an actor, the intention behind the switch, and the way they think it will function. If this is the story of the programmer, or the implied intention behind the switch, in what way will it influence the operations of the machine?
    
The switchboard is to do with agency and understanding. If I leave each switch somewhat ambiguous while also detailed and thoughtful, I can guarantee a thoughtful decision by the user.
    
Some of the switches afford things that I do not think possible, especially for a technology; something only a person can afford another person, perhaps. The board promises to do things it can't do, or maybe doesn't promise, because right now we cannot see technology quantifying complex qualities like these. This is like the promises of autonomous driving when the stuff meets real society and real humans. You might think its being intentionally complex is bad, but it is only complex because it counters the automation paradox of new technology: the more knobs and levers and buttons in a normal car, the more needs to be known and understood by the user, and the simpler the machine; the simpler the interface, the more complex the machine. The board is not complex, but brings the complexity of the technology to the surface. Bringing the ingrained issues of a technology to the surface should allow us, or the user, to examine the relationship we have with technology.
    
These are my concerns about what we want from tech development and what relationships we want to have, because progress can be naïve, and the world does not exist as a sandbox for tech grifters to reinvent the horse as a camel.
    The switchboard is both an experiment in considering agency, and a critical object about the tech industry, and information technology. 
    
    The switch board is a diagram of this complexity of moral-decision-making interfacing with technology on a personal agency level. 
    It emphasises the pre-determining of real-time decisions. 
    It is tech vanity
    
2 min: making of the final, looking back and expanding
    
Coming to the end of this, I find myself not sure if this is a project about cars, or about notes, labels and descriptions. The project dances between the ethics of service and tech design under capitalism, and the topic of personal and institutional agency in technology. I can argue easily for the switchboard being a critical object, but to be honest it could have gone much harder. I toyed with ideas for an evil robot, one that would deliberately break trust, to make a point about trust in tech. I toyed with the idea of an actual mechanism the switches control, which would act as a game of pre-determination: decide all your actions prior to the situations occurring, lock them in, and observe the doom.
That was all a bit on the nose, for a start. But the main thing is that I don't actually have a problem with driverless cars, or think they can't someday operate equal to the skill of a human. It might take great infrastructural changes; they might be a step towards other transit technology that works, skeuomorphing into something more like trams, maybe. My problem is with the why. Service and tech design and development are centred on the production of value, and the ethics behind how we award what is good and what is right are built into the layered tech infrastructure we come to depend on. The switchboard is a critical object in the sense that it vainly tries to remedy the problems caused by this approach to development. It is operationally successful in demonstrating the agency and understanding you possess, and forfeit, to many actors, individual or institutional.
    
    
    
    
    I suppose I want to say I am forcing an observation from the user with a kind of poetically absurd information overload.
    
My board assumes a disconnected road network, rather than a central control system. Toby Breckon reckoned driverless cars will need V2V road infrastructure, which would take civil and institutional work to achieve: evidence that this is not a utopian coming, just another commodity.
    
Isaac Asimov: technology for technology's sake, the automation paradox. Seen in power versus optimisation. Lack of context equates to the skeuomorph: a lack of true design, and the automation paradox.
    
    
    The shelved topic of locality
    
    the future of roads and culture
    
The inside-versus-outside competing technologies. Elon Musk does not believe in the metaverse because it would not sell Teslas.
    
This does not give an outcome, it only considers our relationship with tech, but I could have a go at determining an effect: a dystopian fiction.
    
Silicon Valley centric: how capitalism always needs to expand and find new markets, forcing this crap onto people who don't want it and for whom it wouldn't even work. Roads in America versus roads in Italy.
The locality aspect of technology. Google Maps and the idea of Waymo would destroy non-profitable communities; it is the other project I wanted to do, and I only realised at the end how the two were somewhat connected. The Google Waymo will function alongside Google Maps. There has already been a significant cultural shift in how people discover and explore their locality. Why should I ask my neighbour where a good place to shop or eat is when I can Google it? Only those businesses that pay a fee to show up first get this custom. They also tend to all look the same: trendy cafes with frame tables and heavy plank surfaces, serving coffee in ridiculously tiny cups for far too much money. On Google Maps you will not find anything that is not profitable, and Waymo furthers this, with non-paying businesses becoming inaccessible in this system. Obviously this is a capitalism problem, since anything that is not growing in value or not profitable is not relevant or worthwhile. It is only community engagement that keeps something like that alive, which is not something Google can monetise.
    Tech undermining democracy and lacking human authority.
    Not sure if this is a project about cars or labels.
I'm 21; I've only read what Sean Hall told me to.
    
    autonomist tradition, whose work mainly focuses on the role of the media and information technology within post-industrial capitalism. Franco Berardi – stems from deleuze.
    
    Anti-trust
    drudgery
    tesla bot