Telework Benefits:

To environment:

- cut CO2 emissions from commuting (commuting is about half of transportation's roughly 25% share of North American CO2 emissions, i.e. about 12%)

- https://www.ec.gc.ca/indicateurs-indicators/default.asp?lang=en&n=F60DB708-1

-- 2014 breakdown of CO2 emissions in Canada

To economy:

- increase GDP/capita:

-- increased workforce 'mobility':

--- larger cities have higher GDP/capita, and teleworking can help make smaller towns function as part of larger centers

--- home ownership ties workers to one location > longer commutes to reach employment > higher opportunity costs for employee and employer > lower net GDP

-- get more years of work / GDP out of older workers who shun long commutes and face-time games, and may have mobility constraints

-- with no unproductive commute time, workers can work longer hours and/or volunteer, creating additional value

- increase workforce participation rates by smoothing transitions to/from leave, increasing caregiver participation

- reduce on-site injuries and workers' compensation claims by separating the worker from the work site

- refactor the economy to better leverage AI and robotics, plus low-cost and specialized labor from around the world

- reduce hyper-demand for housing in mega-city cores while sustaining mega-city productivity

To worker:

- find the best career paths anywhere, without needing to leave your home town

- caregivers: work full time in career role via telework, while caregiving

- maximize lifetime savings by maximizing career earnings from anywhere, while owning an affordable dwelling outside city centres

- save time, commuting costs, and hassle

- better balance home and work life: see the kids off to school, start dinner while still working

- transition smoothly to/from leave

To employer:

a) knowledge work

- recruit from a larger region / global population for top skills

- reduce wage costs as employees experience lower commuting costs

- improve attraction and retention by offering a form of mobility that does not split families

- use virtual sessions to train AI to supplement/replace human workers

b) dexterous work

- import low cost labor from lowest wage countries

- use teleworking sessions as training sessions for AI / robotics automation

- shift labor across timezones instantly to follow rush hours

- time-slice labor across task locations to leverage robotics, AI

- separate humans from the work area for hazard and cleanliness reasons

Telework Problems / Issues:

- monitoring and efficiency wages

- face-to-face component

- dexterous task / physical component

- peer-worker competition: zero-sum games / Nash equilibria where face-time is used for one-upmanship

- supervisors avoiding recorded (paper-trail) communications, to protect their own job security

- management distrust of teleworking

Solutions to problems:

- monitoring: refactor the job from effort monitoring to output monitoring; increase training and the efficiency wage

- face-to-face: a two-window screen, where one window shows the remote worker and the other shows who the remote worker is looking at

- dexterous task / physical: a telerobot at the work end

- peer worker competition: level the playing field by holding all meetings virtually; 3D avatars

- supervisor paper-trail avoidance: audio/video/IM that is guaranteed not to be stored

- management distrust: identify concerns and address them vigorously

Steps to Telework Implementation

1. convert the job to an information-dense form

2. convert from effort monitoring to output/results monitoring

3. increase the efficiency wage with training / upgrading / value-adding

4. security-filter the information

5. do the work remotely

6. capture sessions as training data for AI (artificial intelligence) automation

Usage Profiles:

Dexterous work:

- import low-wage labor to a geographically fixed, regular assembly line

- scale-up/scale-down dexterity

- clean-room: meat cutting, food preparation, silicon chips, medical samples, surgeries

- hazardous: chemical, bio-hazard sample handling

Hazardous work:

- SWAT, IED disposal, military: walking telerobots; jam-resistant line-of-sight communications relayed along a chain of robots

- Firefighting

- Earthquake / mine rescue

- Electrical switchman

Knowledge work:

- sensitive information encoding: retinal scan during visual delivery

- economics: efficiency wages / monitoring costs, incentives, a career path equivalent to an on-site worker's, promotable

- psychology / human factors: equivalence to in-person interaction (facial expressions, body language, tone of voice, meeting performance)

- keyboard / touchscreen local inputs

Retail:

- reduce the roughly 40% downtime in retail by shifting workers virtually across time zones to follow rush hours

- greeting and help: "Hello, may I help you? ... Aisle 4 - follow me..."

Technologies:

Processes:

Worker side:

1. auto-detection that exactly one person is present

2. auto-detection that the screen privacy filter is in place

Work side:

3. auto-redacting of monitors/screens unless explicitly permitted

Discussion:

Why do you commute physically to work?

1) because AI and robotics haven't been implemented yet

2) there are still some things that need humans, especially

a) security issues

b) judgement

c) effort monitoring, where results are hard to measure

d) dealing/interfacing with other humans who are physically co-located

3) worker reluctance to telecommute:

- lose standing in internal competition for promotions

- higher risk of being laid off if 'unseen' / just a number

4) limits to virtual-presence technologies like Skype and 3D virtual worlds, i.e. Second Life and OpenSimulator

a) Security Issues

At the office, anyone could walk in on you, and if you're up to no good, you'd be fired immediately. When at home, no one is watching you. The employer may feel insecure about this, and dealing with it explicitly will help speed the transition.

Besides standard login and role/permission design and settings, the home/worker side should have not just a monitor privacy screen, but also a way to detect that the privacy screen is in place, and a way to tell if someone in the background is looking over the worker's shoulder. There should be only one person in front of the screen unless a special permission is granted. A white wall or curtain behind the worker may help a tracking system monitor this, and an infrared camera may help confirm that only one person (or animal) is present. A camera facing the screen, but offset, may detect whether a privacy filter is in place (a hole in the filter would show a distinctive pattern).
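
As a concrete illustration of the one-person check, here is a minimal sketch that counts faces in the worker-side webcam feed using OpenCV's bundled Haar face detector; the alert callback and detection thresholds are illustrative assumptions, and the privacy-filter and infrared checks would be separate detectors.

    import cv2

    def monitor_worker_camera(alert):
        # OpenCV's bundled frontal-face Haar cascade.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        cap = cv2.VideoCapture(0)        # worker-side webcam
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) != 1:
                # zero faces = worker away; more than one = someone looking
                # over the worker's shoulder
                alert("expected exactly 1 person, saw %d" % len(faces))
        cap.release()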

If a rover robot with a camera moves around the office / lab / hospital, the robot's video feed may auto-redact other screens. A standard mark can be shown at the four corners of every screen in the building so that screen regions can be detected and blacked out automatically; a button on the robot can let an insider give explicit permission to un-redact.
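
A minimal sketch of the corner-mark idea, assuming the "standard mark" is an ArUco fiducial tag readable by OpenCV's aruco module; the dictionary, the marker IDs, and the permission flag are illustrative assumptions.

    import cv2
    import numpy as np

    # ArUco tag dictionary; IDs 0-3 are assumed to be placed clockwise from the
    # top-left corner of the screen being protected.
    DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def redact_screens(frame, redaction_allowed=True):
        if not redaction_allowed:        # an insider pressed the un-redact button
            return frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Older OpenCV API; newer releases wrap this in cv2.aruco.ArucoDetector.
        corners, ids, _ = cv2.aruco.detectMarkers(gray, DICT)
        if ids is None:
            return frame
        centers = {int(i): c.reshape(-1, 2).mean(axis=0)
                   for i, c in zip(ids.flatten(), corners)}
        if all(k in centers for k in (0, 1, 2, 3)):
            quad = np.array([centers[0], centers[1], centers[2], centers[3]],
                            dtype=np.int32)
            cv2.fillPoly(frame, [quad], color=(0, 0, 0))  # black out the screen
        return frame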

Sensitive text can be rendered slightly differently for each end user, so that if a copy makes it into the public domain the source of the leak can be determined and consequences applied to discourage others.
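
One way to do this per-user marking, sketched below, is to embed the recipient's ID as zero-width characters in the delivered text; the 16-bit encoding is an illustrative assumption, and per-user variations in spacing or glyph rendering would work similarly.

    ZW0, ZW1 = "\u200b", "\u200c"    # zero-width space / zero-width non-joiner

    def watermark(text, user_id, bits=16):
        # Encode the user ID, least significant bit first, as invisible
        # characters, and hide the tag after the first word.
        tag = "".join(ZW1 if (user_id >> i) & 1 else ZW0 for i in range(bits))
        head, _, tail = text.partition(" ")
        return (head + tag + " " + tail) if tail else (text + tag)

    def extract(text, bits=16):
        # Recover the user ID from a leaked copy.
        marks = [c for c in text if c in (ZW0, ZW1)][:bits]
        return sum(1 << i for i, c in enumerate(marks) if c == ZW1)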

Copy and paste of text can be blocked on the worker's computer, while still allowing editing, via server-side rendering.
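
A minimal sketch of the server-side rendering idea, assuming Pillow: the server rasterizes the text to an image and streams only pixels, so there is no selectable text to copy, while edits flow back to the server as keystrokes or commands.

    from io import BytesIO

    from PIL import Image, ImageDraw  # Pillow is an assumed dependency

    def render_text_page(text, width=800, height=600):
        # Rasterize the text server-side; the client receives only pixels.
        img = Image.new("RGB", (width, height), "white")
        ImageDraw.Draw(img).multiline_text((10, 10), text, fill="black")
        buf = BytesIO()
        img.save(buf, format="PNG")
        return buf.getvalue()        # PNG bytes to stream to the worker's viewer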

b) judgement: on-site workers have access to other people, information, and work-site visuals, so judgement can be applied remotely if all of those are communicated.

c) effort monitoring: ideally, roles are redesigned around measurable performance. But those measurable jobs are the first to be automated with AI, leaving hard-to-measure jobs for humans, so this issue needs to be dealt with. At the office/work-site, effort is monitored by seeing someone hunched over a computer with the right apps open (no solitaire). Remote equivalents: i) a head-tracking app on the worker side, ii) a 3D rendering of head position on the work side, perhaps in a 'virtual 3D office', and iii) something that reports which apps are open on the desktop.
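
A minimal sketch of the "which apps are open" piece: a worker-side heartbeat that reports running application names along with the head-present flag supplied by the head-tracking app; psutil is an assumed dependency and the endpoint URL is a placeholder.

    import json
    import time
    import urllib.request

    import psutil  # assumed dependency for listing running processes

    def heartbeat(head_present, endpoint="http://example.invalid/telework/heartbeat"):
        # Collect the names of running applications (duplicates removed).
        apps = sorted({(p.info["name"] or "").lower()
                       for p in psutil.process_iter(["name"])} - {""})
        payload = json.dumps({
            "ts": time.time(),             # when the sample was taken
            "head_present": head_present,  # flag from the head-tracking app
            "apps": apps,                  # what is open on the worker's desktop
        }).encode()
        req = urllib.request.Request(endpoint, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=5)

    # e.g. call heartbeat(True) once a minute from the head-tracking loop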

d) dealing / interfacing with other humans

If you are a boss and your employee smiles while telling you something you don't believe, you may squint at their face looking for micro-gestures or leaks that give you a bit more information: do they really believe it, or are they 'gaming' you? It's that sort of micro-gesture information you value on a daily basis for feedback and insight beyond the words and macro-gestures of the people around you. How do you get that without being physically co-located?

i) Skype: with upload compression you might miss something (a micro-gesture or head gesture dropped from a few frames), and participants stare straight at the screen, so in a group session you can't tell who they are looking at when they speak.

ii) Second Life and OpenSimulator: 3D and low-bandwidth, but both had a major weakness: no facial gestures on the avatars.

iii) a 3D avatar in a 3D office, with micro-gestures, eye tracking, and head tracking transmitted in real time: better. You can see who they are looking at, you get the micro-gestures, and the gesture data takes far less bandwidth than video streaming, so details and frames aren't dropped and it can run all day with only small land-line data charges (a compact gesture packet is sketched after the tracking notes below).

-- desktop tracking: a headset (earphones, microphone) + face camera + head pose tracker (gyro, inertial sensor)

-- mobile tracking: the user holds the phone with the camera facing their face, and turns their body (head, shoulders, arm) to look at a specific person in the 3D room
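
To make the bandwidth claim in (iii) concrete, here is a minimal sketch that packs the head pose and a few facial-gesture weights into a roughly 40-byte UDP packet; the field layout, gesture names, and server address are illustrative assumptions.

    import socket
    import struct
    import time

    # timestamp, head position (x, y, z), head orientation quaternion (x, y, z, w),
    # four gesture weights quantized to one byte each -- 40 bytes in total.
    PACKET = struct.Struct("<d fff ffff 4B")

    def send_pose(sock, server_addr, head_xyz, head_quat, gestures):
        # gestures: four values in [0, 1], e.g. smile, brow raise, gaze left, gaze right
        data = PACKET.pack(time.time(), *head_xyz, *head_quat,
                           *(int(g * 255) for g in gestures))
        sock.sendto(data, server_addr)   # tiny compared to any video stream

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # e.g. send_pose(sock, ("192.0.2.10", 9000), (0.0, 1.6, 0.0), (0.0, 0.0, 0.0, 1.0),
    #                (0.2, 0.0, 0.5, 0.5)) at 30 Hz is roughly 1.2 KB/s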

3) worker reluctance to telecommute:

- relative standing/competition with other employees for promotions: employer holds all meetings virtually, even with those on-site.

- risk of layoff if unseen: employer sees all employees in virtual office

4) limits to virtual presence technologies (Skype, 3D virtual offices):

- Skype: you can't see the head turning, and the eyes just stare straight ahead > loss of eye and head gestures, can't tell who they are talking to; charges add up, so it can't run continuously

- 3D virtual worlds > facial gestures aren't transmitted to / shown on avatars > loss of facial micro-gesture information

Proposed fixes:

a) a pannable Skype equivalent, with head tracking

b) facial-gesture tracking and head-pose tracking for 3D virtual-world micro-gesture representation; gestures are chirped (sent as small packets) over the internet, so it can run continuously at no extra charge.