Students have to jump through absurd hoops to use exam monitoring software (vice.com)
238 points by elsewhen on Nov 9, 2020 | 233 comments


I feel so bad for the students that have to deal with this. I worked at a Community College in IT for a while. Students would come into a lab that was specifically set up for testing. The anti-cheating software was such total trash, it would crash, crash the browser, crash the whole system. Students would have to start testing over...

At least twice a month, their systems would be down, and nobody could take tests at all. We'd have to send home 30-60 students at a time, and reschedule their tests for another day. These were students who were also working jobs, and took time off of work, likely unpaid, to come in and take the test. So we wasted their time, and lost them money. Not to mention the constant waste of IT's time dealing with this trash software.

It's totally fucking criminal that these companies sell such garbage to our schools and get away with it. It doesn't matter how bad it is, it checks a box on some requirements sheet so we will never switch to something else... Not that the other options are any better.


Just last week I had to help my sister reinstall Windows because she used Respondus, and now her computer blue-screens every time she turns on her webcam. It runs with administrator privileges and appears to mess with the kernel, so a clean reinstall seemed like the only solution. It's really so much worse than I imagined; at this point I'd classify this software as straight-up malware.


This is one of the reasons I'm happy closed platforms like iOS and Windows 10S are getting more and more mainstream. They'll simply prevent these poorly written drivers from getting written in the first place.


At my uni they'd just give us unique, almost impossible assignments and let us try to survive. They'd read everyone's code and ask for comments, sitting next to each student for 30 minutes of explanation.

Once they gave us one of their research subjects on dictionary optimization, along with a compiled binary of "the best they could do themselves" to show it was possible. I got far enough (we were also constrained to 16-bit memory addressing), then decompiled their binary with IDA to find a few more tricks, and got a little bonus for telling them and reimplementing the tricks.

Another time they told us, "do whatever you want with OpenGL and we'll grade you on a curve." An enormous competition ensued between students trying to beat each other; my team ended up implementing a Counter-Strike map renderer with a skybox and full textures, fast enough to walk around, and we got the best grade.

The last project, in the 5th year, was a semantic-web video search engine a la YouTube, with fetching, indexing, auto-translation, and a knowledge graph, built over a few months by a team of 40 students who had to split into various departments, with one team acting as product owners.

I think there are ways to do exams that are meaningful and actually make you interesting in job interviews.


This could describe plenty of enterprise software too. Box-ticking crapware.


Sounds like we just need to push for an "uptime" checkbox, where uptime means tests can be taken.


What kind of cheat does it prevent?

I imagine it wouldn't be terribly hard to run the machines so that only one full-screen window is shown, or a user for whom the internet is disabled. Is that not enough?


> I imagine it wouldn't be terribly hard to run the machines so that only one full-screen window is shown, or a user for whom the internet is disabled. Is that not enough?

Much of this anti-cheating software will also run the webcam and yell at you if it thinks you aren't paying attention to the screen, all while recording everything on your screen.

And students are being forced to install this on their home machines.


> And students are being forced to install this on their home machines.

Wait, what if I work from home and my kid is sitting in the living room as well. I get recorded then? WTF???

Who owns these machines? Is it the parents or the school or the government, or who? What if they decline? Can someone explain why a child has to accept being watched on a webcam without their consent? Without the consent of their parent? And "either accept it or don't get to take the exam" isn't a choice. Mine aren't in high school yet, but I can't envision allowing my children to be scrutinized by this. Then again, who expected COVID-19. I can only hope this is temporary, but perhaps it's part of "the new normal".


> Wait, what if I work from home and my kid is sitting in the living room as well. I get recorded then? WTF???

If the software detects that you're in the room your kid fails.


Then my kid would fail, or they'd have to use their bedroom. Which is, IMO, a very private space.


Too bad your opinion doesn’t matter. That is how this stuff works, you are not the first to complain, they’ve heard every argument you can come up with before and they don’t care.


While I am not from the USA, supposedly people in the USA also care about privacy. My opinion as a parent matters in the context of my child (as I am legally responsible), and in the end what matters is the law (which is above "don't care"). I'm very curious whether this is even legal, and if so, in which jurisdictions. If I don't give my child permission to have a camera active in my private space, what is going to happen?


> If I don't give my child permission to have a camera active in my private space, what is going to happen?

Your child will not be allowed to take the exam.


I am curious about people like you. We know the answer they want, but someone like you chimes in and claims nothing can be done.

Can you explain this defeatist attitude to me, and what purpose it serves to restate the already-known negative outcome?

To me, the language you used suggested that you are already bought, sold, and controlled.


Unless you have enough money/lawyers to fight it in court, there really isn't much option.

You take the tests, with the software, or you fail.


What would you do in that situation if it were you? I would personally choose to fail rather than be a part of a system that does this, but it sounds like you embrace it?


Nobody is suggesting the system is good.

Your question is about the current process though, and the answer is if you really do not have access to a room with internet that you can use for a couple of hours, the best thing you can do is to contact your school and see what arrangements can be made to accommodate you.


I actually have gone out of my way to avoid those situations, mostly successfully, and have gone out of my way to point out to schools the damage they are doing.

However, not everyone has that option, is willing to take that option, or understands the ramifications of not taking that option.

I hate the current system. I also understand why people choose to follow it.

Edit 0: It looks like maybe this conversation has hit the maximum depth for HN? If you'd like to continue talking, please feel free to email me.

Edit 1: Turns out I was wrong. The reply timer just hadn't passed. Still feel free to email me if you like.


To call it "embrace" is unfair to the parent comment. People get coerced into accepting things all the time, especially when it concerns the future of their children. Very few people have an "all or nothing" mentality when it comes to resisting the establishment, especially when the cost of accepting is relatively minor. In this case, accepting the anti-cheat has a very low cost: enduring crapware and spyware for a little while.

It's simply a compromise.


> What kind of cheat does it prevent?

Not much. When lockdowns first started I remember seeing _tons_ of first and second hand accounts of all the ways anti-cheating software failed when exposed to the real world.

> I imagine it wouldn't be terribly hard to run the machines so that only one full-screen window is shown, or a user for whom the internet is disabled. Is that not enough?

It can be a little tricky. We put the test in a single full-screen window on top of any other windows, and...the students alter the transparency to view their notes behind it, or they notice that we only check windows in the current desktop or for the current active user. Maybe "full-screen" on the system in question only fills a single monitor and leaves another open for research.

All of that assumes that the student is only using a single device and doesn't have a hi-def pet silicon rock in their pockets at all times, hence the webcam monitoring and whatnot to check that a student is actively staring at the screen the whole time.


I'd fail if I had to go through this. When thinking hard about a problem, I like to clear my mind of visual distractions by looking out at the sky and if I'm not near a window, simply close my eyes.


Next step: Webcam device which simulates student watching the screen :)


This already exists for anyone sufficiently motivated. It's a video input playing back a prerecorded video showing absolute attention, with the device IDs coded to pretend that it's a video camera rather than a generic video input (e.g. HDMI input)


Now that I come to think of it... this reminds me of when I was in middle school and we used a bug in a Chinese IME to crash the software used to control our machines.

The webcam solution also seems entirely bypassable if the student is determined. (Speaking from my experience as a TA, students who cheat are usually pretty determined) Just put a phone to the level of the monitor, and there's nothing a webcam can do.


Taking the test in shades is allowed, is it not?


At my university in Russia, we had a special testing classroom with these weird Sun thin clients that only ran Firefox in a very minimalistic window manager in a unix system of some sort. Said Firefox would only open the website of the testing system.

So, obviously, everyone just used their phones for cheating lol.


Can't they have a KVM switch connected to a second computer that could cheat while seemingly appearing like one is still using the test computer?


I just went through this kind of experience on Friday, taking my Oregon real estate license exam (I passed, now will be dual licensed in WA and OR!).

Having to scan the room, scan my ears, and scan my "desk" (or in this case my crotch, because I was using a laptop) was all very silly, but didn't really bother me. The worst parts were all very human: they sent me the wrong link for the test, shared incorrect information about the allowed materials (i.e. scratch paper, a four-function calculator, etc.), and then I waited more than 20 minutes for the proctor to finally show up. Then of course there's the problem that the test provider has wonky, weird software, including forcing you to install their "secure" test software, which then yells at you for having open the very browser it required to start the install process.

I guess what I'm really saying is - I find it funny that this is deemed "secure" when really I suspect most people taking a test have the same emotions as myself - we just wanna get to it and get it done.


Congrats on passing, by the way. Nice work.


To me, this underscores the notion that schools continue to test for the wrong thing. With the exception of professions/trades that require impulse application of knowledge, I don't know why I would want/need someone to memorize a concept in order to apply it. Productive members of our society/workforce think critically, they ask good questions, and they research and leverage the information at their disposal to make data-driven decisions. Let's test for that.


I teach programming. What prevents students from colluding over their tablets/mobiles/laptops while taking the exam from another device? Can't the student mirror the screen elsewhere, leading to interesting possibilities for cheating?

Proctoring is not a solution that I am comfortable with. I do not want to peer into the private lives of students and their home environments. Not every student is well-off, and has a private space all to their own for 3 hours.

I think taking tests from home does not really work with any of the models I have seen discussed this year. Cheating is real. It has nothing to do with rote learning. I am out of ideas which are foolproof.


The main problem, of course, is that receiving a credential and receiving an education are very different things that we do at the same time out of tradition and convenience. When testing was easy, it made sense to lump it in with teaching, but now that testing is impossible, I say we ditch it for the time being.


Could you make your class project-based and give everyone different, but equally difficult, projects of their choosing? Each check-in to master would then be merged after a code review with you, where you ask questions about why implementations solve the problem in specific ways, or ask for refactors. Code is only merged when it hits a quality bar you set. Students are graded by the percentage of deliverables hit from the original project spec.

Ex: the student is to implement a K&R C compiler + code generator in one semester. Help them break the project into manageable chunks (tokenizer, AST, register allocation, etc.). Each feature includes tests, comments, and docs, and is presented to you by date X. Code review and cycle as many times as needed, as long as it gets in by date X. By the end of the class the student hit 100% of their deadlines but only passes 70% of some test C files you create, so they get a 70% (or higher, if their code was consistently good and did not need cleanup) in the class, or something.

It would never work because "you need to have a final exam" but it's the ideal I think we should strive for if you are "teaching programming" since this is how most work I've done in the real world has gone.
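To make the "manageable chunks" idea concrete: the first milestone (tokenize) can be a sketch this small. Everything here is illustrative, a toy token set I made up, not any real course's spec:

```python
import re

# Toy tokenizer for a C-like language -- roughly the first milestone
# ("tokenize") of the compiler project sketched above. The token names
# and the tiny grammar subset are hypothetical.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=;(){}]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    """Return (kind, text) pairs, skipping whitespace."""
    return [(m.lastgroup, m.group())
            for m in TOKEN_RE.finditer(src)
            if m.lastgroup != "SKIP"]

print(tokenize("x = 42;"))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', ';')]
```

A first deliverable like this is small enough to review in minutes, which is what makes the per-feature review cycle plausible at all.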


This will never work because it absolutely fails to scale, at a time when we have a complete lack of computing educators. Customized projects for each student? I have 159 students in my CS1 this semester, and I can barely keep track of them with the massive amounts of data I collect. The idea of making individualized assignments is beautiful but unrealistic.


Most code reviews I've done at work take between 5 and 10 minutes. Assuming the worst possible case of 200 students, 20 deliverables per student, and a 10-minute meeting per MR, you'd be at ~28 days (~84 8-hour days). Get 1 TA and you now have 42 days of the semester dedicated to 1:1 time (35%), and the remainder of the semester dedicated to producing learning content. Learning content (lecture videos, etc.) can be produced ahead of time, and in that case you can replace it with more 1:1 tutoring time, effectively allowing you to give each student ~10 hr (10 min * 20 sessions * 1/.35) of direct 1:1 time over the course.
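Those numbers do check out; a quick sanity sketch (same figures as above, nothing new):

```python
# Sanity-check the grading-time arithmetic: worst case of 200 students,
# 20 deliverables each, 10 minutes per review.
students = 200
deliverables = 20
minutes_per_review = 10

total_minutes = students * deliverables * minutes_per_review
eight_hour_days = total_minutes / 60 / 8
print(eight_hour_days)            # ~83 working days for one grader

# With one TA splitting the load, each grader covers half:
print(eight_hour_days / 2)        # ~42 working days each

# 1:1 time per student if reviews are 35% of instructor time:
minutes_per_student = minutes_per_review * deliverables / 0.35
print(minutes_per_student / 60)   # ~9.5 hours, i.e. the "~10hr" figure
```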

From the student's perspective, all they need to do is:

1. Write a "project proposal" where they define a project, define goals, and schedule a check-in with you to see if the project is large enough (1/20th of their grade)

2. Watch lecture videos and go through exercises on their own time.

3. Write code and bring it to the teacher. (18/20ths of their grade)

4. Demo their finished project to the professor (1/20th of their grade)

This does work for later classes. I never had the ability to test this workflow out for earlier (100-level) classes as all of those classes I TA'd for followed the same model you're talking about.

The 200+ level classes that followed the model I'm talking about had a ~10% to 30% pass rate, which was within ~2x of the pass rate of similarly leveled courses from our college's Math and Physics departments, used as a sanity check.

edit: that's not to say you're incorrect that it would be difficult to do at scale, this is just the way I've seen it done at moderate scale (20 to 40 students).


It's easy (even a bit fun) to grade and give feedback to good students. Meanwhile, it's a huge pain to do for the weak students. You first need to figure out what the heck they're trying to do before you can tell them how to fix it, which can take a while. I suspect code review for professionals is much more like the good-student category.

If you're asking the students to define unique projects themselves, you run into new problems as students dig up the most obscure blogs they can find on the internet to download and give you. It's a never-ending adversarial struggle.


You'll be surprised at what "weak" code you'll see in the real world and have to explain to someone it's not maintainable or clean or whatever.

This is especially true of junior data scientists (in my anecdotal experience so far)


Code reviews I have done for unrelated bits of code often take me 90 minutes. And that's for code that fits into the idea of my product space, but not super familiar to my everyday life. I don't think 5-10 minutes is anywhere close to reasonable.


The hard part is giving the student meaningful feedback. Especially to those who don't understand something important. It takes time to understand what they do and don't understand, and explain it at their level.


I teach classes of ~500 students and classes of ~100 students.

While having individual projects is very nice and it works for smaller classes, it does not scale to average students.

I have once used individual projects to grade the ~100-student class and the amount of effort it takes is unbelievable.

For example, just your point "1.", write a project proposal, is not so easy. It takes hours to write a good proposal, and several back-and-forths between instructor and student to agree on a reasonable (not too advanced/not too basic) project.


10 to 30% pass rate? And that was very high compared to other departments?

Do you mind sharing roughly where this was (country and quality of university)? I thought we were in an era of grade inflation where everyone gets an A.


This was a 300-level trial course, compared to 300-level trial courses from those other departments. The hope is that when they become "real" courses the pass rate is higher. I think this counts people who re-took the course or who dropped it as failing; we had ~50% of our class drop within the first month.

Part of it was related to their expectations of the course: "you mean I have to write code and the class is based off of that and not memorizing test answers? I'm out"


I think it can be automated in the more basic courses at least. But wouldn't solve the problem of students getting someone else to do the project for them anyway.


So much this: CS61A at Berkeley, for example, has almost 2000 students in a single class. The idea of personalized anything with that many students is totally unfeasible.


I thought in the US people paid money for going to university, so that classes are well-funded enough... /sarcasm


Berkeley is a public university, so many students qualify for federal aid and wind up paying very little or nothing at all. The full cost of in-state tuition, for those who can afford it, is about $6,000 per semester.


This is ridiculous. Why do Americans pay thousands of dollars for a slightly fancier version of a MOOC? When I studied, we had 200 people in a course sitting in lectures, 30 in practical sessions, and everyone could have a personal session during the professor's office hours.

There is some kind of a massive waste of money going on.


I'm genuinely curious to learn your opinion on cheating - does it actually matter?

What I mean is - certainly you don't want to harm the reputation of your school, and you want your students to have actually learned the material and be prepared for whatever they do in life that requires that knowledge... but what I'm wondering is... if there's presumably some tiny percentage of a class that is going to actually cheat or act in ethically questionable ways, are we in fact putting more effort into the prevention of that cheating than is warranted?


One third of the students on my last exam cheated (at least, these were the ones that gave an identical answer to one of the problems, including identical minor typos). I was using the tools described in the article, but apparently I was being too lax about enforcing the 360 degree views or something as they clearly colluded to find a way to share answers anyway.


I feel like this hits the real problem: figure out a better way to prevent cheating, and a better cheat will be made.

The effort should instead go into making cheat-resistant material.

(Unfortunately I don't know how to do that, but I feel like the latter is where the energy/brainstorming should be spent.)


> presumably some tiny percentage of a class that is going to actually cheat or act in ethically questionable ways

According to the International Centre for Academic Integrity, from 71,000+ US students surveyed between 2002 and 2015 at least 68% of students admitted cheating in some way.

https://www.academicintegrity.org/statistics/


Other 32% lied /s


Of course it matters. In a class where cheating is permitted, the students who choose to cheat will perform better than they otherwise would have. That will give them an advantage over the students who choose to not cheat.

I personally experienced this in undergraduate with a professor who chose to ignore obvious cheating and I think it negatively impacted the motivation of students to learn the material.


Probably out of scope due to time constraints, but having students explain their solutions and code should give a glimpse of how well they understood the principles. So if you have some time to kill...


How about actually treating them like respectable adults and using the honor system? If you start treating people like dogs, don't be surprised when they bite you. Woof.

Stop giving bullshit tests with bullshit restrictions, which is usually what drives people to cheat in the first place. Have you considered asking: what can I change so my students are able to successfully learn the material such that they won't feel compelled to cheat? Recognize that cheating students are a reflection of your apathetic educational strategy and ineffective testing model.

Here's an example of a bullshit test which exemplifies a few of the issues with apathetic educators: single-try tests. Students are only given a single attempt to demonstrate they've learned the material, with the result ultimately reflected in the final grade. The apathetic educator is unbothered by having failed to adequately prepare his students, despite knowing their inability to accurately self-assess in an unfamiliar topic. A caring educator would provide a way for students to diagnose and fix any gaps in their understanding, as well as a path for them to raise their grade to an A if they demonstrate an understanding of all relevant course topics.


Memorization is absolutely crucial to getting good at math. In fact, I don't think there's any other way to get good at math besides memorization and doing problems, over and over and over again. It's rote, sure, but it has tremendous value.

If you think you understand something, but can't actually solve problems without referencing anything, you don't actually understand it.


I have a degree in math and I strongly disagree with this comment. I was always terrible at memorizing things, and I still regularly reach for Google to double-check formulas and theorems I should know well. In practice, I think a lot of math people end up memorizing things the same way programmers memorize the syntax and tools of their preferred languages; it's not necessarily something you explicitly set out to memorize, but you use them so often that you end up internalizing them anyway.


Not going to argue math, but that final statement is a poor way to frame understanding/learning. You can understand something well enough to solve it from memory a few times when in close proximity to the time you learnt it. On the other hand you can understand something well enough to know how to check the references and solve it for the rest of your life.

That is, it is better to understand how to solve problems with references than without. You'll forget most of what you know, but once it's recorded, it can't be lost. Then you just need to know it exists so you can find it.


This is absolutely backwards.

No amount of reference sheets will help you with mathematics if you don't understand something.


I think you have it backwards: you only understand something after grinding out problems and memorization.


Anecdote time: never have I ever understood a concept through repetition alone. Grinding problems is for building patterns from different input sets, not unlike ML. It's true that it sometimes "clicks" after a threshold and you actually understand it, but usually what grinding does is reinforce the algorithm without any actual understanding - e.g. surely you know that c squared equals a squared plus b squared; you've probably ground through that enough to memorize the equation. Can you prove it, or at least explain why it is correct? Because I can't, not even after repeating it a million times. That's the difference between memorization and understanding.
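For what it's worth, the proof the grinding never teaches is only a couple of lines: place four copies of the right triangle (legs $a$, $b$, hypotenuse $c$) inside a square of side $a+b$ so they leave a tilted square of side $c$ in the middle, then compare areas. This is the classic rearrangement proof, not anything specific to the parent's schooling:

```latex
\begin{aligned}
(a+b)^2 &= 4\cdot\tfrac{1}{2}ab + c^2 \\
a^2 + 2ab + b^2 &= 2ab + c^2 \\
a^2 + b^2 &= c^2
\end{aligned}
```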


> Memorization is absolutely crucial to getting good at math.

I have a terrible memory and I excel at math (and majored in physics in undergrad) precisely because it doesn’t require memorization.

I don’t know anyone who is good at math who operates by memorization.


I think this is only half true. Of course, if you want to be good at maths, you will have to remember things because you can't always look up or re-derive every smallest detail (and because you have to become good at pattern-matching at some point). But the memory tends to work best if it's activated by deliberate practice. For example, instead of memorising a proof line-by-line it's much better to try to remember the key points and then practicing filling in the gaps. If necessary, repeat this process several times. When I was studying for ODEs, I tried to re-derive many of the standard techniques by trying to remember which trick to apply instead of trying to memorise the formula.


^ This. For my Mechanical Engineering undergraduate courses, almost all of the formulas (models, actually) could be re-derived from linear approximations on infinitesimal elements. Memorize and internalize how the model is derived, practice re-deriving the model, and you can re-derive the formula during the exam, and do a bit of hand-wavy intuitive double-check of your answers. If you've only memorized the formula, you're going to have a very tough time coming up with an estimate against which to check your formula's answer.

On a side note, I really wish we had more emphasis on the conditions under which the linear approximations broke down. I remember sitting at the front of the MIT 2.002 class, and there was a demonstration of metal fatigue using a hydraulic press at the back of the classroom. Professor Sanjay Sarma stepped up to the front row in order to better see the demonstration at the back, and so I asked him about my intuitions about which way the model diverged from reality under vibration frequencies high enough that the quasistatic assumptions built into the model broke down. He looked to both sides of us and told the students on either side of me not to listen because they might get confused, and then we had a little discussion about conditions beyond which the model applied and which way the model's error went under those conditions. It was simultaneously one of the best and most disappointing moments in my education. It was an exciting discussion, but I was sad that the world beyond the linear approximations was considered to be likely too deep a rabbit hole for most of the class. Sanjay (as he preferred to be addressed) was an excellent educator, and I'm sure his judgement was based on past experience... each semester has a given complexity budget, and the field of Mechanical Engineering is so broad (statics, dynamics, thermodynamics, fluid dynamics, mechanisms, control theory/sensors/OpAmps, manufacturing techniques, design for mass manufacturing, numerical process control, destructive/non-destructive testing, etc., etc.) that undergraduates need to spread a limited complexity budget across so many subjects that they can each only be covered relatively shallowly.


I'm very curious about the type of math you have learned this way. Every math class I took was heavy on repeated exercises and memorization, but real understanding didn't come until outside of class (or after the semester), when reflecting on larger ideas led to mental connections and analogies. I'll freely admit that I have to look up the integral/derivative of simple trigonometric functions every time, despite every attempt by the instructor to hammer those in during Calc II. Doing hundreds of integrals on homework and study guides kind of worked to learn those rules, but only imperfectly and temporarily. Doing hundreds of exercises over and over again has been even less useful in my later math classes. If some proof doesn't make sense, the next step is not to go through 100 examples; it is to break the pieces down into logical structures that are recognizable. All real understanding of math fundamentally works this way. (Disclaimer: my math past calc/linear algebra has been in CS classes, so I'm definitely open to hearing how those from other backgrounds or with more formal math education disagree with me.)

If you think you understand some mathematical rule, but can only show 100 problems from memory, you don't actually understand it that well. If you can convincingly show why no counterexample exists, then you understand it in the strongest possible way.


Doing the same problem again and again does not do much to learn math. Doing various problems again and again is necessary, but that is not memorization. Because you are not actually remembering things.

Also, it was not that unusual for exercise-based university math tests to allow references, precisely because references do not matter all that much for difficult exercises; remembering everything is not the point.


Is there not an in-between? I used to be very good at maths, but a year out of practice and I might struggle at some topics. I still retain the intuitive understanding and that grants me a rapid path back to being able to solve problems.


The intuitive understanding that you still retain was only built up by repetition and memorization in the first place, no?


Absolutely not. I honestly can’t identify with your comments in the slightest. The only part of math that I ever learned by memorization was addition and multiplication tables in elementary school. Every subsequent revelation was based on pure understanding - zero repetition required. Up to what level of math have you studied? I honestly can’t imagine anyone thinking they’re good at undergrad-level math if they do it by memorization rather than understanding.


The great-grandparent of your post (the one who initially brought up the necessity of rote learning) is a quant at a market maker.

Anecdotally, my favorite professor from undergrad (born in China, PhD from the best department in his field [my personal opinion]) said he thought the reason for Russian/Chinese dominance in certain areas of math was due to how those areas benefitted very much from rote practice. He advised all of us (American undergrads) to drill and kill certain techniques in order to build up our pattern matching.

I don’t think they’re advocating doing hundreds of worksheets on the power rule or trig substitutions or memorizing line by line proofs. Our brains do follow formal rules when doing math, but the insight necessary to find a way to solve a problem that isn’t straightforward isn’t through application of rules, it’s through a tacit intuition that you build up by doing lots of math. There is no other way.

It’s like how everyone feels like they understand physics to a PhD level while watching the Feynman lectures, but if you were to hand them any of the problems afterwards, what seemed like such a natural stream of thought is just simply out of reach. It’s much easier to go over something and declare “this makes sense” than it is to come up with that something in the first place.


+1 on the drilling specific techniques. Any skill is reinforced by consistent targeted practice, and to think that math is an exception where you can break through with pure genius is just deluding yourself.

The only way I got through my undergrad math was by doing problem set after problem set until the concepts were second nature, and the courses that didn't have a sufficient breadth of exercises to drive home fundamental concepts ended up being the ones I struggled with the most.


> break through with pure genius

Applying derivations that someone else invented doesn’t require “pure genius”; it only requires you to be able to follow the person who discovered it, which is a lot easier than finding it yourself.

> The only way I got through my undergrad math was by doing problem set after problem set

It sounds like you didn’t really understand what you were doing then. It’s hard to phrase this without just sounding like I’m bragging, but I never had to practice doing anything I actually understood. If I felt like I needed practice, that was a sure sign I didn’t get it, which I always tried to fix with careful thinking instead of repetitive memorization.


> It’s much easier to go over something and declare “this makes sense” than it is to come up with that something in the first place.

Obviously, but that’s not what we were talking about. We were comparing memorization to understanding, not inventing to learning.

If you’re good at math, you should be able to re-derive any formula or procedure quickly (up to, say, constant factors) without having to memorize it (after the derivation has been explained to you).

If you run into a problem that you can’t solve because you didn’t drill the steps hard enough, you don’t actually understand the problem. This isn’t necessarily your fault - many math courses teach by symbolic manipulation without the conceptual grounding required to actually re-derive the symbolic procedures yourself. Few students will seek that understanding on their own outside of class, in which case they’re stuck with memorization.

> the reason for Russian/Chinese dominance in certain areas of math was due to how those areas benefitted very much from rote practice.

I think this supports my point - these “certain areas” are small. There is a relative paucity of mathematical/physical innovation from China (especially per capita!). The west still dominates mathematical invention.


Repetition to wire up thought processes and logic (why do X instead of Y) != Arbitrary memorization of trivia (X formula or Y proof)

The difference between learning X and understanding X


Training builds the common tools and pattern recognition for things like algebra. At higher levels you really need a repertoire of tricks to apply to integrals and derivatives... I never properly learned these at the university level.

Often the problems had some trick you had to apply to solve them, and without knowing those tricks from routine practice, finding the right solution was pretty hard.


I had a math teacher that explained math in two categories.

Either you will understand permutations and combinations almost instantly, or you will have to do tons of examples.

Likely not applicable to all kinds of math but relatable to some concepts for sure


Calculations are based on memory, but everything else not so much, in my opinion. Sure, some methods for solving common problems are memorized.


while this sounds good on the surface, i think memorization is much more important than you give it credit for. imo you absolutely should have key concepts memorized if you have any real understanding of a subject. and as you become more of a specialist, your bar for what constitutes a "key concept" should rise.

Being able to search things up is well and good, but don't you run into situations where you don't even know what to search for in the first place?


You can’t get intelligently conversant about a thing without having the key bits in your mind, you’re right. But you don’t need to have as much memorized as we were tested on when I was in school (90s-early 2000s). And you need a lot more than just memorization to reach real understanding.

Every few years I need the quadratic formula for something, and I just derive the thing instead of remembering or looking it up. I’ve essentially traded some memorization for some understanding. We’re surely going to get the balance wrong somewhere, so I’d err on the side of too much understanding and too little memorizing. If you have a real feel for how a thing works, it’ll stick with you longer than the date of such-and-such battle.
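For what it’s worth, the derivation really is short: it’s just completing the square.

```latex
ax^2 + bx + c = 0
\;\Longrightarrow\;
x^2 + \frac{b}{a}x = -\frac{c}{a}
\;\Longrightarrow\;
\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}
\;\Longrightarrow\;
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```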


> and I just derive the thing instead of remembering or looking it up.

Most people can't do this or have a very hard time learning much easier things.

I find that the crowd that talks about "pointless learning/testing/education" are often either ones that struggled mightily and were never really that smart, or they are so smart that they are above it.


That’s a bad example I guess. For something I struggled more with, take history. When I was in school it had a huge focus on trivia like dates, while I’m now fascinated by it, focusing on the cause and effect of things. But you can’t easily assess someone’s understanding of that side of things in a standardized manner, so it gets short shrift. Success is defined by the factors that least matter, and students, wanting to spend their efforts efficiently, will focus on the measures that define success.

As for the “most people can’t derive the quadratic formula,” you might be right about my blind spots, but I think it’s equally likely that I’m right and have the necessary point of view to see that most people can’t do it because it’s taught and tested poorly. Both explanations would equally explain it being easier for me to derive the formula than memorize it.


History is certainly a better example, but dates DO matter - you can't talk about cause and effect if you don't know when things happened. Now perhaps the minutiae aren't quite as important - but what better way to learn the chronology than knowing the dates? It's fundamental to it.

> to see that most people can’t do it because it’s taught and tested poorly.

Or they just don't want to learn it. Some people don't like math. Or maybe people are lazy - they like it but can't overcome procrastination to really learn it. Or they are more focused on something else, like I was as an adolescent (computers.) There are certainly people who got sick, or went on vacation during that week of school, etc.

My point is, we always like to blame things that aren't actionable, i.e. "the system." It was just "taught poorly." There are certainly cases of that being the truth, but if you look at the time constraints and all other details, it's hard to just blame the system. How do you actually fix the system?


> There are certainly cases of that being the truth, but if you look at the time constraints and all other details, it's hard to just blame the system. How do you actually fix the system?

Yeah, given all the constraints, I agree completely. My complaint is that there is a giant emphasis on testing in really scalable manners that take people and dialogue out of it. So which constraint is the system (the way things are taught and tested) most sensitive to? I expect that it's teacher to student ratios, which my thesaurus says is a synonym for money, simultaneously the easiest and hardest constraint to change :-(


I guess then the point becomes - in the age of computers, what's the value in memorising the quadratic formula without understanding where it comes from?

It's much more important to know that there is a quadratic formula, or more fundamentally that every quadratic equation has 0-2 roots (exactly 2 if dealing with complex numbers) and what the different cases look like (does the parabola touch or intersect the x-axis?). It's typically more important to be able to solve a quadratic equation through guessing, factoring or completing the square, because these things teach you something about how mathematics works. Regurgitating some formula serves no purpose; you can just ask Wolfram|Alpha instead. And even if you can't complete the square etc., I'd much rather people understood the conceptual side of it instead of remembering formulas.
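The conceptual side is small enough to sketch in a few lines: the discriminant alone tells you whether the parabola crosses, touches, or misses the x-axis.

```python
import cmath

# Classify and solve a quadratic ax^2 + bx + c = 0 via its discriminant.
# Illustrative sketch; assumes a != 0.
def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c
    if disc > 0:
        kind = "two real roots (parabola crosses the x-axis)"
    elif disc == 0:
        kind = "one repeated root (parabola touches the x-axis)"
    else:
        kind = "two complex roots (parabola misses the x-axis)"
    sq = cmath.sqrt(disc)  # complex sqrt handles all three cases uniformly
    return kind, ((-b + sq) / (2 * a), (-b - sq) / (2 * a))

print(solve_quadratic(1, -3, 2))  # roots 2 and 1
print(solve_quadratic(1, 2, 5))   # complex pair -1 ± 2j
```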


I don't think it has anything to do with computers. Tests have had formula sheets for decades, the quadratic formula would typically be on there.

The importance has always been "how its applied." Tons of people failed classes despite knowing the formula. It has always been about the basic application.

I don't think many math tests consisted of "write down the quadratic formula" and that's it.


The point is more that before computers, you could at least make a somewhat reasonable claim that it's useful to know the formula in case you need to solve a quadratic - even if you don't understand the formula.

But nowadays, you can just feed the equation to Wolfram|Alpha (or Sage, or Mathematica, ...), so there is no point in blindly memorising formulas.

And I don't think I've ever had a formula sheet in one of my maths exams in school...


When you understand and are familiar with something you get the relevant facts in your head as a side effect. If you put them there deliberately, through the activity of memorization, that is not understanding.


From what I remember of uni, most of the tests I took were not testing whether you had memorized a concept, but whether you could apply it.

Sure, the situation was still extremely contrived, but it wasn't simply regurgitating facts.


Same at my university of applied sciences in Switzerland.

For example, you learn depth-first search, and in the exam you have to store some information at each step and print it at the end.

You have to learn different proof schemes and use the easiest one for a given hypothesis. With the wrong one you can sometimes end up in endless recursion, for example.
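The kind of exam task described above (run a DFS, record something at each step, print it at the end) might look like this sketch:

```python
# DFS over an adjacency-list graph, recording the visit order at each step
# so it can be printed at the end -- the bookkeeping the exam asks for.
def dfs_visit_order(graph, start):
    order, seen, stack = [], set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)  # the "information stored on each step"
        # Push neighbours in reverse so they are visited in listed order.
        stack.extend(reversed(graph.get(node, [])))
    return order

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(dfs_visit_order(graph, "a"))  # ['a', 'b', 'd', 'c']
```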


The problem is less that students can look up answers online; it's that they could hand the laptop to someone experienced and have them complete the exam for the student.

An ideal situation would be allowing students to use online resources but not to communicate with other people during the exam.


My university in Germany uses this approach. We have to livestream ourselves from the side so the supervisor can see the whole desk, before each exam we have to authenticate with our Student ID so nobody else can write the test for us and we are allowed to use everything - except direct communication. This form of testing is called "Open Books" or "Open Material". Before the pandemic we were allowed to bring our books and even handwritten sheets to the exam. What I like about this is that you don't get points for just remembering things. On the other hand, this probably wouldn't be possible in subjects like economics where you have to remember a lot.

Edit: typo


Designing tests for open book seems to be a good way to

a) make cheating ineffective (no need to sneak in a formula sheet when everyone has it)

b) make them more like the real world - I always have Google at my fingertips; what matters is how efficiently I can use Google and my existing knowledge to solve a problem


Thinking critically on the subject: it also prevents the problem of having someone take the test for you by feeding you information through an earpiece or chatbox on a browser tab. In places where testing fraud is rampant, this is a common form of fraud actually.

So even a well designed


Back in the good ol' days, you could just take the exam inside of a VM. The VM is locked down, but the host isn't.

Looks like from ProctorU's docs[1] that they got around to probing for the (blatantly obvious[2]) signatures of a VM and stopped that trick.

I wonder if those signatures exist in a Windows Sandbox[3] instance, or if you can detect/block the Enhanced RDP Session Windows leverages when running a Sandbox...

[1] Under "Not Supported" at https://support.proctoru.com/hc/en-us/articles/115011772748-...

[2] https://bannedit.github.io/Virtual-Machine-Detection-In-The-...

[3] https://docs.microsoft.com/en-us/windows/security/threat-pro...
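On Linux, one of those "blatantly obvious" signatures is simply the hypervisor CPUID bit, which the kernel surfaces as a flag in /proc/cpuinfo. A minimal sketch of such a check, with the parsing factored out so it can be tested on a string (real products check far more than this):

```python
# Check whether a /proc/cpuinfo dump advertises the "hypervisor" CPUID flag,
# one of the most obvious signs of running inside a VM. Sketch only.
def flags_indicate_hypervisor(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "hypervisor" in line.split()
    return False

sample = "processor : 0\nflags : fpu vme de pse hypervisor\n"
print(flags_indicate_hypervisor(sample))             # True
print(flags_indicate_hypervisor("flags : fpu vme"))  # False

# Live check on a Linux host:
#   flags_indicate_hypervisor(open("/proc/cpuinfo").read())
```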


Many of the onerous requirements mentioned in the article are about the test taker and the environment they are in, not the computer environment.

A VM doesn't help you when your professor is requiring you to rearrange your room, set up mirrors, ditch earplugs, etc.


That's exactly it - all of the onerous requirements are explicitly not about the computer, because the professors are presuming the software has that covered.

Except higher education software is known more for milking its market than for its robustness and sophistication. The type of proctoring software used here was trivially bypassed only a few years ago, when the last of my friends were finishing up college. I haven't been exposed to the topic since the last of them graduated, and from ProctorU's help center it looks like the vendors have wised up to the VM trick (in some fashion). But it can still likely be subverted relatively easily, rendering all of those onerous environmental requirements moot, since they all explicitly ignore the computer itself.

I sort of wish I had access to it, so I could probe around myself and write up a tutorial. Not that I'm an advocate of cheating, but more so as a big "F* You" to the entire draconian process.


I would record a video in advance or failing that flunk out. This calls for absolute rebellion against the system


The videos tend to be random, in both operations requested and their order.

If you can generate convincing real-time fake video based on verbal commands like “ok, now show me behind the desk… now your left ear” you probably don’t need to pass an exam any more.


My rebellious streak would implore me to make the process as awkward for the proctor as possible. Get creative.

Not that it would really help anything.

Tests, like software interviews, are a poor way to determine an individual's ability in anything but synthetic tasks. They are cheap, though.


I'm the kind of person who dropped out of college anyway, so it really doesn't matter. But I would have told any professor pulling this nonsense to get stuffed.

This is far less about cheating than petty power tripping. (They should, of course, feel free to abuse each other in what ever ways they find enjoyable. But this is really not my thing.)


This would almost certainly not work and you'd just get a 0 on the exam

Can I ask, in a totally non-confrontational way, are you successful in your field and used to getting what you want? Based off of nothing other than this post I would guess that is the case


Just get a monitor that supports Picture-in-Picture and use two computers instead of one.


Or an HDMI switcher, and switch between your main PC and an (easily hideable) Raspberry Pi.


Wow that’s creative.


I would be in favor of a VM that is indistinguishable from a non-VM host. Is there any project that helps bring such functionality to, say, VirtualBox?

Not because I encourage cheating, but because I should never have to install proprietary software on my personal machine to take an exam. I will, however, on occasion grudgingly be okay with installing closed-source software inside of a VM.


Malware often tries to do the same "am I running in a VM" checks to evade researchers' attempts to analyse it, and of course people want VMs in general to become indistinguishable from physical machines, so you're in good company.


Looks like there's one here: https://github.com/hfiref0x/VBoxHardenedLoader/blob/master/B...

Alternately, it seems that QEMU on linux does a pretty good job as well.


Do you have any proprietary code running on your machine, or would you just not install any to take an exam?


I run Linux and I don't believe I have any closed-source code on my machine at the moment that's not inside of a VM except for NVIDIA drivers.

I usually use VMs if I need to run anything closed source.

You don't need to be entirely vegan about closed-source to care about this issue though; there is a notion of a person trusting certain companies' closed source code over others. That is a thing. For example I would probably trust something like Photoshop slightly more than some random exam company's outsourced exam app.


I wonder how feasible it is to remove such fingerprints - find a way to make the VM be functionally indistinguishable from the host.


This is an old problem for reverse engineers, and there are some solutions, for example VBoxHardenedLoader [0]. As per usual with such things, these tools are in cat/mouse category.

[0] https://github.com/hfiref0x/VBoxHardenedLoader


That's interesting - I would have guessed that after a certain point, a VM and a physical computer are indistinguishable from the POV of software running inside.


It's easy to make the core device indistinguishable. It's the additional hardware that becomes a problem: QEMU/libvirt hardcodes its hard drive model to include the word "QEMU" (although there is an unmerged patch to make that configurable), and although I don't know how graphics work in a VM, it's far easier to create a higher-level graphics device than e.g. emulating Intel graphics.


I imagine a script could be used with qemu - given that patch you mentioned, which I would love a source for if you have it - to match the names of the virtual devices with names of devices on the physical host. Then there's no way for software to check the device names against a list of VM tools, since it always matches real, physical hardware.

At that point, it seems to me that there isn't much left to distinguish the virtual machine from the physical. Behavioral properties of the CPU? Anyway, that's what I meant when I said it seems like after a certain point, it becomes impossible to tell the difference.

EDIT: based on the link above, it looks like current state-of-the-art doesn't go far beyond making sure names and common virtualbox performance shortcuts aren't present. https://github.com/hfiref0x/VBoxHardenedLoader/blob/master/B...


One of the other telltale signs of a VM is "things VMs can do that physical devices can't" - such as weird screen resolutions (when running in windowed mode rather than full screen), weird CPU core counts, physical RAM values that aren't cleanly divisible into DIMM slots, etc.
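The RAM heuristic is easy to sketch: physical machines are almost always built from power-of-two-GiB DIMMs, while VMs are often assigned odd totals. A toy version of the check (thresholds and logic are illustrative, not from any real product):

```python
# Heuristic: flag RAM totals that don't decompose into standard DIMM sizes.
# Sketch only; real detectors combine many signals.
def ram_looks_virtual(total_mib):
    if total_mib % 1024:  # not even a whole number of GiB
        return True
    gib = total_mib // 1024
    # Strip factors of two; plausible physical totals reduce to 1 or 3
    # (e.g. 8 GiB, 16 GiB, 12 GiB = 8 + 4, 24 GiB = 16 + 8).
    while gib % 2 == 0:
        gib //= 2
    return gib not in (1, 3)

print(ram_looks_virtual(16 * 1024))  # False: 16 GiB fits power-of-two DIMMs
print(ram_looks_virtual(5 * 1024))   # True: 5 GiB is an odd total for real DIMMs
print(ram_looks_virtual(3500))       # True: not even a whole number of GiB
```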


Good point. I guess that would be relatively easy to check for programmatically.

What else is there that could possibly indicate virtualization? Available instruction sets? Strange limitations of the given CPU? (e.g., the cpu presents itself as some Intel chip that's known to have 4 cores, but there's only 2 available)


This is a classic computer science question of whether the OS should be aware that it's being virtualized.


I know of malware long ago that would attempt to time how long CPU instructions take, based on the theory that a VM would be significantly slower, but more recently, especially with hardware-assisted virtualisation, the differences have become close to indistinguishable.



Thanks!


The video game anti-cheat world has been playing this cat and mouse game for a while. A few months back I saw an interesting article [0] describing some of the detection methods being used. It's fascinating to think about how much effort gets put into this.

[0] https://secret.club/2020/04/13/how-anti-cheats-detect-system...


- You run into copyright issues, because unless you duplicate a commercial UEFI (SMBIOS and all that good stuff) within the VM you'll be detected.

- You run into performance issues - virtualization's speed depends on paravirtualized drivers, which are readily identifiable (e.g. "virtio" disk controller). These can be emulated instead of paravirtualized but will cost performance. Windows drivers are signed so you can't just change names of things.

- Chipsets - some are often emulated by VMs. For example, if you are running a system in 2020 and are using an Intel 440BX chipset, it's probably a VM.


For what it's worth, I play MMOs with malware-grade anti-cheat on a VM all the time with a PCI-passthrough setup. Even the ones that are known for detecting VMs don't seem to detect me under QEMU. Go figure.


There are certain tools often used for malware testing such as Cuckoo that can help conceal some of the more obvious tells.


You can also just duct-tape a phone with your notes to your computer screen. The webcam can't see it, and it's right in your field of view when looking at the questions. Locked-down browsers are dumb; no matter what you do, there are a million ways to cheat easily.


I agree with you but this is trivially defeated with a mirror


If your VM is running on my computer, it is not locked down. I'm going to get in, and so will anyone else who really wants to.


Most of these applications simply look at your manufacturer if I'm remembering correctly (at the very most, it's one easily changeable property). You can generally get past these checks by setting any strings that reference VirtualBox or the likes to Dell.


The easiest one to check (from the browser) is the WebGL Renderer. I'm currently on a Mac running Windows 10 in a Parallels VM, and checking here[1] shows

  ANGLE (Parallels Display Adapter (WDDM) Direct3D11 vs_5_0 ps_5_0)
The "Parallels Display Adapter (WDDM)" seems to be coming from my Display Adapter name, which I could change. But I'm not sure if any other part of that string leaks my computer's status as a VM.

My non-standard screen resolution is another dead giveaway. There may be others as well, but those two seem to be the lowest hanging fruit. And many of these proctoring suites run as browser extensions, and I'm not sure what additional information extensions have access to.

But yea, they're not too hard to trick. You could get fancy, say by listening for mouse clicks and keyboard taps without any corresponding actions within the VM. But that sounds like far too much effort for a higher education software vendor.

[1] https://browserleaks.com/webgl
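The renderer-string check amounts to substring matching against known VM adapter names. A sketch of that logic (the marker list is illustrative, not any vendor's actual list; in a browser you would obtain the string via the WEBGL_debug_renderer_info extension's UNMASKED_RENDERER_WEBGL parameter):

```python
# Match a WebGL renderer string against known VM display adapter names.
# The marker list is illustrative; proctoring vendors' real lists are private.
VM_MARKERS = ("parallels", "vmware", "virtualbox", "qemu", "llvmpipe", "swiftshader")

def renderer_looks_virtual(renderer):
    r = renderer.lower()
    return any(m in r for m in VM_MARKERS)

print(renderer_looks_virtual(
    "ANGLE (Parallels Display Adapter (WDDM) Direct3D11 vs_5_0 ps_5_0)"))  # True
print(renderer_looks_virtual(
    "ANGLE (NVIDIA GeForce GTX 1080 Direct3D11 vs_5_0 ps_5_0)"))           # False
```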


Why not just run a live-CD OS?


Windows Sandbox doesn't fool exam monitoring software.


As bad as all of the proctoring software seems to be, I wish we had it. I'm a 1L at an American law school that's doing entirely unproctored exams. I didn't come to the state with any connections, nor am I friends with any upperclassmen. I already know I'm going to get fucking annihilated by people who are cheating with their 2L/3L buddies, because there's literally zero risk of getting caught. In light of how heavily your 1L grades affect your career prospects, it seems insane that they're going to basically sort people by their performance on unproctored take-home exams with no integrity whatsoever.


I am really sorry this is happening to you. The situation is deeply unfair, and it overwhelmingly hurts those who play fair and square.

If it is any consolation, keep in mind that both you and they have a long way to go before graduation. Exams at your level are meant to assess whether students are ready for advanced classes. Cheating on them does little more than remove that safeguard, which can very well backfire. You're doing the right thing if you focus on learning the basics.


My only computer has Linux installed on it. I wouldn't be able to take an exam that required a windows computer.

If you make exams online, you should change the format to one better suited to people having access to the internet. They did that with our exams, and it worked perfectly fine, sidestepping these issues entirely.


This is a good point.

My Organic Chemistry class had tests that were open-book, open-note, open-internet. Pretty much the only thing you couldn't do was talk to other students. Even if you tried to contact other students by the internet (which they didn't even try to police), there were several versions of each exam, so you were unlikely to find help.

They were still by far the hardest exams I took at University, because the questions were crafted such that you couldn't easily find the answers. Trying to look up a question in the book or online was more likely to waste test time than get you the answer.


That's the thing with ochem, if you don't know it right away it could take you 4 hours even with the book in front of you. 50% got me a B+ in that class with the curve. Humbling.


I had a teacher who solved this by saying all exams were open book, and then put in so many questions that if you didn't know the material you'd just run out of time.

Basically, you could look up only a very small number of things in the time allotted per question.


Some people just work more slowly than others. Doesn't seem like a great solution.


Yeah, I have slow handwriting. In one class I took a midterm and got marks for every question I answered, but ended up with a low C because I couldn’t physically write fast enough to finish the test. I thought that was ridiculous, went to the office of disability and went through the process to get an accommodation. I got to use a laptop on the next test and got an A instead.

I’m sure there are some exceptions, but anything other than the most generous time limits in academic courses doesn’t seem to improve the measurement of anything meaningful.


Some people are slower runners, that doesn't mean we should slow down everyone else on a ball field.


A person taking a test more slowly has no effect on the people who finish more quickly.


If I’m employing someone I don’t care if they look up a few things in google or ask for occasional help. If they take 2 days rather than 2 hours though, that’s a problem.


This is a topic a little too large for this thread, but there's definitely a philosophical and societal question of whether education is only, or even primarily, for career. So there are many reasons why analogizing to a job would be insufficient for making decisions in education.


They completed the task faster; why shouldn't that matter?

Perhaps it's unfair that they get no benefit for being better, no?


Most of my exams were like that in lower school and there was always enough time if you knew the material. It wasn’t open ended essay.

An exam that has too many people timing out just isn’t a well balanced exam.


And the real world penalizes people for that, for good reason. As such, I don't think there's anything wrong with time pressure on tests.


That's not what the test is supposed to measure.


Universities make accommodations for people who need to take more time.


You mean for people, like me, who know you can ask, feel no shame in doing so, and believe it won’t hurt their reputation? Oh, and can possibly pay for expensive third-party testing.

It would be better to just provide tons of padding to everyone.


It won't hurt your reputation; it's confidential information coordinated by your university's office for disability services (or some variant of that name). They even have private testing facilities if you are more comfortable in that environment rather than the lecture hall.


You are right, and it is a very good thing they do, but all that information is not something that at least a significant minority of students are going to know ahead of time. For those students, there is a perceived stigma in going to the administration for help, and that prevents people from asking for it, even if that perception is not reality.

Why make people ask for accommodations when we can just set up the infrastructure to make it unnecessary to begin with, at least in a wide variety of cases?


I remember one who actually said "I do not expect you to finish all the questions" and gave an exam with many questions, but to his surprise, a few students (very few - approximately 2 out of 300+) including me, managed to race through them all. The questions were extremely easy for those who knew the material, such that you'd spend more time writing the answer than thinking.

That sort of thing can't be done for all subject matter, however.


As someone who had slow "processing speed" growing up, that would suck.


If the students are being tested from home, you would still need to ensure that they weren't getting an expert friend to help them.


Makes me glad for things like the Stevens Honor System, which was how exams were administered at Stevens Institute of Technology.

Basically, all work submitted, whether tests or homework, require a pledge with a signature that the work is your own and you didn't cheat. Violations of the honor system -- cheating, plagiarism, etc. could be reported by students or faculty and would be investigated by the Honor Board with disciplinary sanctions for violators.

One upside to this was unproctored exams -- although if cheating became widespread, the Honor Board would threaten to bring proctors back! The idea in general was that students were expected to become professional engineers, who are expected to have integrity enough when working in the field not to deliver/sign off on stolen or slipshod work. Proctored exams keep everybody honest while the proctor is watching, but do not address the concerns of personal integrity when you might believe you can get away with cheating.


People are doing that at my institution. The average GPA increased by several points.

The thing is, if it's perceived that it's easier to cheat than to learn, people will do that, honor system or not.

I admit that there is a difference between an institutionalised honor system, designed with respect toward and in collaboration with students, and a last-minute arrangement by an unprepared organisation.

Students have been putting up with a lot of nonsense from universities this year. Their trust might have eroded. If universities are not perceived as honorable themselves, an honor system is not the answer here, unfortunately.


I think the problem is pretty human.

(or maybe I'm exposing my weaker moral character than the rest of you upstanding citizens).

I think the vast majority of people who have entered some program of learning, want to 'learn' - and will try to.

Where it gets tricky is when you find yourself backed into a corner. Say you know (for whatever reason) you're 90% going to fail - do you accept that? Or do break out the Hail Mary cheat - what have you got to lose?

What's strange is that the moment you leave academia, suddenly this behaviour is acceptable. From the "Fake it 'til you make it" in acclaimed books, to the "carefully positioned demo that wireframes over the chasms to focus on the bit that exists - but you fully intend to fix the rest if the budget appears"

There's some collective cognitive dissonance that imagines all are equal, that they go through the same trials of education (ignoring what people can afford, whether they have to work another job, whether they're commuting in from miles away, and a thousand other variables) - but that they can all be compared by their meritocratic grade at the end.

(before they enter the cess-pit of the working world).

I've reached the point where I've even lost my own point. Maybe it's just that the person who managed to cheat themselves to a pass against a barrage of technology, unveiled at the last moment, is a more rounded and useful person than the one that swallowed the book and scraped a pass.


>Say you know (for whatever reason) you're 90% going to fail - do you accept that? Or do break out the Hail Mary cheat - what have you got to lose?

This is almost the critical pass/fail decision, in my opinion, because in the real world, on actually difficult or unsolved problems, the correct decision is to escalate: clearly communicate that you have an issue, and either be directed to someone who can get you unstuck, or inform resource allocators that additional resources will be required to fulfill their request.

There is nothing more destructive to organized, value-producing work than to swear everything is a-OK and hunky-dory when it isn't.

I'm pretty sure I made it through some of my own coursework by the skin of my teeth because I could at least describe in enough detail where I was coming up short that the instructor had confidence that with more time and experience I'd get it.

At least, I think. I'm still a bit fuzzy on those last couple years of college from the sleep deprivation. I get bits and pieces of lucidity in between the recollections of "Oh God, I need sleep."

Theoretical Comp Sci is still "that one class I'm not sure if I really got anything good out of, besides intuition into what is and is not a candidate for being computable", and Linear Algebra was "I've killed more trees writing out the work for these problems than anyone should reasonably have to."


> "I've killed more trees writing out the work for these problems than anyone should reasonably have to."

Lol... so true. Some procedures take forever. Like finding the eigenvectors of a 3x3 matrix, for example. First the eigenvalues calculation, then for each eigenvalue there is a nullspace problem. Some trees will be hurt for sure!
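The two-step procedure described here can be sketched in a few lines of plain Python. The matrix and eigenpairs below are illustrative, chosen so everything works out in integers:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matvec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

A = [[2, 0, 0],
     [0, 3, 4],
     [0, 4, 9]]

# Candidate eigenvalues with a vector from each corresponding nullspace.
eigenpairs = {2: [1, 0, 0], 11: [0, 1, 2], 1: [0, -2, 1]}

# Step 1: each eigenvalue makes det(A - lambda*I) vanish
# (the roots of the characteristic polynomial).
for lam in eigenpairs:
    shifted = [[A[r][c] - (lam if r == c else 0) for c in range(3)]
               for r in range(3)]
    assert det3(shifted) == 0

# Step 2: each vector solves the nullspace problem (A - lambda*I) v = 0,
# which is the same as A v = lambda * v.
for lam, v in eigenpairs.items():
    assert matvec(A, v) == [lam * x for x in v]

print("all eigenpairs check out")
```

Step 1 is the eigenvalue calculation, step 2 is the per-eigenvalue nullspace problem; done by hand with row reduction, each step is a page of arithmetic, which is where the trees go.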


I still look back on those notebooks sometimes, and contemplate doing a refresher problem or two.

Then I realize I have no trees in my yard, and that I should correct that.


> Where it gets tricky is when you find yourself backed into a corner. Say you know (for whatever reason) you're 90% going to fail - do you accept that? Or do you break out the Hail Mary cheat - what have you got to lose?

I wonder if the extreme cost of university in the USA worsens this? If education was cheaper they might lament the lost time as opposed to lost time and a lot more money.


Maybe part of the solution is to get rid of GPAs and switch to a pass-fail system.


> The average GPA increased by several points.

The credibility of your school went down by the same amount.

If cheating is rampant, either the culture is broken or you are selecting students wrong.


> The credibility of your school went down by the same amount.

That was the point I was trying to make.

This is not a problem of a single school, or a single university. This is happening across the sector. The success of contract cheating websites, and the inability of universities worldwide to deal with them, attest to that.

> If cheating is rampant, either the culture is broken or you are selecting students wrong.

Sure. People are willing to risk jail to get their kids into top universities. A kid whose family borrowed money to send them study in a Western institution might not see honor as their top priority. It is not a student selection problem: the culture is broken all the way down.

Lest I come across as pessimistic, I am not. Academics may not be able to fix the structural problems, but we have more control over our own little part of the world than in the private sector. I, for one, don't rely on exams anymore for the time being, and have not seen a GPA increase in my course. An honor-based exam system is not something I rely on, however.


> A kid whose family borrowed money to send them study in a Western institution might not see honor as their top priority. It is not a student selection problem: the culture is broken all the way down.

I've heard of rampant cheating abroad, and it's part of the reason I believe some foreign degrees to be worthless. Truth is western institutions shouldn't admit from places where cheating is rampant. Else we're pretty much encouraging this broken culture.


Is there a school out there that doesn’t investigate and punish students who cheat or plagiarise? There are certainly differences in strictness, but I thought the “honor code” you describe was a basic policy at all institutes of higher education, regardless of the normal mode of assessment.


Yes, obviously they do. I was trying to emphasize that just because exams are unproctored at Stevens, doesn't mean that there are no systems in place for detecting and disciplining cheaters.

The honor system is also about expectations. Other universities expect you to cheat, and put extensive administrative controls in place to prevent that. The Stevens Honor System expects you to be honest, and to hold yourself and your peers accountable. So investigations and hearings for honor code violations are conducted by students, not staff. The administration doesn't get involved until it comes time for penalty enforcement.

So yeah, the end result is the same as any university: you can not expect it to bode well for your academic career if you cheat or plagiarize. But the structure is different, as is who ultimately bears responsibility for inquiring and investigating claims of academic dishonesty.


Online classes really emphasize how irrelevant closed book/notes/neighbor tests have always been. In cute terms, this environment is the strongest possible counterexample to "you won't always have a calculator with you!". All students are literally taking the test on a computer with internet access, right now. Their connectivity is considered so robust that a student may lose points or time if their internet drops during the test, and yet open internet access (or even just textbook and notes!) is somehow not an obvious resource to allow students during tests.

Professors frequently copy tests from other sources or previous semesters, and this obviously saves an enormous amount of effort, but ironically their copied work is what makes cheating on their tests so much more accessible. Professors make loud noises about cheating, and some are so hyperfixated on preventing cheating that they demand ridiculous requirements like proctoring software. They clearly aren't really prioritizing cheating prevention in their efforts, otherwise they'd write new tests more frequently (or at all!). Cheating prevention measures that don't require greater instructor effort are far more commonplace. When a professor literally copies a test, they shouldn't be surprised to see their students doing the same.

Tests that are suddenly ineffective with open internet access are already complete garbage for 3 reasons:

1. Some students will always cheat, and as long as teaching is remote, some students will always have internet access. Open access means that honesty is no longer discouraged.

2. Verifying granular details/vocab on the internet is completely normal in real work, even among experts.

3. Any test without resources like notes or internet relies far too heavily on memorization. This might be justifiable if there were evidence that standard test studying strategies (cramming) were effective at guaranteeing long term memorization. Anecdotally, I would fail the majority of my finals from previous semesters without time to (re)prepare. Students should be encouraged to understand the inner workings of the topic, in terms of analysis and potential states of that system, not to memorize dozens of details and vocabulary terms.

Within my Computer Science curriculum, the most effective and important content (between tests, projects, and lectures) has been programming projects by far. I'm dubious that tests are effective for any curriculum with relevant real world projects, though CS may be better than other subjects at making projects the primary measurement of understanding, and replacing the rigor that testing creates.


Yeah, if anything the first thing this makes me think is "how could I circumvent this?", on a test I otherwise would have just gone with and not even considered cheating. Maybe it's the hacker mind in me, or maybe these things actually induce more cheating than they prevent...


I think cheating in most science classes is still not easy. The killer is the short answer section. You would waste a lot of time searching for an answer, and if the professor made their own questions you will need to take some time to synthesize bits of information from here or there into an actual answer. Even referring to your notes or lecture materials will chew up valuable time. Sometimes even if you do know everything you are working till the bell.

But if you do have repeated questions, there is software that will basically grep the internet for you to catch plagiarism. All the students know that going in, so copy-and-pasting is a no-go. Sharing answers can land you in trouble if two students don't rephrase their answers well enough.

All of this can get you kicked out of college and potentially ruin your life with that mark, not to mention the money spent. Still, people do get caught cheating with those risks even before the pandemic.


Jesus Christ, as a student, I am absolutely positive that I would spend 100% of my effort fucking with this software. This is “The lessons being taught are not the ones being learned” in extremis. Do we really want our best and brightest to have such practical education in subversive techno-syndicatetry?! I would be proud as a parent, but I’m also a bad parent.

Edit: my plan is to home school in a lumber mill / cnc machine shop. The strong will survive.


Folks are doing take-home exam wrong.

Caltech had tremendous success with a culture of honesty and take-home exams.


I spent a few years at Caltech in an academic role (not as a student). I never understood the rationale behind the unsupervised take-home exams. Could you elaborate what is it that makes it a 'success', in your opinion?

The shortcomings that seem obvious to me are:

- Penalizes honesty

- Implied notion that Caltech students are "honourable" and honest. How is this achieved in practice?


The idea is that you create questions that are ungoogleable. Most of my upper-level CS courses had take-homes that were open-book, open-internet, and the professor basically said “good luck finding the answer to this problem online; you will not”.

This is harder for professors to do, and maybe fails in intro courses, but generally is quite effective.


One of my Ph 1b (freshman special relativity and electrostatics) quizzes guided you through "discovering" a new method of solving that type of problem. I don't remember the specific details, but it's one of the rare tests I've taken that I actually enjoyed!

Although most homeworks and exams aren't nearly that cool IME, they're virtually all about synthesis and fundamental understanding rather than rote memorization. This "first principles" approach to most everything is one of my favorite things about Caltech.


When Carl Sagan was writing "Contact" he attended a conference that was also attended by his friend Kip Thorne. Sagan knew that one of Thorne's pet peeves was science fiction that just hand-waved away the physics of FTL travel.

Sagan told Thorne he was writing a science fiction novel, needed FTL travel, and asked Thorne if he could suggest something that would be reasonable. Thorne agreed to look into it.

After the conference, Thorne spent a while working on it and came up with a wormhole approach and worked out the physics of it.

In addition to giving it to Sagan, Thorne also put it on the Ph 236 (General Relativity) final exam. He didn't tell the students on the exam that it might imply FTL travel. He just set up the conditions and had them work out the physics. Most of his students succeeded in that, but he was a little disappointed that none of them happened to notice that it implied FTL travel.

(I got the above from an unpublished book Thorne was working on in the early '80s. It was a collection of biographies of and interviews with physicists, astronomers, cosmologists, etc., written and conducted by Thorne. He had the draft chapters and the raw interview transcripts in a world-readable directory on the physics department's VAX, where they were widely read by the rest of us with accounts on that machine.)


Do you know if it's possible to get ahold of a copy?


As far as I know, he never finished it, so unless he still has and would give out a copy it is probably unobtainable.


This.

Else you might as well call the degree "rote" because that's what you measure.

> maybe fails in intro courses

If someone cheats their way through intro courses they are in for a bad surprise, since the upper-level courses assume mastery of the intro classes!


Interesting, thanks for the replies.

I'd still point out that the system is not robust against:

- Working in groups. (Bad if tests are supposed to assess individual performance.)

- Asking outsiders for help.

Also, your downplaying of "rote" learning feels misguided, no matter how advanced/abstract/high-level the domain in question is. Cue the Bruce Lee quote about 10,000 kicks..


> Working in groups.

Students hate carrying dead weight, so to speak. Sure, they might be tempted to swap favors (A helps B in one subject and the reverse in another). If the exam is properly constructed, however, you might be able to detect plagiarism.

Rote is absolutely necessary for a well rounded education, but I really feel the need to overcorrect in the opposite direction because testing for rote is the lazy approach. Over time a lot of testing has the tendency to shift to simply measuring rote.


Just imagine a world where solving problems in groups and asking for outside help is bad.

We are so used to exams being about doing stuff that literally nobody in real life would ever even consider.


I was aware of the implications, hence the disclaimer "(Bad if ...)"

However, I'd assume that doing either during a test would be against the honor code.


I had a professor who intentionally asked a question to which Wikipedia has the wrong answer... then called out and shamed everyone who used Wikipedia as a source.


That reminds me of when I took a distance-learning electronics technician course. You could Google (search) all you wanted, but seeing answers didn't help in knowing how to solve the equations.


I think the most important part of "proctoring" exams should be that it is individual work. Let's face it- you will almost always have access to the internet. The only thing that should matter is that the assigned student can solve the problem _in some manner._ This may include the internet, notes, or really anything.

The real problem is people paying others to take exams, or simply copying answers from others. I work as an independent tutor, and I get contacted far too much by people looking to have me take their exams for them.


That was certainly a problem in person too. At my undergrad, for larger exams you would just pass your ID down the row to the person in the aisle and a TA would collect them all then check everyone in. Wouldn't be hard to just hand your ringer your ID to pass on up. Not unusual to look nothing like your freshman year ID photo in person.


I was an undergrad at Rice where we had a very strong honor code. Unsupervised exams were common, and to the best of my knowledge cheating was very rare. If you were accused of cheating then you went before the Honor Council, which was run entirely by students.

I considered this an enormous success. It created an atmosphere where we were treated with respect, and where we were expected to treat others in the same way. If someone did cheat, then they would certainly be too ashamed to admit to it. It was a community where I was proud to spend four years.

Yes, I would say that Rice students are honorable -- without the scare quotes.

How is this achieved in practice? Wish I knew. This is the sort of thing that every university, and every organization, wants to achieve. Trust is very difficult to build if it's not already there; and it's much easier to erode.


> How is this achieved in practice?

You were accountable to each other, not to an authority. Authority gets you enough to do the bare minimum to not get fired or not get caught.

Being accountable to each other feels very very different. It's tough to explain in an HN post but anyone who has felt the difference knows what I meant. You care what your peers think. You care about that authority only enough for them to leave you alone.

Never let it be said that shame isn't an effective tool.


> Never let it be said that shame isn't an effective tool.

This is true, but at the same time I think shame as a motivator is often really unhealthy. People can do all kinds of terrible things to avoid shame: at the extremes, witness things like family annihilators or so-called “honor killings”.

I’ve been reading a book about Midway lately so this is on the brain, but a culture of shame is part of the complex that prevented the Japanese from reeling in insubordinate but “patriotic” junior officers, and from realistically assessing their own and their opponents’ capabilities. Meanwhile the modern process for evaluating air accidents (constantly applauded on HN) explicitly removes the question of blame and shame.

So, I’m not really saying anything about Rice university. I’m sure they don’t have to worry about honor killings on campus. This has just been on my mind lately. Just saying that without social release valves for shame you can get some really wild consequences.

PS for anyone interested in Midway, read Shattered Sword for everything you ever wanted to know about early-war Japanese flight deck operations.


> How is this achieved in practice? Wish I knew.

I think my take-home from this discussion is that the honor code can be made to work in the right circumstances that exist, at least, at Caltech, Rice, etc.

At the same time, I believe it is impossible to induce the requisite "cohesion" in other contexts such as 100% remote learning or high-stakes mass testing (entrance exams etc.), even if the student body stayed the same.


> high-stakes mass testing (entrance exams etc.)

That's because it's done wrong.

There are countries where they simply sort by descending scores on whatever standard test they came up with to admit. As if the test was perfect and results could be compared at an infinite decimal place (hint, stats and physics disagree with that!).

So the test basically becomes a measure of how good someone is at taking the test, and you start to see min-maxing behaviors. It's a contest to see who can pour the most time into maximizing their results. The incentives for cheating are simply so high. The downside of cheating is that you don't really learn, but since the only skill these tests teach you is taking the tests, you are really only robbing yourself of a skill that becomes useless 3 minutes after the test.

My advice is to throw these tests in the trash.


Fellow Rice graduate here. Looking back, being treated with trust and respect - after a decade of the opposite treatment at the hands of middle and high school - was a real turning point for me. We were trusted to make our own choices, and expected to reciprocate. In my experience, this leads to fewer cases of anti-social behavior. Which is more trustworthy - a company that simply expects tasks to be completed well and on-time, vs. a company that takes screenshots of everyone's machine and tracks bathroom breaks? I find it's overwhelmingly the former. Granted, the causal arrow may go either way.

The council does have teeth. My friend was sent in and found guilty of cheating on a test. They got kicked out in short order. The straight-A student they'd allegedly copied the answers from got a few months' suspension. I never got an explanation as to why the sentence differed.

No system is perfect.

A professor questioned me as a witness at the "student council" hearing. I expected a student to do the job. The questions were all worded to make my testimony skew towards a guilty verdict, calling my own integrity into question. This would have been expected had there been a mechanism for an organized defense, but I was never questioned by any "defense attorney". I left the questioning disturbed and frankly afraid for the safety of my own academic record, despite not having done anything wrong. Guilt by association. It felt like a witch hunt and a front for implicit faculty power. My friend still denies wrongdoing and had to rebuild their degree from scratch at another institution. They are still proud of the time they spent at Rice, even though they feel their degree was taken away unfairly. The only evidence I heard against my friend was that they had made the same mathematical error as the other implicated party.

My first take-away from the situation was that organizations based on trust work well, but it is critical to have robust mechanisms in place for dealing with moments where that trust gets called into question. It is human nature to take trust away faster than it is given, even if it turns out there was no wrongdoing.

My second take-away is that organizations based on trust tend to punish violations disproportionately, especially if the violation reflects badly upon the group.

My third take-away (only realized years after the fact) was that trust-based systems tend to create a tyranny of implicit rules that tend to exclude newcomers (like students) unless specifically addressed.

I'm still very much in favor of a high-trust environment, but it is far from a cure-all. I find it's a prerequisite to a robust system, but must be paired with counterbalances.

For all my beef with the Student Council, Rice was an overwhelmingly positive and nurturing experience. If I ever decide to get another degree, that's where I'll go, and I always recommend it to prospective students with an independent streak.


I had these at my university (not Caltech). These take home exams are much, much harder than anything they can give you in a timed in-person exam. It’s not even close.

This is simply because the set of problems a person can solve in 3hrs limits the scope of the questions that could be asked. In fact, my friends and I would make lists of things we were sure wouldn’t be on the exam due to time constraints.

I have to say that nobody cheated. This was partly due to my physics graduating class being 12 students.


Regarding "penalizing honesty," I think it actually rewards honesty quite handsomely—because of the Honor Code's importance in Caltech culture (i.e. one of the undergrad application essays is specifically about the Honor Code), everyone gets to benefit from take-home exams.

I guess what I'm trying to say is that the "implied notion that Caltech students are... honest" is, in my experience, largely true (possibly because Caltech tries to select people who think the Honor Code is good from the get-go).


My counterarguments, in good faith:

Clearly, any society collectively benefits from honesty, whereas an honest act is, at least in a strict game-theoretic view, a loss to the individual in the short term.

The question "Are you honest?" does not necessarily filter out dishonest people. (Incidentally, I always felt like I would not have been admitted to Caltech as a student.)


Insofar as universities produce pedigrees, it doesn’t really matter if Caltech students cheat. I suppose there’s greater justice in a student cheating on an exam and getting good grades while nobody knows - the student already won the lottery, they are getting the degree and a floor of a good lifestyle.

Yet they will be less likely to commit suicide, you know? Or some other act of desperation.

It’s a success because it lets students quietly cheat - if they need to - without reducing the graduation rate.


I'm currently taking online classes in a program that has had entirely online offerings for over a decade. Out of the 10+ classes I've taken so far, I'm currently enrolled in the only one that has required a lockdown browser (I haven't taken any tests yet, so I'm not sure what that will be like). Up until now, all the classes just had me sign something saying I didn't cheat.

Also, the tests were usually designed in such a way as to make cheating more difficult. In about half of the classes, the tests are open-note. They combat the effectiveness of cheating by making the test short enough that if you don't know the information, you won't be able to look up more than 25% of the questions before time runs out. I can see how that wouldn't work for all classes, and admittedly I haven't taken any math or science courses, so it may be different in that arena, but it seems like they have decided to take a different route than the schools mentioned in the article, and presumably haven't had any issues.


I'm an undergrad studying biochem, and my professors have also made their exams open-notes but timed. There are more short free-response questions than previous years. Also, the questions are focused on synthesizing the material rather than recalling minutiae. I think it's a good compromise, especially since you can't rely on Ctrl+F when you have to reason beyond the material in your own words.


Caltech had tremendous success with a culture of honesty and take-home exams.

"Can I use Feynman?"


We must do all things at scale and perfectly standardized! Only multiple choice tests that can be graded by a machine! Make sure the teachers never have a one on one conversation with students! It’s easy if we make sure they don’t have time and resources for that.

Imagine how much harder interviewing would be without having a one on one conversation. And if it had to be perfectly standardized nationwide. It would have a ton less bias, but you’d have a terrible signal and have to do all these crazy things to prevent cheating. In other words, these seem even worse than the state of tech interviewing.


I took a remote amateur radio exam twice. The coordinator (proctor) asked to share the whole screen, scan the room with another device, and that's it!

Just enough steps to have a degree of assurance that I'm not cheating, but not going overboard.


Examiners don't need to absolutely rule out a cheat, they just need to be sure that a cheating examinee would not be able to absolutely rule out cheat detection. Demonstrating absence of cheats isn't like some captcha that you just take again and again if you fail, as often as you like.


A solution for both this, and the problems with exams in general, is oral exams. Student has to answer questions from the teacher, in real time, one-on-one. Takes longer, but is much more authentic.


This tosses in a whole other layer of stress and meta-game managing the relationship with the teacher.

I do not do anywhere close to my best work with someone staring at my every move.

For undergrad, I'm much more in favor of deliverable-based approaches. Term projects, problem sets, etc. I think it's cruel to boil such a high-stakes decision to a single-point-of-failure event like a big test. I've heard too many stories where someone had a bad event happen before a test, or just slept poorly, or made a trivial mistake, or got sick, only to be told "too bad".

If cheating is a huge problem, locking down the final step isn't going to fix anything in the long run. It destroys all trust. We need to take a look at the motivations to cheat, even if these reasons are uncomfortable.


> I do not do anywhere close to my best work with someone staring at my every move.

And some people thrive with others around.

Eventually you'll find an excuse for why anything is unfair..."too much pressure!" "not enough time!"

And we wonder why, by and large, college degrees are becoming more worthless, and academic rigor sinks like a stone.

Maybe we should just accept that college should be hard, that tests should be challenging, and instead, remove the extra requirement for college for most people. Because if we just make it a joke it's pointless anyway.


I think in the US some class sizes are in the hundreds, so this does not seem scalable for their case.


If they're having trouble proctoring their exams, perhaps they're admitting too many students.


I think proctoring in-person was fine. The issue is only because it's remote.


In all my 300 student classes, we had recitation sections with maybe 20 students to a TA or teaching professor to go over the homework/take quizzes/serve as extended office hours beyond the time the professor and this TA already made available themselves every week. It wouldn't be much more work for the TA to give each student in their section a 1 on 1 hour meeting over the course of a week or two.


Some are, but for my college at least, very few. I think I took only one class with more than 100 people in it.


Tbh, I thought I'd like oral exams - but for whatever reason, I always bombed them. Even exams where I thought I did well, I'd completely bomb.

Because oral exams are quite time-limited, you really only have one shot at each sub-question. And how well you do is 100% subjective to the people judging you - and since this is in person, lots of things can influence their judgement. Hell, they may even have some latent bias.

And as others have mentioned, it brings a bunch of stress on the candidate.

I guess it's a very personal preference. I always did better on home exams, the stress-level was pretty much non-existent, compared to the others.


"Hey bro, I'll pay you $50 to let me listen while the teacher asks you questions"


Dealing with cheating is a complete drag, but I would still never use any of this nonsense if I were teaching a class. It's horribly intrusive and dedicated cheaters are still likely to figure out a workaround.

I'd much rather just assign take-home exams and final projects, or use a synchronous 1-hour exam that everyone takes at the same time.


At what point can we stop pretending most universities prepare you for the job market?

If you're a professor who cares about students using Google, give a rapid oral exam. If you're a college professor giving multiple-choice or other memorization-based formats, what value are you providing over free online courses? Frankly, if your institution "has" to use this sort of software, it's not worth attending.


While I agree with you that most universities don't generally prepare you for the job market (especially when it comes to CS), this is not an option for almost all students:

> if your institution "has" to use this sort of software, it's not worth attending

Because of the sheer number of applicants in many places, whether a person has a relevant college degree and how much they scored in college is often the first "filter" jobs apply while sorting candidates, and if you can't get past that, it's essentially impossible for you to get a job.

Students will stop going to college for courses that don't prepare them for the job market when the job market stops requiring them to do so, which definitely isn't the case right now.


Maybe we could split degrees up into courses and allow them to be taken fully independently. That would help a lot, because it would allow mixing and matching.


Proctored exams are about as smooth as having a proctologist look at you remotely through your webcam.


The older I get the more absurd the years of my life wasted in schooling become.

Being tested, by people who have mostly never even worked the job you're supposedly preparing for, using methods that don't represent the way you'll actually be working, seems a complete waste of time and money. Increasingly, the more talented individuals I work with never even went through any of that in the first place.

I can imagine a way that it would be valuable but really you'd have to start from scratch, there is too much cruft.


> The older I get the more absurd the years of my life wasted in schooling become

The more time I spend working in my chosen field, the more I appreciate the time I spent studying unrelated subjects.


I don't think school would have done much for me either way, but I really do wish I had put more effort into learning certain subjects I wrote off as useless as a teenager.


> one WLU professor wrote that anyone who wished to use foam noise-cancelling ear plugs must “in plain view of your webcam … place the ear plugs on your desk and use a hard object to hit each ear plug before putting it in your ear—if they are indeed just foam ear plugs they will not be harmed.”

Nitpick, but noise cancelling is by definition an active electronic process, so a purely foam ear plug can never be noise-cancelling. This is Vice misreporting what the professor said (the professor's words were "ear plugs for noise reduction").


Can we not just replace exams with meaningful coursework? We already have pretty decent software for testing uniqueness, etc.

This does of course put the burden back onto the teaching staff, so give them a nice bonus to mark the work. Given that schools can shut down their buildings, they can save on costs such as heating, air conditioning, cleaning, electricity, gas, and maintenance, so there should be a little extra budget left over for a marking bonus.


Here's a fun thing: in Chrome, whenever you grant a website permission to use your camera and mic, that website gets access to view all cameras and listen to all mics on your computer at the same time.


A random tidbit I've heard floating around: Examplify (yet another proctor) will think that you're running in a VM if you have WSL installed.


You sort of are. Under WSL2 the main OS is virtualized too: both its kernel and WSL2's run on top of Hyper-V.


That doesn't apply only to WSL2. As soon as you install the Hyper-V role (for WSL2 or to virtualize other VMs), your main OS becomes a VM with special hardware privileges (the Root Partition, in Hyper-V parlance), on top of the Hyper-V hypervisor kernel.
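As an illustration (my own sketch, not Examplify's actual logic), one cheap way software decides it is "in a VM" is the CPUID hypervisor-present flag, which Linux exposes in `/proc/cpuinfo`. Once Hyper-V is enabled, even the root-partition host reports that flag, so a naive check like this misfires on ordinary Windows installs with WSL2:

```python
def looks_like_vm(cpuinfo_text: str) -> bool:
    """Naive VM check: scan /proc/cpuinfo-style output for the
    'hypervisor' CPUID flag. With Hyper-V enabled (e.g. for WSL2),
    the host OS itself reports this flag, so this check cannot
    distinguish a guest VM from a Hyper-V root partition."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "hypervisor" in flags
    return False

# Hypothetical sample data for illustration:
hyperv_host = "processor : 0\nflags\t: fpu vme de pse msr hypervisor ssse3\n"
bare_metal  = "processor : 0\nflags\t: fpu vme de pse msr ssse3\n"

print(looks_like_vm(hyperv_host))  # True, even though this is the host
print(looks_like_vm(bare_metal))   # False
```

A proctoring tool that keys off a signal like this will reject WSL2 users exactly as described above.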


Hypothetical question:

What if the client side of this program were open-source, and all it did was record the screen and webcam and upload a copy to a server run by the school on premises?

Would this change your view on the software or make you more likely to agree, however begrudgingly, with the use of this kind of program?


With a lot more people taking courses remotely, I expect some universities will take the approach Amazon does and have "test centers" with proctors to make sure there are no shenanigans. That, and a lot more coursework / open book exams instead of traditional tests.


I know a few people in college now, and my advice to them is to run all this crap in a VM. That way it can be easily and harmlessly blown away without affecting the workability of their main learning tool (their computer).


I recently wrote a coding exam at uni where the software would randomly delete 'unnecessary' whitespace. We mostly code in Python.
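To see why that is catastrophic for Python in particular, here's a minimal sketch: leading whitespace is syntax, so a tool that strips it turns a valid answer into an IndentationError.

```python
valid = "def add(a, b):\n    return a + b\n"

# Simulate the exam tool's "cleanup": drop leading whitespace on every line.
mangled = "".join(line.lstrip() + "\n" for line in valid.splitlines())

compile(valid, "<exam>", "exec")  # compiles fine
try:
    compile(mangled, "<exam>", "exec")
except IndentationError as e:
    print("exam answer now broken:", e.msg)
```

In most languages reindenting code is merely ugly; in Python it changes (or destroys) the program's meaning.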


I've heard some good stories of people using VMs and OBS to circumvent all this insane stuff.

It's good. I remember learning about networking to pirate stuff back in the day, and that's much less of a thing now. Now kids have their own incredibly stupid invented problem they can easily work around if they learn some tech skills, and a lot of them are already familiar with OBS from streaming anyway.



