I learned Python probably over 15 years ago. Haven't used it very much and certainly not followed all the news.
I remember back then the design philosophy was said to be something like: there should be one obvious way to code something correctly. In contrast to Perl, whose philosophy was that human thoughts take weird paths and you should be able to put your thoughts into code directly. (Both characterizations from memory.)
Today I hear little about Perl. And with new syntax added to Python in every version, I start to wonder how far Python has drifted from that original characterization.
One way was never going to work, ever. No real substantive platform of any kind will ever have one way to do something. Maybe if you're a literalist there can sometimes be an "obvious way" to do something, with emphasis on "obvious," but beyond that, the moment you have more than one library in the ecosystem that does a similar thing, you can no longer do something "one way."
I'm not sure that really matters today, though. I'll admit I don't know much about Perl, and Perl's death vs. Python's life probably owes something to design choices, but a lot of it just comes down to the libraries you have access to with Python. As a scientist, Python is the only game in town; that has more to do with why I use it than design.
The "one way" philosophy worked really well for Python for 15-20 years.
It was really only the break-away adoption by the data science community (and other mass audiences) that started adding pressure to bloat the language. The adoption of 3rd party libraries like numpy, which are so critical to modern Python, also destroyed the community's "batteries included" philosophy.
Older Python felt a lot like using FreeBSD, where the whole system is cohesive and designed to work together, whereas modern Python feels a lot more like Linux – a bunch of disparate systems that try to work well together (in the best of times).
Both approaches have their strengths, and the language has certainly improved in some areas, but I prefer the older style.
In any case, it is fascinating to see how mainstream Python is now. Maybe this change in mindset was required for it to take over the world. We've come so far since Paul Graham cited the language as esoteric and its usage a high signal of competence (http://www.paulgraham.com/pypar.html).
"The adoption of 3rd party libraries like numpy, which are so critical to modern Python, also destroyed the community's "batteries included" philosophy."
You are right and your FreeBSD vs Linux analogy really hits the target.
Thanks to the advancement of hardware and the development of libraries like NumPy, Python has moved out of its scripting niche and become a global, general-purpose language.
Was that an error? If we look at the "batteries included" philosophy and the poor state of Python's package management after all these years...I'd say yes.
When I whined about Python's import system in another comment, I completely forgot to add some snark about the horrible, horrible state of everything to do with Python package management. Here again: many failed efforts over the years (setuptools, pip, ...), a terrifyingly convoluted packaging system, and bad, highly complex schemes (virtualenv) to heal problems that are self-inflicted.
Asking people to run `pip install` on anything but `poetry` is an anti-pattern to me. Ruby has had bundler for over a decade, and that pattern is working for them, as it is for npm/yarn. npm comes with Node, Cargo comes with Rust, so the debate is somewhat reduced there.
Since quality-of-life tools and other soft things are hard to prove, I'll tell my anecdata story. I sought out any package manager experience in Python and landed on Pipenv, which uses pip. It failed to solve the dependency tree. That led me to poetry, whose reason for existing exactly matched my experience. That was 2-3 years ago.
In legacy (don't break anything) mode, there's still no reason to not switch. I export `requirements.txt` with poetry just for pip legacy reasons and it works great. If I just update some scripts, I could avoid it. It's running all the time in CI, it's exercised quite a bit.
What's wrong with just using pip and requirements.txt? There's no dev section. In addition, bumping deps is not the same. I have a blog post explaining semver updates to a python dev:
my strong assertion: Python and Go missed it from the start. That's why it is so confusing. There's no other choice in Rust but Cargo. Rust devs are never confused on how to add a package, semver it. The answer is always Cargo. It's in the tutorial. It's in the book. It's in the culture.
I think I've heard that pip might support the pyproject spec, poetry already does. If you want scripts like npm, you can have that too with "taskipy". You don't have to.
It's really been a mistake for the Python core team to not figure out a better way of integration with numpy. I'm pretty sure back in the 2000s the numpy team was ready to do work on this.
Syntax integration would be great. It's such a hassle to learn two syntaxes and two data/object models to work with numpy -- as inevitably your code converts from Python objects <-> numpy objects at some point (singletons!), and it makes learners' lives much more difficult than needed.
As I recall, there were not one but three libraries providing similar functionality: numpy, Numeric, and numarray. In the mid-2000s I was using all of them for various reasons. It took quite a while before numpy emerged as the de facto standard.
I wonder if there was no adoption into the language at that time as a result of fractured communities and lack of consensus.
Perl's popularity waned because of the whole Perl 6 fiasco, which stole lots of energy from Perl 5, offered no migration path from Perl 5, and eventually evolved into a completely different language. As a result, Perl 5 development stalled, and developers went elsewhere. I personally stopped writing new Perl code around 2005-2006 after learning Python 2 in 2003-2004 (when I was doing a lot of Zope/Plone hacking).
I think Python 3 almost killed Python, too, and for the same reasons. The developers behind 2to3 saved the community, in my opinion, as did the folks who made the painful decision to publish a Python 2 end-of-life date sufficiently far enough into the future that people still using the old runtime had both a sense of urgency and plenty of time to complete their migrations. I finally made the leap to Python 3 around 2016. I'd like to claim that I jumped instead of being pushed, but the reality is that by then, people were developing new stuff I wanted badly on Python 3 exclusively.
I think "six" proved more useful than 2to3. This made it easier to support both Python 2 and Python 3 in the same code base, which (IMO) resulted in a smoother migration path.
You're absolutely right, and again, I think that reinforces how important a good migration path was to Python 3. Without that path, I think there was a real risk of Python dying off just like Perl 5.
It's sad because there was a lot I liked about Perl. I wish that community had better leadership because from my quick glance at the Wikipedia page, it looks like the Perl 7 project is making the same mistakes as Perl 6/Raku did. It's really too bad.
Perl's popularity was already in a nosedive by that point. I was programming in Perl much prior, was on numerous mailing lists, etc. The dropoff was startling.
Frankly, I believe the terrible state of affairs on CPAN was part of the problem, but it was initiated by the "there's more than one way to do it" mentality. If I wanted to do something where a library would be a good idea, I had to search through many candidates, each of which did sixty to eighty percent of whatever, then see which were long-abandoned, whether what they actually implemented matched what I needed, etc.
I view that section as a commentary on the concept of "orthogonality", arising from Algol 68. Quoting van Wijngaarden:
> The number of independent primitive concepts has been minimized in order that the language be easy to describe, to learn, and to implement. On the other hand, these concepts have been applied “orthogonally” in order to maximize the expressive power of the language while trying to avoid deleterious superfluities
Viewing through this lens, the Python Zen's "obvious way" refers to concepts in the language, and not the wider ecosystem which includes libraries.
> refers to concepts in the language, and not the wider ecosystem which includes libraries.
Sure, but the tests in the submission (not talking about the further discussion) are only about the language. And with so much new syntax added over the years, I doubt that there can be only one obvious way for many problems.
"I doubt" is just my feeling. I have not made any effort to demonstrate this with examples, and it wouldn't be easy for me because I am not very fluent with many of the newer features of the language.
async/await was actually added in 3.5 (although the @ test in since-3.5.py is shorter). The specific change used by since-3.7.py is that await is now allowed inside an f-string (inside an async function). The specific change used by since-3.11.py is that an async comprehension is now allowed inside a normal comprehension (inside an async function). These changes are, of course, not new primitives—just fixes to allow existing primitives to be combined in ways that should intuitively work.
All-in-all I think Python has done a remarkably good job at living up to that quote, with three caveats:
The first is that I think actually achieving a hard-mode form of that quote and maintaining it through the lifetime of the language is impossible in practice and theory. (Even "mostly" living up to the quote is an achievement.)
The second is that I am unsure about the obvious qualifier. Certainly I can recognize pythonic and non-pythonic constructs in python; but whether the difference is actually obvious or just a result of experience combining with community consensus I can't say.
The third is that I don't think the language is Pareto efficient between readability and most other variables. For instance, I've had code criticized as non-pythonic that was just as readable as the offered pythonic version, but had greater run-time efficiency. (To be fair, this is only an issue if you define the 'pythonic' way to be the 'one obvious way'.)
Your comment about libraries is still quite relevant. It's an ideological preference: ideally you would not have many libraries in the ecosystem doing similar things without one of them being the default, obvious choice. The "batteries included" philosophy is one way to ensure this, preferring a "blessed" way over everyone choosing a different library, which trades the advantage of each library being slightly better tailored to the need for the (significant!) cost of fragmentation, and thus readability and maintenance.
>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Note that this describes a Zen state (as in the Western colloquial sense) and is supposed to be intentionally self-contradicting to provoke thoughts. It’s sometimes disappointing to see a lot of people follow it as rules or strong suggestions, which completely misses the point.
If you think about it as "strong suggestions", then yeah it seems obvious that they're suggestions for how to approach ambiguous situations. However, that's also missing the point a bit.
It's often more useful to think about it as "thought exercises when trying to understand the design philosophy". Note that it deliberately contradicts itself.
Similarly, things like "flat is better than nested" and "sparse is better than dense" are completely false a lot of the time. There are plenty of cases where the opposite is true for practical reasons like memory use or access patterns. The point is not necessarily to make those as statements or suggestions. The point is to give you something to think about. They're more like koans than suggestions.
Think about why and where they're not true as much as why and where they are. And don't take it too seriously, either way! The whole thing of "import this" is tongue in cheek, after all.
This hasn't been true for a long time. These days it feels like mentioning the original founding principles of Animal Farm after it went totalitarian.
It is from a different age, when intelligent people like Tim Peters still had influence and there was more of an academic atmosphere. These days it is about warming chairs, getting power, eliminating your enemies and speaking at conferences.
Namespaces are great but Python’s module system being coupled to file system structure kinda sucks. You either have overly-granular modules with manageable files, or you end up with multi-thousand line files that present a reasonable module organization. Or the totally unreasonable granular module structure for reasonable files with a facade set of modules that just reexport things.
You can split your big module into modules inside a sub-package and import stuff in the __init__.py file from the submodules to present a nice interface to the library user.
e.g. instead of:

    namespace.py
        class C1: ...
        class C2: ...

do:

    namespace/__init__.py
        from .c1 import C1
        from .c2 import C2

    namespace/c1.py
        class C1: ...

    namespace/c2.py
        class C2: ...
What's wrong with facade modules? That seems to be a pretty common pattern, and it allows you to decouple your filesystem structure from module structure while still giving someone reading the source a breadcrumb to follow.
Quite apart from implementation challenges (eg, needing to parse every file in package upfront to know what's in there), how else would you see this working? Are there other interpreted languages that manage this indirection in a more elegant way than having some central file that supplies the mapping information?
There’s nothing wrong with it, per se, but it is more to maintain, and it’s a confusing indirection to downstream users that need to pry into your code to debug something.
Ruby has its own problems, but I like that the modules are independent from the file that contains them. You can “reopen” a module and declare new classes or constants or whatever. You have freedom (and responsibility) to organize your files in a way that maps to module namespaces. The drawback is the `require “my_file”` doesn’t give you any hint about what you’re importing.
The module system linked to file structure worked for Perl pretty well. Perl had this crazy idea that different authors should publish packages/modules with the same namespace, e.g.
    IO.pm                     <-- author: Larry Wall
    IO/File.pm                <-- author: Me
    IO/Socket.pm              <-- author: You
    IO/Socket/INET.pm         <-- author: Larry Wall
    IO/Socket/INET/Daemon.pm  <-- author: Me
Simpler and encourages code reuse. Of course there is duplicate code in CPAN, but you really have to go out of your way to avoid an existing module that can do what you need.
NumPy also imports all of its subpackages, allowing:
    >>> import numpy
    >>> P = numpy.polynomial.polynomial.Polynomial

This results in a high startup cost if you want just one NumPy feature.
I've always thought this ran counter to "Explicit is better than implicit".
The NumPy developers think "practicality beats purity" is more important, and their main use-case is long-running programs where startup cost doesn't matter, which I interpret as meaning their special case is special enough to break the rules.
For one example, I had a command-line tool which needed to compute something related to hypergeometric distribution. (It's been a few years; I forget the details.)
This is available in scipy, which needs numpy. Most of my program's run-time was spent importing numpy. (The following timings are best-of-3.)
While 0.2 seconds doesn't seem like much to people used to spending hours developing a notebook, or running some large matrix computation, I could make my program 8x faster by writing the dozen or so lines I needed to evaluate that function myself.
Numpy is not about making short-lived programs fast.
In any case, this specific choice of importing all submodules is not to cache things "for obvious [performance] reasons", because there is no machine-level performance improvement.
Instead, it's to make the API easier/faster to use. When you're in a notebook and you need numpy.foo.bar() you can just use it, and not have to go up and use an import statement first.
My observation, though, is that most other Python packages don't import all their subpackages.
"import urllib" does not also import urllib.parse, so you can't do:
    import urllib
    urllib.parse.quote("A&W")
but instead must explicitly import urllib.parse.
scikit-learn supports essentially the same use cases as NumPy and it doesn't import all its subpackages:
    >>> import sklearn
    >>> sklearn.linear_model
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: module 'sklearn' has no attribute 'linear_model'
    >>> import sklearn.linear_model
    >>> sklearn.linear_model
    <module 'sklearn.linear_model' from '[...]sklearn/linear_model/__init__.py'>
Thus leading to my earlier comment:
> The NumPy developers think "practicality beats purity" is more important, and their main use-case is long-running programs where startup cost doesn't matter, which I interpret as meaning their special case is special enough to break the rules.
A not-so-well-known fact: the Zen of Python refers to internal CPython design, not to external application of the language itself. So any new syntax is fine as long as one can wrap their head around the implementation.
No, it was prompted by a post to comp.lang.python asking for guidance for new Python users. There's a link to that post in the PEP references (https://peps.python.org/pep-0020/)
Some things that should be easy are ridiculously hard in Python. Regexes, for one, have a horrible API story. Another is the import system, which I view as a totally broken effort to heal self-inflicted complexity with more complexity. Here is the code I came up with when presented with the task of importing a module when its name is given:
Details may change any time without notice. The complexity here is a consequence of Python's design decision to treat module names as identifiers, not as quoted strings. The import system is full of such poor choices (dot/relative imports, `__init__.py`, `*.pyc` / `*.pyo` files littering one's source directories, and so on)
Agree on regex. I started in Perl many years ago and loved how integrated regexes were. Was appalled that what was a simple one-liner in Perl turned into multiple lines in Python for every regex.
Making a powerful tool easy to use means it will be used more.
It would be nice if that flaw could be fixed, as I like coding in python in general
    count = 0
    t1 = time.time()
    for line in open("nucleosome.pdb"):
        if line =~ m/(ATOM |HETATM)/:
            count += 1
    print count, "atoms in", time.time()-t1, "seconds"
It used its own PLY-based parser to generate the Python AST.
I realize now how much of a hack it is. The portion 'm/(ATOM |HETATM)/' could be part of valid Python expression, as:
    >>> ATOM = HETATM = 1
    >>> m = 2
    >>> m/(ATOM |HETATM)/1
    2.0
so either m// is enabled only after =~, or there has to be some way to recognize the lexically correct match term -- something more powerful than the "m/[^/]*/[a-z]*" used here.
I have to say I really like Python's module system. Modules as objects, scoped import by default and filesystem-backed module resolution (+sys.path) are all good choices in my opinion. Cosmetic warts like __pycache__ are unfortunate, but not deal-breakers.
My main memory of perl (I used it for CGI and local CLI tooling back in the day) was that it was usually very terse, to the point of being a write-only language. I learned an important lesson with it: the more clever you feel writing code, the more you are likely to hate maintaining it after a few months away!
There was a long time period (IMHO good few years in early 2000s) where Python looked better on paper but Perl was still better for production.
All the Perl libraries for doing certain things (db access, web stuff, process management) were "actually battle tested" and worked well even if they were utterly unreadable. Python looked a lot nicer but when you tried to do any real work with it everything seemed to leak memory or give you continuous paper cuts - and it was a lot slower.
I made the switch to Python eventually but it took a while.
That tallies with when I was using Perl (late 90s / early 00s). Python was very much on the up in that time but I didn't feel compelled to switch, then DayJob and personal projects took me away from such thing completely (DayJob was Windows/VB6/IIS/ASP based, home web stuff went PHP for a while and admin stuff to Bash).
The problem for me with logging is that it's slow for production code:
Python 2.7.15 Linux (logging is 60x slower):

    [root@hbtest ~]# py -m timeit -s 'import logging' "logging.info('something happened')"
    1000000 loops, best of 3: 1.31 usec per loop
    [root@hbtest ~]# py -m timeit -s 'import sys; debug = False' "if debug: print >>sys.stderr, 'something happened'"
    10000000 loops, best of 3: 0.0216 usec per loop

Python 2.7.15 OSX (logging is 71x slower):

    $ py -m timeit -s 'import logging' "logging.info('something happened')"
    1000000 loops, best of 3: 0.925 usec per loop
    [jim@mbp hbrel]$ py -m timeit -s 'import sys; debug = False' "if debug: print >>sys.stderr, 'something happened'"
    100000000 loops, best of 3: 0.013 usec per loop

Python 3.6.8 Linux (logging is 99x slower):

    [root@hbtest ~]# python3 -m timeit -s 'import logging' "logging.info('something happened')"
    1000000 loops, best of 3: 1.62 usec per loop
    [root@hbtest ~]# python3 -m timeit -s 'import sys; debug = False' "if debug: print >>sys.stderr, 'something happened'"
    100000000 loops, best of 3: 0.0163 usec per loop
And, as usual, Python 3 is 25% slower than Python 2 for the logging case (but 2x faster for the noop if debug case). I hope the recent efforts to increase Python 3 performance are successful.
At some point those two philosophies cross over, that's where you end up with bash or even php, there are several right ways, and several wrong ways, and all of them work.
I don't think Python has ever lived up to that tenet. If it had, there wouldn't be a million pull requests with comments to the effect of "the pythonic way to do this is..." or "pipXYZ says you should/shouldn't..."
Development is no longer about the language or elegance. It is about the old boys and their subordinates getting and maintaining positions at large corporations.
Which requires politics, churn, adding non-essential features and covering up mistakes by the old boys.
If the source file contains syntax that is invalid on the user's Python version, then your code that's supposed to check the version and throw a helpful exception will never run. Instead the Python interpreter will throw a syntax error the first time it tries to parse something it doesn't recognize, and that syntax error may not make it obvious what the problem is.
If you're trying to make a version-robust module that won't attempt to compile a syntax error, top-level dispatch through __init__.py is an excellent approach.
If somebody else is running the code, it likely is packaged. Every Python packaging tool I know lets you specify the version of Python to use, so IMHO that's what should be used.
>If somebody else is running the code, it likely is packaged.
Partially because people avoid Python for things like installers and simple sharable single file scripts...because of this. There are ways around it in many other scripting languages. And a fairly good number of popular "single file scripts" that get shared, like mysqltuner (perl), adminer (php), etc.
I was about to ask, what is the goal of this? Is this to prevent users from removing version guard and force-running the code with older version anyway?
If it's a single file script, checking sys.version only works if all your code is syntactically correct for that version. It's a shortcoming of python's approach of parsing the whole file before handing control over to your code. Basically, the interpreter bombs before it ever runs your "sys.version" code.
It's actually a fairly practical approach, where you are controlling where the code is going to bomb out, and including a helpful comment that will show in the stack trace / exception.
With Perl, for example, you can have a BEGIN block to check for versions, and it works as expected, even if there's code in the main body that's too new for whatever version of Perl is running. Python doesn't have that ability unless your code is split across more than one file and using conditional imports. Some BEGIN block type functionality for python seems like it would have been useful to me.
I agree with your overall point, but a little nitpick is that your comparison would fail for future versions as well as past versions. Thanks to dictionary ordering on tuples, you can just test inequality directly:
As already pointed out, whole file is parsed before it's actually run. If you have any unsupported syntax, you will get a SyntaxError without your message being printed at all.
What would be your suggestion to do this for single file scripts?
Turn them into two-file scripts, or have a readme, or just let it fail because supporting every single version of Python is insane. This whole thing is just a classic case of the XY Problem[1]: a solution looking for a problem.
> Turn them into two-file scripts, or have a readme, or just let it fail because supporting every single version of Python is insane.
The point is to let it fail, but clearly explain why it fails. This is obviously not meant for software distributed to users who are proficient in Python.
I guess it's useful for the kind of scripts you write and post in pastebin, or share on a web forum or whatever. Think a wrapper script that downloads and patches Wine so that it can run a certain exe or whatever. These users have never heard of pip, and you don't want to publish this stuff to pypi anyway. You just want to post it in a code block on forums.obscureindiegame.com so the three Linux users there can join the effin' game already!
Or so I would guess. Never felt a need for something similar myself.
But a single-file script is still more convenient than two files; e.g. it is easier to scp or copy one file than two. Sometimes, even if people already know they want to solve problem X, it can still make sense to solve problem Y, because the solution to Y has some benefit in some cases.
What you just linked doesn't seem to be about solutions looking for problems. Rather it seems to be about people looking for solutions but being bad at expressing what the problems are.
Speaking of this kind of check... recently I have been trying to set up some scripts that could run on a Mac with "nothing installed". I had been opting for Python (even using `urllib` to avoid depending on `requests`) and... a good chunk of people ended up hitting weird errors with SSL certs.
I know there are probably other answers (I guess writing binaries in Go?), but given how nice Python is for glue code when it works, I do wish for something like `python --config-file pyproject.toml my-script.py` that would figure out requirements, install those "somewhere", ensure the right python version (installing if necessary) and then run the script in a consistent environment (PYTHONPATH being reset, for example).
I know there's a lot of tools for packaging Python code into their own binaries, but I think we could probably get pretty far with some built-in "install and run" tooling that doesn't start with "decide where to put a virtualenv"
>trying to set up some scripts that could run on Mac with "nothing installed"
Perl still ships with macOS 12.3, and Perl would be a decent language to bootstrap a working Python with working certificates, etc.
Though Apple has said all scripting languages will be removed from the default install eventually.
"Scripting language runtimes such as Python, Ruby, and Perl are included in macOS for compatibility with legacy software. Future versions of macOS won’t include scripting language runtimes by default, and might require you to install additional packages. If your software depends on scripting languages, it’s recommended that you bundle the runtime within the app."
>I guess writing binaries in Go?
Then you get the fun of signing, or README language to guide the user around adding an exception. They sure are making bootstrapping hard. I wish they would pick at least one stable scripting environment to bundle...lua maybe?
Does perl have a built in HTTP client? The author wanted to hit a URL, and if I recall perl didn't support that natively (I last used it professionally in 5.16 though, so I may be out of date).
If the state of the art in Perl is still "shell out to curl or whatnot" then that is really not portable or stable compared to Python's included urllib--SSL errors or not.
Not sure if this meets your goals, but have you looked at PyInstaller[0]? It was designed to help with distributing Python code with external dependencies included. (My assumption from your comment is that you're targeting "nothing installed" to keep end users from having to install packages on their machines.)
My use case is a... bit different. The idea is I want to be able to distribute scripts, but for those scripts to still be scripts (so that the users can fix issues and send PRs back to the project). But I want the scripts to be run in a more or less fixed environment. Or at the very least a "wipe everything and start over" solution that doesn't mess with other environments.
It's OK for me if there is a single binary that is "guaranteed" to work that also needs to be installed (for example "you have to install nix/tox"), similar to people having to install a JVM (which mostly works relative to Python).
I mean, clearly you could do this with a shell script, but I believe tcl is installed by default on macOS, so there's that too. Both wish and tclsh are available, so you could even have a UI (as long as you don't care about accessibility).
I mean, shell scripts are just annoying? You don't even have basics like dictionaries, argparse and friends don't exist, some stray whitespace can throw everything off (yes, I know about shellcheck). Every common action is some different tool that you might or might not use correctly (let's not even get into implementation differences for stuff like `find` between Mac and Linux).
And yeah you can use curl for web requests. But when your problem then becomes "make a web request, take a chunk out of that" you're looking at curl _and_ jq. And clean error handling...
I know how to write shell scripts, but I think I hit their abstraction ceilings way too early, especially if I'm looking to write something that is maintainable by other people on the team. I know people talk about how every machine will have some shell, but when you think about it a bit more all the "nice" shell utils are not installed by default.
All the more power to people who can write clean shell scripts. I try to, but am not good at it. But I know how to write Python (or JS, or Java, or C if someone demanded I do it...)
Just being a little pedantic here: nop usually means no operation is actually performed, not just "has no side effects". By that definition, this (6502 asm) is also a nop:
bcc @dummy
@dummy:
With the special case of an extra cycle, just on the off chance the bcc sits at the end of the current page. It indeed leaves no side effects, but I don't know if people would seriously call it a nop.
Even "true" noop instructions, the ones generated by writing the corresponding opcode in assembly, often decode to a "no side effect" instruction: something like exchanging a register with itself, or adding 0 to a register. Modern high-performance CPU designs will probably optimize those instructions away and not perform the useless no-side-effect operation, but on simpler CPUs the extra logic for detecting them and doing nothing might not be worth it. So those noops will actually execute the no-side-effect operation that matches the decoded instruction.
Well, if you take that into consideration, then no nop instruction (of non-zero size) can exist in asm, because it still requires advancing EIP (and decoding the instruction).
After looking at their 3.11 example, I wonder... Why can't we put a function definition in the same line of an if statement?
if True: def f(x): return x
Is this just because Python doesn't want to encourage extreme terseness? Or is it also because there would be problems or grammar ambiguities were this to be allowed?
"Only the [indented] form of a suite can contain nested compound statements; the following is illegal, mostly because it wouldn’t be clear to which if clause a following else clause would belong."
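You can watch the restriction fire directly, using the exact line from the question above; `compile()` just asks the parser, so no code runs:

```python
# A compound statement (def) cannot appear in a same-line suite,
# so this source is rejected by the parser with a SyntaxError.
src = "if True: def f(x): return x"
try:
    compile(src, "<string>", "exec")
except SyntaxError as e:
    print("rejected:", e.msg)
```

So it's not (only) a style decision to discourage terseness: the grammar itself forbids nested compound statements in a one-line suite.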
This reminds me of how people used to use machine code which exposed and tested for slight differences in processors to detect x86 processor versions before CPUID:
In my experience, the hardest part is figuring out what is the minimum version of python your script requires. I usually just put down the version I am using in the readme, but if it is enforced, it probably should be more accurate.
But that's not an error message; it's just a copy of the offending line in the file!
$ python2
Python 2.7.17 (default, Feb 27 2021, 15:10:58)
[GCC 7.5.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 0_0 # Python >= 3.6 is required
File "<stdin>", line 1
0_0 # Python >= 3.6 is required
That's the point. Since it can't actually parse the file, this is as good as it gets. It at least gets something on the screen that says "Python >= 3.6 is required" which is better than nothing.
Python 3.5 doesn't know the syntax is valid in 3.6; it only knows it's invalid Python 3.5 syntax, and then throws the exception. We see the comment in the process.
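As a minimal sketch, the whole trick fits in one line: underscore digit separators in numeric literals (PEP 515) only parse on 3.6+, so older interpreters die on the literal and echo the trailing comment in the error output.

```python
# 0_0 is a SyntaxError before Python 3.6 (PEP 515 underscore
# separators), so this line doubles as a version guard whose
# comment shows up in the pre-3.6 error message.
0_0  # Python >= 3.6 is required

import sys
print("running on Python %d.%d" % sys.version_info[:2])
```

On 3.6+ the literal just evaluates to 0 and execution continues normally.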
I'd be interested to know regardless - the @0 in the Python 3.9 example has me stumped. Maybe it's a decorator but I don't think so. It doesn't look like matrix multiplication either.
I think this specific change is making Python simpler. It was a weird exception that previously only a limited subset of all possible expressions could be used as a decorator. Now the rule is just "you can decorate a function or class with any expression".
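To the question above: `@0` is indeed a decorator. With PEP 614 (Python 3.9) any expression may follow `@`, including a bare literal, which is valid syntax even though calling `0(f)` would fail at runtime. A quick sketch (the `wrappers` dict is a made-up illustration):

```python
# "@0" parses on 3.9+ because any expression may follow "@";
# it would only blow up later, when 0(f) is evaluated.
compile("@0\ndef f(): pass\n", "<string>", "exec")  # no SyntaxError

# A more useful arbitrary-expression decorator: a dict lookup,
# which the pre-3.9 grammar rejected.
wrappers = {"twice": lambda f: (lambda x: f(f(x)))}

@wrappers["twice"]
def inc(x):
    return x + 1

print(inc(0))  # 2: the original inc applied twice
```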
You're right, this is simpler and more regular, but I don't feel that's enough of an improvement to warrant the changes required to implement it.
(The CPython interpreter is not the only code that relies on understanding Python code. For example, I used to use a set of tools called "Snakefood", but they aren't currently maintained. So as Python keeps changing, they become less and less useful.)
In the PEP itself, under "How To Teach This" it even says, "the average Python programmer is likely unaware that the current restriction even exists."
And the "good example of code ... that would become more readable, idiomatic, and maintainable if the existing restrictions were relaxed." is not a good example. IMO (as I hinted at above) it's an example of code written by a programmer who is less well-practiced than one might hope.
The "Identity function hack" is part-way to the connect_button() function but it stops short at a goofy, ugly strawman function (that shadows '_'!? Why?) rather than factoring out the common parts of the expressions. Rookie mistake, eh?
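For what it's worth, here is a sketch of the factoring being gestured at: a small decorator factory that was already legal before PEP 614. The `Button` class is a made-up stand-in for the PEP's Qt button example, and `on_click` is a hypothetical name.

```python
# Stand-in for a GUI button; 'connect' registers a click handler.
class Button:
    def __init__(self):
        self.handlers = []

    def connect(self, fn):
        self.handlers.append(fn)
        return fn

def on_click(button):
    # A plain call expression -- legal as a decorator pre-3.9.
    return button.connect

buttons = [Button() for _ in range(3)]

@on_click(buttons[0])
def spam():
    return "spam!"

print(buttons[0].handlers[0]())  # the registered handler runs
```

No identity hack, no eval, and no shadowed `_`; the common sub-expression is simply factored into a named helper.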
(The eval hack is clever but stupid.)
So to me this seems like a poorly justified change that solves a non-problem.
Conversely, it gets easier to write outside tools that interact with Python code by removing corner cases and simplifying the grammar.
> In the PEP itself, under "How To Teach This" it even says, "the average Python programmer is likely unaware that the current restriction even exists."
Yup, and that's terrible. It means people are likely to trip over it.
> Conversely, it gets easier to write outside tools that interact with Python code by removing corner cases and simplifying the grammar.
That sounds like a false economy to me: a one-time small savings of developer effort (implementing the old grammar rule for decorators wasn't onerous, was it?) to permit foolish and unnecessary intricacies among the laity.
> It means people are likely to trip over it.
Only if they are attempting to do something foolish. (Who puts whole expressions in a decorator!?)
- - - -
Tell you what: you go find examples where people have tripped over it in the past, and I'll code golf them to see if I can come up with something idiomatic and simpler. Does that sound like fun to you? ('Cause it does to me. Candy!)
Because, as mentioned several times in this discussion, if you give a python3 file to a python2 interpreter, you will get a SyntaxError before even the first line of the script runs. This is a way of ensuring the correct python version is used and of displaying a clear error message next to it.
So put the check at the top of the top level __init__.py. Move any invalid syntax in that file to an import. Is there any reason that would be a worse idea?
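A minimal sketch of that arrangement (`mypkg` and `core` are hypothetical names): the top-level `__init__.py` sticks to old, version-agnostic syntax and does the check at import time, before anything touches modules that use newer syntax.

```python
# mypkg/__init__.py -- only version-agnostic syntax lives here.
import sys

if sys.version_info < (3, 6):
    raise ImportError("mypkg requires Python >= 3.6, found %d.%d"
                      % sys.version_info[:2])

# Anything using 3.6+ syntax is deferred to a submodule:
# from .core import main
```

Since old interpreters can still parse this file, they reach the check and report a readable error instead of a bare SyntaxError.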
Brittle in which way? I wouldn't expect a maintenance release of an old Python version to add support for syntax from a newer version, or a new version to remove current syntax (if that happens, we are on a new major version and failing is entirely acceptable).
What if someone makes a change to the pattern because they are maintaining the code but don't understand it? What if a future version of Python stops accepting that as valid syntax?
If python 4 comes along and stops supporting that syntax then it's fairly likely that the rest of the code will be broken too. That doesn't sound like a huge downside to me.
I wonder if that's a great idea.
If for any reason the syntax gets backported to older versions, it would be risky to rely on it. It seems safer to check the Python version normally.